| Column | Feature type | Range / distinct values |
|---|---|---|
| sha | null | |
| last_modified | null | |
| library_name | stringclasses | 154 values |
| text | stringlengths | 1-900k |
| metadata | stringlengths | 2-348k |
| pipeline_tag | stringclasses | 45 values |
| id | stringlengths | 5-122 |
| tags | sequencelengths | 1-1.84k |
| created_at | stringlengths | 25-25 |
| arxiv | sequencelengths | 0-201 |
| languages | sequencelengths | 0-1.83k |
| tags_str | stringlengths | 17-9.34k |
| text_str | stringlengths | 0-389k |
| text_lists | sequencelengths | 0-722 |
| processed_texts | sequencelengths | 1-723 |
| tokens_length | sequencelengths | 1-723 |
| input_texts | sequencelengths | 1-61 |
| embeddings | sequencelengths | 768-768 |
null | null | transformers | # MultiBERTs Seed 1 Checkpoint 100k (uncased)
Seed 1 intermediate checkpoint (100k steps) of the MultiBERTs (pretrained BERT) model, trained on English using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multiberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
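As a minimal sketch of the fine-tuning route, assuming the Hub identifier listed in this card's metadata (`MultiBertGunjanPatrick/multiberts-seed-1-100k`), the checkpoint can be loaded with a freshly initialized classification head; the head itself still has to be trained on your own labeled data:

```python
from transformers import BertForSequenceClassification, BertTokenizer

# Hypothetical two-class setup; the classification head is newly initialized
# and must be fine-tuned on a labeled downstream dataset before use.
model = BertForSequenceClassification.from_pretrained(
    "MultiBertGunjanPatrick/multiberts-seed-1-100k", num_labels=2
)
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-100k")
```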
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-100k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-100k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
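If a single vector per input is needed, one common convention (not something prescribed by the MultiBERTs release itself) is to mean-pool the last hidden states over non-padding tokens, as in this sketch:

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-100k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-100k")
encoded_input = tokenizer("Replace me by any text you'd like.", return_tensors="pt")

# Mean-pool the token embeddings, ignoring padding positions, to obtain a single
# 768-dimensional vector per input sentence.
with torch.no_grad():
    output = model(**encoded_input)
mask = encoded_input["attention_mask"].unsqueeze(-1).float()
sentence_embedding = (output.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_embedding.shape)  # torch.Size([1, 768])
```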
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
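For a concrete probe along the lines of that snippet, and assuming the released checkpoint includes the masked-language-modeling head from pretraining, a fill-mask comparison can be run directly on this model:

```python
from transformers import pipeline

# Assumes the released weights contain the masked-language-modeling head; if only
# the base encoder is present, the head would be randomly initialized and the
# predictions would not be meaningful.
unmasker = pipeline("fill-mask", model="MultiBertGunjanPatrick/multiberts-seed-1-100k")
print(unmasker("The man worked as a [MASK]."))
print(unmasker("The woman worked as a [MASK]."))
```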
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in
the other cases, sentence B is another random sentence from the corpus. Note that what is considered a sentence here is a
consecutive span of text, usually longer than a single sentence. The only constraint is that the two
"sentences" have a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are as follows (a short sketch is given after the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
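The sketch below re-implements this 80/10/10 scheme in plain Python for illustration only; it is not the original preprocessing code from the MultiBERTs repository, and the tiny placeholder vocabulary is made up:

```python
import random

def mask_tokens(tokens, mask_prob=0.15, vocab=None):
    """Illustrative sketch of the BERT-style 80/10/10 masking scheme."""
    vocab = vocab or ["the", "a", "dog", "cat", "runs"]  # placeholder vocabulary
    output = []
    for token in tokens:
        if random.random() < mask_prob:      # 15% of tokens are selected
            roll = random.random()
            if roll < 0.8:                    # 80% of those: replace with [MASK]
                output.append("[MASK]")
            elif roll < 0.9:                  # 10%: replace with a random token
                output.append(random.choice(vocab))
            else:                             # remaining 10%: keep the original token
                output.append(token)
        else:
            output.append(token)
    return output

print(mask_tokens("the quick brown fox jumps over the lazy dog".split()))
```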
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
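A rough PyTorch sketch of these hyperparameters is given below; the original run used the TensorFlow BERT codebase on TPUs, so this only mirrors the settings (Adam-style optimizer with decoupled weight decay, 10,000 warmup steps, linear decay over 2,000,000 steps) rather than reproducing the training setup:

```python
import torch
from transformers import BertForPreTraining, get_linear_schedule_with_warmup

# Loading BertForPreTraining assumes the released weights include both the MLM
# and NSP heads; the hyperparameters below mirror the values reported above.
model = BertForPreTraining.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-100k")
optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01
)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=2_000_000
)
```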
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-1-100k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 1 Checkpoint 100k (uncased)
Seed 1 intermediate checkpoint (100k steps) of the MultiBERTs (pretrained BERT) model, trained on English using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in
the other cases, sentence B is another random sentence from the corpus. Note that what is considered a sentence here is a
consecutive span of text, usually longer than a single sentence. The only constraint is that the two
"sentences" have a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 1 Checkpoint 100k (uncased)\nSeed 1 intermediate checkpoint 100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 1 Checkpoint 100k (uncased)\nSeed 1 intermediate checkpoint 100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 1 Checkpoint 100k (uncased)\nSeed 1 intermediate checkpoint 100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08524470031261444,
-0.0035260336007922888,
-0.002140261698514223,
0.0664476528763771,
0.08583097159862518,
0.0003334286157041788,
0.11716487258672714,
0.049715474247932434,
-0.026097800582647324,
0.02654302679002285,
0.09569728374481201,
0.033800046890974045,
0.04148855432868004,
0.06799539923667908,
0.09759366512298584,
-0.25608810782432556,
0.051742155104875565,
-0.06306105107069016,
0.059534065425395966,
0.07518196851015091,
0.0992780327796936,
-0.0731087177991867,
0.06206388771533966,
0.037011466920375824,
-0.08322606980800629,
-0.0193867739289999,
-0.015425018966197968,
-0.035366132855415344,
0.1012534499168396,
0.0672793835401535,
0.0618898868560791,
-0.0006628185510635376,
0.05943413823843002,
-0.09154294431209564,
0.016098227351903915,
0.04678572341799736,
0.0014962758868932724,
0.02540605142712593,
-0.011520417407155037,
0.015239957720041275,
0.10357870906591415,
0.03798121213912964,
0.07782730460166931,
0.036963317543268204,
-0.09633935987949371,
-0.11416985839605331,
-0.07928887009620667,
0.10276530683040619,
0.052149005234241486,
0.04201721400022507,
-0.007209916599094868,
0.07425223290920258,
-0.028051001951098442,
0.07426606118679047,
0.1098177433013916,
-0.25490352511405945,
-0.007324454840272665,
0.0701168030500412,
0.045250602066516876,
0.04318767786026001,
0.010267559438943863,
0.027154402807354927,
0.006922446191310883,
0.04215298593044281,
0.03032003715634346,
-0.023990370333194733,
0.12145859003067017,
-0.04663809388875961,
-0.15081094205379486,
-0.043922584503889084,
0.11847757548093796,
-0.005066981539130211,
-0.12623855471611023,
-0.0996144488453865,
-0.032514624297618866,
0.11153063178062439,
-0.004237835295498371,
-0.017231790348887444,
-0.0032911603339016438,
0.012417392805218697,
0.023980703204870224,
-0.09271806478500366,
-0.08485953509807587,
-0.02872304618358612,
-0.03837157413363457,
0.13011309504508972,
0.04632938653230667,
0.05105021968483925,
-0.03313751146197319,
0.08698784559965134,
-0.11691494286060333,
-0.04051584750413895,
-0.053382109850645065,
-0.07916972041130066,
-0.0198846273124218,
0.008990609087049961,
-0.028226245194673538,
-0.08487626910209656,
-0.057491105049848557,
0.11614084988832474,
0.03604859486222267,
0.029466785490512848,
-0.001407900359481573,
0.042604830116033554,
0.07597163319587708,
0.09978779405355453,
-0.038255468010902405,
0.05077897012233734,
0.03476264700293541,
-0.02391357347369194,
0.05934381112456322,
-0.05053063482046127,
-0.10203884541988373,
0.07779262214899063,
0.00398551020771265,
0.03782251849770546,
0.024203140288591385,
0.03422326594591141,
-0.0126669155433774,
-0.07053054124116898,
0.1592884361743927,
-0.07701532542705536,
-0.00894545391201973,
-0.017922360450029373,
0.01268836110830307,
0.04590572416782379,
0.031459592282772064,
-0.008052424527704716,
-0.04813672602176666,
-0.006962514482438564,
-0.05809079483151436,
-0.024231795221567154,
-0.055322349071502686,
-0.11864016205072403,
-0.00017471890896558762,
-0.04643564671278,
-0.03308665007352829,
-0.14079619944095612,
-0.21376213431358337,
-0.02121727541089058,
0.06352800130844116,
-0.001587094273418188,
-0.010142477229237556,
0.02150554023683071,
0.015213636681437492,
-0.021226227283477783,
0.012232275679707527,
-0.0463905930519104,
-0.0008275974541902542,
-0.005059313029050827,
-0.03574778139591217,
0.05710737407207489,
-0.03946514427661896,
0.023187434300780296,
-0.07082854956388474,
0.022901466116309166,
-0.21087798476219177,
0.09274617582559586,
-0.03389887139201164,
0.00226595439016819,
-0.03887799754738808,
-0.04063670337200165,
0.004806112498044968,
0.045558784157037735,
-0.00791986659169197,
0.11875075101852417,
-0.13674981892108917,
-0.05122457444667816,
0.1862182766199112,
-0.15818974375724792,
-0.0022683553397655487,
0.10016292333602905,
-0.049908485263586044,
0.05956312268972397,
0.13441967964172363,
0.1000673770904541,
0.07742580771446228,
-0.07439345121383667,
0.008531921543180943,
0.06430240720510483,
-0.06838521361351013,
0.05537254735827446,
0.0871090441942215,
-0.026577677577733994,
-0.1357167512178421,
0.028683211654424667,
-0.06993459165096283,
-0.007977422326803207,
-0.028146974742412567,
-0.01993272267282009,
0.004336874932050705,
-0.038057342171669006,
0.02626767009496689,
0.004922328516840935,
0.018259627744555473,
-0.03845236450433731,
-0.08279888331890106,
0.02795224077999592,
0.07471692562103271,
-0.07072602212429047,
0.04336351901292801,
-0.06991343200206757,
0.061440739780664444,
-0.07952859997749329,
-0.0033563850447535515,
-0.16925367712974548,
-0.02238371968269348,
0.04677914083003998,
-0.04850244149565697,
0.050691407173871994,
0.09104475378990173,
0.0012295694323256612,
0.12096767127513885,
-0.036659546196460724,
0.004420427605509758,
-0.0075358059257268906,
-0.010152211412787437,
-0.053722724318504333,
-0.12111298739910126,
-0.08157645165920258,
-0.07019160687923431,
0.10028237104415894,
-0.07149560004472733,
0.02901148796081543,
-0.07310488075017929,
-0.024753734469413757,
-0.008457006886601448,
-0.06102966144680977,
-0.0037187300622463226,
0.010610942728817463,
-0.029201073572039604,
-0.04816364496946335,
0.04662176966667175,
0.05097240209579468,
-0.06352847069501877,
0.07711159437894821,
-0.10407586395740509,
-0.06504315137863159,
0.05507291853427887,
0.010463748127222061,
-0.08398818224668503,
0.09091468155384064,
-0.01863683946430683,
-0.012742375023663044,
-0.061677344143390656,
-0.04378032684326172,
0.19485321640968323,
-0.02072736993432045,
0.10168910026550293,
-0.09103906154632568,
0.00046977761667221785,
0.03095897287130356,
-0.048057347536087036,
-0.014982434920966625,
0.05999693647027016,
0.0499592050909996,
-0.18793010711669922,
0.015932973474264145,
0.05125840753316879,
0.07772098481655121,
0.10743404924869537,
0.024874351918697357,
-0.02510732039809227,
-0.04790230840444565,
-0.011631017550826073,
0.004429585766047239,
0.056114085018634796,
-0.03211374208331108,
-0.010234530083835125,
0.031145332381129265,
0.0577242374420166,
0.015454869717359543,
-0.08125414699316025,
0.03658926859498024,
0.06732353568077087,
-0.017701249569654465,
-0.0394156351685524,
-0.026886288076639175,
-0.059148285537958145,
0.062231212854385376,
0.051628850400447845,
0.03882211074233055,
0.024575872346758842,
-0.013502499088644981,
-0.13502702116966248,
0.1900821328163147,
-0.11253173649311066,
-0.25340571999549866,
-0.10713954269886017,
-0.056213267147541046,
-0.024116287007927895,
0.04307428002357483,
0.057444870471954346,
-0.02857435867190361,
-0.04086744412779808,
-0.11634266376495361,
0.06448571383953094,
-0.06475604325532913,
-0.03077116422355175,
-0.009507475420832634,
-0.05352368950843811,
-0.0219715666025877,
-0.12839177250862122,
-0.01375717855989933,
-0.030568327754735947,
-0.07581464946269989,
0.006174003705382347,
-0.0344863161444664,
0.02778109721839428,
0.13993434607982635,
0.034496940672397614,
-0.018510010093450546,
-0.01783476397395134,
0.1961946189403534,
0.00948277860879898,
0.06129835546016693,
0.10879100859165192,
-0.0273751150816679,
0.05403975024819374,
0.044155530631542206,
0.026017803698778152,
-0.047962386161088943,
0.011447440832853317,
-0.015232454054057598,
-0.12075041234493256,
-0.17016884684562683,
-0.07024143636226654,
-0.0011091926135122776,
0.005770324729382992,
0.020037442445755005,
0.036969371140003204,
0.024752836674451828,
0.042793840169906616,
-0.02942081168293953,
0.028714414685964584,
-0.015631165355443954,
0.0819498747587204,
0.02636176347732544,
-0.07672623544931412,
0.09265126287937164,
-0.06090103089809418,
0.014536261558532715,
0.10734617710113525,
-0.06290708482265472,
0.19146311283111572,
0.026791051030158997,
0.06352290511131287,
0.10169464349746704,
0.019209709018468857,
0.0537458211183548,
0.08817635476589203,
-0.04545896500349045,
0.00508231483399868,
-0.061286505311727524,
-0.05119043588638306,
-0.03630831837654114,
0.04770348593592644,
0.027831867337226868,
0.01736963912844658,
-0.1184791699051857,
0.016940195113420486,
-0.002183198230341077,
0.13973596692085266,
0.04880662262439728,
-0.11722904443740845,
-0.12262584269046783,
0.0337330587208271,
-0.047173380851745605,
-0.06403809040784836,
0.029761280864477158,
0.0578678734600544,
-0.15378138422966003,
0.04216932877898216,
-0.005775452591478825,
0.0688815638422966,
-0.09398847818374634,
0.01494394987821579,
-0.05018172040581703,
-0.0012989435344934464,
0.0031651563476771116,
0.0693860650062561,
-0.1344810277223587,
0.10720885545015335,
0.02028074488043785,
0.049486447125673294,
-0.08151952922344208,
0.015211064368486404,
-0.012709381058812141,
0.10830708593130112,
0.11649405211210251,
0.04452401399612427,
-0.05254030227661133,
-0.023601632565259933,
-0.04735327139496803,
0.018781675025820732,
0.05945559963583946,
-0.07881660759449005,
0.061759937554597855,
0.008136092685163021,
0.007999198511242867,
-0.022275883704423904,
0.018938612192869186,
-0.1314416527748108,
-0.1206401139497757,
0.05919880419969559,
-0.07465466856956482,
-0.09705575555562973,
-0.056104034185409546,
-0.06127335503697395,
-0.04801184684038162,
0.20520544052124023,
-0.11647476255893707,
-0.08863770961761475,
-0.09709347784519196,
-0.010616887360811234,
0.046601153910160065,
-0.06649118661880493,
0.046726103872060776,
-0.0398554727435112,
0.08910568803548813,
-0.048905424773693085,
-0.10712407529354095,
0.03328241407871246,
-0.11207292228937149,
-0.11410047858953476,
-0.04410325735807419,
0.10402092337608337,
0.11392827332019806,
0.037745151668787,
0.01298228558152914,
0.013162612915039062,
-0.0018202699720859528,
-0.1176169216632843,
0.015286274254322052,
0.1326286494731903,
-0.0010100174695253372,
0.07126856595277786,
-0.06248760223388672,
0.032132528722286224,
-0.01904604770243168,
-0.0010272450745105743,
0.1344895362854004,
0.18695566058158875,
-0.06601202487945557,
0.17323291301727295,
0.2025892734527588,
-0.10633677244186401,
-0.19032612442970276,
-0.05552323907613754,
-0.00011520646512508392,
0.04444396495819092,
0.04712052643299103,
-0.185505211353302,
0.09256313741207123,
0.03522370755672455,
-0.03254391998052597,
0.02446756511926651,
-0.23936867713928223,
-0.11133252084255219,
0.09197692573070526,
0.054182760417461395,
0.1948927938938141,
-0.0810035914182663,
-0.0402381606400013,
-0.01673256605863571,
-0.03343060612678528,
0.04472201317548752,
-0.03154733404517174,
0.09399478882551193,
0.009026875719428062,
-0.030032822862267494,
0.003596181981265545,
-0.030630063265562057,
0.09503602236509323,
0.043237604200839996,
0.02434822916984558,
-0.07058137655258179,
-0.008836889639496803,
0.11277124285697937,
-0.03811580687761307,
0.10082867741584778,
0.04409843683242798,
0.07422581315040588,
-0.09205324947834015,
-0.06013918295502663,
-0.07425393164157867,
0.04501548781991005,
-0.04118209704756737,
-0.05538623780012131,
-0.06526382267475128,
0.05608459562063217,
0.03764136880636215,
0.009167945012450218,
0.00170215405523777,
-0.037797488272190094,
0.04475753381848335,
0.08893827348947525,
0.07776204496622086,
-0.035533010959625244,
-0.07691213488578796,
-0.05089392885565758,
-0.04873526841402054,
0.06765426695346832,
-0.0894564688205719,
0.01830783672630787,
0.02505202777683735,
0.012336540035903454,
0.09174922108650208,
0.034382253885269165,
-0.13783223927021027,
0.011594323441386223,
0.03064875304698944,
-0.11970988661050797,
-0.11516924202442169,
-0.017619792371988297,
0.029375620186328888,
-0.03710035979747772,
0.056078363209962845,
0.1469394564628601,
-0.03727807477116585,
-0.03216699883341789,
-0.050149813294410706,
0.03711618110537529,
-0.020025353878736496,
0.04967104643583298,
0.06538362056016922,
0.03139036148786545,
-0.07248441874980927,
0.07618219405412674,
0.04026221111416817,
-0.03702336549758911,
0.04253249242901802,
0.03768892586231232,
-0.09345629811286926,
-0.07827918976545334,
-0.058718491345644,
0.09422504156827927,
-0.023140477016568184,
-0.04245147854089737,
-0.0005065016448497772,
-0.08188062906265259,
0.06860557198524475,
0.08163067698478699,
0.0482751727104187,
0.040858194231987,
-0.08839572966098785,
0.015990227460861206,
-0.053991541266441345,
0.035219043493270874,
-0.027956312522292137,
-0.00514468178153038,
-0.05560648441314697,
0.06878794729709625,
0.06500732898712158,
0.0999111533164978,
-0.034740641713142395,
-0.07844352722167969,
-0.0826440081000328,
-0.012519098818302155,
-0.06155554950237274,
-0.029775243252515793,
-0.07692494988441467,
-0.009565629065036774,
0.002604800509288907,
-0.0021643489599227905,
0.025414828211069107,
0.034178875386714935,
-0.042589522898197174,
-0.018059415742754936,
-0.037355441600084305,
0.036780327558517456,
-0.06264443695545197,
0.0067492201924324036,
0.015407324768602848,
-0.036163657903671265,
0.09217563271522522,
0.03670398145914078,
-0.014230111613869667,
0.04251571372151375,
-0.027412783354520798,
0.04030414670705795,
-0.018759941682219505,
-0.0009204717352986336,
-0.02500881254673004,
-0.10940395295619965,
-0.004827476572245359,
0.002025868743658066,
-0.021935567259788513,
0.009119517169892788,
0.05945706367492676,
-0.07323845475912094,
0.08364223688840866,
0.04714628681540489,
-0.03232397884130478,
-0.07248668372631073,
0.03973216190934181,
-0.016264168545603752,
0.030078282579779625,
0.06927590072154999,
-0.03378023952245712,
0.05483978986740112,
-0.09871925413608551,
-0.027465548366308212,
0.004639950580894947,
-0.0040918439626693726,
-0.010294150561094284,
-0.05644141137599945,
-0.003408377058804035,
0.008599110879004002,
0.17483307421207428,
-0.020933430641889572,
0.03406103700399399,
0.012595845386385918,
0.007200536318123341,
0.04893559589982033,
-0.013529248535633087,
0.07653719186782837,
-0.004739732481539249,
-0.027017276734113693,
-0.017546771094202995,
0.039067186415195465,
0.005898395553231239,
0.0040899720042943954,
0.13695532083511353,
0.04784746095538139,
0.08872686326503754,
0.07596572488546371,
0.012620988301932812,
0.01685517281293869,
-0.12810085713863373,
-0.08859284967184067,
0.005258079618215561,
0.057336729019880295,
-0.018501393496990204,
0.02151631936430931,
0.08848118782043457,
-0.08566632121801376,
0.0723453164100647,
0.052838340401649475,
-0.04859200865030289,
-0.12738102674484253,
-0.19562141597270966,
-0.024894127622246742,
-0.029646409675478935,
-0.01133066974580288,
-0.09010559320449829,
0.01711842603981495,
0.09035669267177582,
0.024273114278912544,
-0.010229572653770447,
0.09454045444726944,
-0.10597898066043854,
-0.032697245478630066,
0.045286111533641815,
-0.026407556608319283,
0.014827141538262367,
0.04272862896323204,
0.02292022481560707,
-0.003065982833504677,
0.042942170053720474,
0.04181143641471863,
0.04679667577147484,
0.02690965123474598,
0.05182106792926788,
-0.02630781941115856,
-0.07399345934391022,
-0.03210744634270668,
-0.004359319806098938,
0.053189489990472794,
0.13737447559833527,
0.023281214758753777,
-0.06972325593233109,
0.005852171685546637,
0.11075308173894882,
-0.031226176768541336,
-0.053463004529476166,
-0.10764723271131516,
0.23975703120231628,
0.022201333194971085,
0.001475367695093155,
-0.006465498358011246,
-0.045843884348869324,
0.00617641769349575,
0.21112023293972015,
0.2217693328857422,
0.004879211541265249,
-0.009662053547799587,
0.009111168794333935,
-0.012499222531914711,
0.03704001009464264,
0.14444968104362488,
0.0013408083468675613,
0.2532721161842346,
-0.04847402125597,
0.042407114058732986,
-0.04182954877614975,
-0.03955886512994766,
-0.10355386137962341,
0.07071387022733688,
-0.01083417609333992,
0.007340754382312298,
-0.03563741222023964,
0.07091186940670013,
-0.04026711359620094,
-0.17419710755348206,
0.003911188803613186,
-0.000005666399374604225,
-0.05864490196108818,
0.011575542390346527,
-0.0047685373574495316,
0.02302730828523636,
0.08330346643924713,
-0.01585269719362259,
-0.004464060999453068,
0.1319432407617569,
0.018241338431835175,
-0.097347192466259,
-0.060771629214286804,
0.11355876177549362,
0.01553176250308752,
0.14124616980552673,
0.011250562034547329,
0.07750025391578674,
0.08744602650403976,
0.02097969315946102,
-0.09455941617488861,
0.044223811477422714,
-0.021424980834126472,
-0.02454746887087822,
0.007734778802841902,
0.1081497073173523,
-0.006844518706202507,
0.05749361589550972,
0.02501031570136547,
-0.09239287674427032,
0.06304663419723511,
0.01005847379565239,
-0.035955607891082764,
-0.08250947296619415,
0.08271535485982895,
-0.09000278264284134,
0.15743529796600342,
0.12163208425045013,
-0.014433312229812145,
-0.04485296458005905,
-0.028991030529141426,
0.016057102009654045,
0.0006635370664298534,
0.05125323310494423,
-0.026610590517520905,
-0.1347019374370575,
0.020941967144608498,
-0.08078917860984802,
0.025968333706259727,
-0.24686121940612793,
-0.08988693356513977,
0.031169679015874863,
-0.017550114542245865,
-0.015633076429367065,
0.0483778640627861,
0.043399084359407425,
0.027819743379950523,
-0.03596547991037369,
0.01722205989062786,
-0.03515992686152458,
0.0605611689388752,
-0.10908803343772888,
-0.09308633208274841
] |
null | null | transformers | # MultiBERTs Seed 1 Checkpoint 1100k (uncased)
Seed 1 intermediate checkpoint (1100k steps) of the MultiBERTs (pretrained BERT) model, trained on English using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multiberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-1100k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-1100k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
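Because this is one of several intermediate checkpoints, it can be instructive to compare it against an earlier one. The sketch below contrasts the [CLS] representation of the same sentence at 100k and 1100k steps; it assumes both sibling repositories follow the naming pattern used in these cards:

```python
import torch
from transformers import BertTokenizer, BertModel

# Compare the [CLS] representation of the same sentence at 100k and 1100k steps.
checkpoints = [
    "MultiBertGunjanPatrick/multiberts-seed-1-100k",
    "MultiBertGunjanPatrick/multiberts-seed-1-1100k",
]
tokenizer = BertTokenizer.from_pretrained(checkpoints[1])
inputs = tokenizer("Replace me by any text you'd like.", return_tensors="pt")

cls_vectors = []
for repo_id in checkpoints:
    model = BertModel.from_pretrained(repo_id)
    with torch.no_grad():
        outputs = model(**inputs)
    cls_vectors.append(outputs.last_hidden_state[:, 0])  # [CLS] token vector

print(torch.nn.functional.cosine_similarity(cls_vectors[0], cls_vectors[1]).item())
```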
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in
the other cases, sentence B is another random sentence from the corpus. Note that what is considered a sentence here is a
consecutive span of text, usually longer than a single sentence. The only constraint is that the two
"sentences" have a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-1-1100k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 1 Checkpoint 1100k (uncased)
Seed 1 intermediate checkpoint (1100k steps) of the MultiBERTs (pretrained BERT) model, trained on English using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in
the other cases, sentence B is another random sentence from the corpus. Note that what is considered a sentence here is a
consecutive span of text, usually longer than a single sentence. The only constraint is that the two
"sentences" have a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 1 Checkpoint 1100k (uncased)\nSeed 1 intermediate checkpoint 1100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 1 Checkpoint 1100k (uncased)\nSeed 1 intermediate checkpoint 1100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 1 Checkpoint 1100k (uncased)\nSeed 1 intermediate checkpoint 1100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08555611222982407,
-0.0040794480592012405,
-0.0020310659892857075,
0.06941459327936172,
0.08855492621660233,
0.0017079981043934822,
0.11440035700798035,
0.049213845282793045,
-0.024650927633047104,
0.02558021992444992,
0.09422016143798828,
0.029171213507652283,
0.04176194220781326,
0.06810109317302704,
0.09573142230510712,
-0.25787878036499023,
0.04915892332792282,
-0.06343266367912292,
0.06582487374544144,
0.07384201884269714,
0.09919574856758118,
-0.07228134572505951,
0.06168101727962494,
0.03763821721076965,
-0.08095504343509674,
-0.018159812316298485,
-0.017228562384843826,
-0.03521208092570305,
0.10130452364683151,
0.06798708438873291,
0.06399299204349518,
-0.002073155716061592,
0.06016168370842934,
-0.09078739583492279,
0.015797993168234825,
0.046431057155132294,
0.002145661972463131,
0.02507668547332287,
-0.011450957506895065,
0.01667807064950466,
0.10546255111694336,
0.04029466584324837,
0.07784958183765411,
0.03452640026807785,
-0.09539070725440979,
-0.11589834839105606,
-0.08053168654441833,
0.1057882010936737,
0.0536239892244339,
0.04006870090961456,
-0.005664433352649212,
0.07311268150806427,
-0.024695660918951035,
0.07472637295722961,
0.10777296125888824,
-0.25533154606819153,
-0.0071470183320343494,
0.06312233954668045,
0.042833346873521805,
0.044910840690135956,
0.011762609705328941,
0.024688560515642166,
0.00842786580324173,
0.042268238961696625,
0.026947487145662308,
-0.02241368591785431,
0.1129084974527359,
-0.046800896525382996,
-0.15103252232074738,
-0.04294367879629135,
0.1217510998249054,
-0.007041076198220253,
-0.125166654586792,
-0.09459571540355682,
-0.03331904858350754,
0.10964731872081757,
-0.002288093790411949,
-0.016803735867142677,
-0.0043589225970208645,
0.01151234470307827,
0.030802801251411438,
-0.09266815334558487,
-0.08560322970151901,
-0.02763804793357849,
-0.04101051390171051,
0.1297837346792221,
0.047391898930072784,
0.05216391012072563,
-0.03518660366535187,
0.08971908688545227,
-0.11881290376186371,
-0.04167203977704048,
-0.05149387940764427,
-0.07927963882684708,
-0.01801353134214878,
0.007198919076472521,
-0.024222422391176224,
-0.08195219188928604,
-0.06071221083402634,
0.11183470487594604,
0.037604715675115585,
0.02857079729437828,
-0.004630090668797493,
0.04250556603074074,
0.07482872903347015,
0.09782654047012329,
-0.038400474935770035,
0.04909927770495415,
0.03666694462299347,
-0.02323145419359207,
0.059190381318330765,
-0.049648236483335495,
-0.10024906694889069,
0.07723523676395416,
0.0013475827872753143,
0.03933398425579071,
0.027044007554650307,
0.034297361969947815,
-0.012489959597587585,
-0.0687364786863327,
0.16500115394592285,
-0.07791594415903091,
-0.009328356944024563,
-0.01871090941131115,
0.010402638465166092,
0.044926486909389496,
0.03606250882148743,
-0.008206410333514214,
-0.04839961230754852,
-0.005489896982908249,
-0.058134034276008606,
-0.02405470423400402,
-0.05563054978847504,
-0.11971825361251831,
-0.0004908363334834576,
-0.041038237512111664,
-0.03374043107032776,
-0.14072048664093018,
-0.21837517619132996,
-0.020082779228687286,
0.06409156322479248,
-0.00023076869547367096,
-0.010199538432061672,
0.021863386034965515,
0.01640121079981327,
-0.02041555941104889,
0.01082886103540659,
-0.04664338752627373,
-0.0012946277856826782,
-0.005228057503700256,
-0.036446548998355865,
0.055863551795482635,
-0.04265853017568588,
0.02338200993835926,
-0.07032804191112518,
0.02254939265549183,
-0.2132248431444168,
0.08875162899494171,
-0.035534024238586426,
0.004470227286219597,
-0.03820488601922989,
-0.039614997804164886,
0.0073160696774721146,
0.044664256274700165,
-0.00704446854069829,
0.11765860766172409,
-0.13324129581451416,
-0.048001352697610855,
0.181953564286232,
-0.15988901257514954,
-0.002083752304315567,
0.10163496434688568,
-0.04881957173347473,
0.058964651077985764,
0.13447332382202148,
0.09843507409095764,
0.08355962485074997,
-0.07533835619688034,
0.007842096500098705,
0.06434613466262817,
-0.07046940177679062,
0.05385172367095947,
0.08737398684024811,
-0.026276476681232452,
-0.13844895362854004,
0.0287521630525589,
-0.07166272401809692,
-0.0067856572568416595,
-0.02781808003783226,
-0.02005980722606182,
0.003997638821601868,
-0.037420179694890976,
0.025136619806289673,
0.00577880023047328,
0.01762259192764759,
-0.03890267014503479,
-0.08380234986543655,
0.02718605473637581,
0.07568998634815216,
-0.06941890716552734,
0.044662270694971085,
-0.06905985623598099,
0.059800755232572556,
-0.08147118985652924,
-0.004980247467756271,
-0.16734431684017181,
-0.0226010549813509,
0.04570072516798973,
-0.04701627790927887,
0.04818541556596756,
0.08809301257133484,
0.0006296082865446806,
0.1216411292552948,
-0.038359999656677246,
0.004574121907353401,
-0.005510969087481499,
-0.00995855126529932,
-0.053979311138391495,
-0.1210433840751648,
-0.08110959082841873,
-0.07095471769571304,
0.09755969047546387,
-0.0730949342250824,
0.029126688838005066,
-0.07257398217916489,
-0.025498757138848305,
-0.008303755894303322,
-0.05894511938095093,
-0.005277853459119797,
0.010345413349568844,
-0.030101114884018898,
-0.048338934779167175,
0.04712503403425217,
0.051698677241802216,
-0.05981304496526718,
0.07889086753129959,
-0.10336432605981827,
-0.06598986685276031,
0.0542730987071991,
0.01456542406231165,
-0.08326475322246552,
0.09060211479663849,
-0.019477687776088715,
-0.01440535206347704,
-0.06158780679106712,
-0.0452936552464962,
0.19655448198318481,
-0.02278318628668785,
0.10149483382701874,
-0.09056751430034637,
0.0018810053588822484,
0.03002646192908287,
-0.047209762036800385,
-0.014916464686393738,
0.05987664684653282,
0.050706811249256134,
-0.1908523291349411,
0.015026140958070755,
0.0471404492855072,
0.07669736444950104,
0.10915756225585938,
0.02661881595849991,
-0.02516299858689308,
-0.045856498181819916,
-0.011834939941763878,
0.004999371711164713,
0.05669179931282997,
-0.03299366682767868,
-0.01017012819647789,
0.031167197972536087,
0.05836879462003708,
0.01728183962404728,
-0.08249439299106598,
0.03774917125701904,
0.06696878373622894,
-0.01585419476032257,
-0.04300670325756073,
-0.027522999793291092,
-0.05837882682681084,
0.06323733925819397,
0.050420504063367844,
0.039540961384773254,
0.02358480542898178,
-0.013090289197862148,
-0.13549193739891052,
0.1888822615146637,
-0.11333965510129929,
-0.25080588459968567,
-0.10864422470331192,
-0.059109166264534,
-0.021036481484770775,
0.04185134172439575,
0.05928478389978409,
-0.02820376120507717,
-0.04257001355290413,
-0.11628006398677826,
0.06512503325939178,
-0.061084896326065063,
-0.029378319159150124,
-0.0093194879591465,
-0.054284319281578064,
-0.02352672815322876,
-0.1290035992860794,
-0.013044122606515884,
-0.030035870149731636,
-0.0749112069606781,
0.00457177497446537,
-0.03502319008111954,
0.026316862553358078,
0.14097638428211212,
0.03315036743879318,
-0.019738100469112396,
-0.017945026978850365,
0.1876187026500702,
0.01047331839799881,
0.06292466819286346,
0.1082480251789093,
-0.02937936782836914,
0.05593997240066528,
0.046814240515232086,
0.02404949627816677,
-0.04694683849811554,
0.014250327832996845,
-0.017692266032099724,
-0.12113690376281738,
-0.17286324501037598,
-0.07144014537334442,
-0.0012134932912886143,
0.008719857782125473,
0.020341772586107254,
0.03606168553233147,
0.015174231491982937,
0.04273074120283127,
-0.030034150928258896,
0.030035778880119324,
-0.017056390643119812,
0.08081094175577164,
0.024131443351507187,
-0.07640098035335541,
0.0897812768816948,
-0.06343095749616623,
0.016853705048561096,
0.1086731031537056,
-0.06065855920314789,
0.195979505777359,
0.021808600053191185,
0.0588005855679512,
0.10186917334794998,
0.01761912927031517,
0.05323176458477974,
0.09025472402572632,
-0.04631213843822479,
0.005671179853379726,
-0.0603225901722908,
-0.050755709409713745,
-0.03367295116186142,
0.048133984208106995,
0.02693343535065651,
0.020002804696559906,
-0.11879270523786545,
0.02262767404317856,
-0.001691001933068037,
0.13613130152225494,
0.04643464460968971,
-0.11792705208063126,
-0.12223627418279648,
0.03342891484498978,
-0.047884635627269745,
-0.06619463115930557,
0.030220266431570053,
0.06106068193912506,
-0.1516633927822113,
0.042871348559856415,
-0.005226924084126949,
0.06707648187875748,
-0.09336088597774506,
0.013631189242005348,
-0.04559178277850151,
-0.0017664041370153427,
0.0027405712753534317,
0.06878042221069336,
-0.1314668208360672,
0.1107635647058487,
0.020590558648109436,
0.05059543997049332,
-0.08206554502248764,
0.016329767182469368,
-0.009347834624350071,
0.10860039293766022,
0.1173611655831337,
0.04448128491640091,
-0.0466545931994915,
-0.0177265964448452,
-0.048469364643096924,
0.019518306478857994,
0.05650852620601654,
-0.07602133601903915,
0.061493195593357086,
0.009477625600993633,
0.009128507226705551,
-0.022507240995764732,
0.01694878190755844,
-0.13202019035816193,
-0.12106452882289886,
0.059069931507110596,
-0.07482732087373734,
-0.102646104991436,
-0.05641121789813042,
-0.06278111040592194,
-0.05150095373392105,
0.2035827487707138,
-0.11228583753108978,
-0.08927902579307556,
-0.09571628272533417,
-0.01149933785200119,
0.045644551515579224,
-0.06727283447980881,
0.047857776284217834,
-0.0379730686545372,
0.08710238337516785,
-0.04948296397924423,
-0.1053847223520279,
0.03421803191304207,
-0.11148743331432343,
-0.1120501235127449,
-0.04618081822991371,
0.10208991169929504,
0.11364808678627014,
0.03717130422592163,
0.010732457041740417,
0.011660825461149216,
-0.0015264730900526047,
-0.11762309074401855,
0.01212921179831028,
0.13246718049049377,
-0.0028731990605592728,
0.0732915848493576,
-0.06318642944097519,
0.02246718481183052,
-0.017724797129631042,
-0.0028008483350276947,
0.1327730119228363,
0.18827243149280548,
-0.06539198756217957,
0.17215968668460846,
0.20047834515571594,
-0.10809767246246338,
-0.19256025552749634,
-0.05288270488381386,
0.0010359873995184898,
0.0450759194791317,
0.04714479297399521,
-0.1821129024028778,
0.09500788152217865,
0.03556229919195175,
-0.03365496173501015,
0.02869383618235588,
-0.234246164560318,
-0.11225740611553192,
0.09325776249170303,
0.0550212524831295,
0.19075143337249756,
-0.08026545494794846,
-0.03902396559715271,
-0.015339365229010582,
-0.041914962232112885,
0.041608914732933044,
-0.032347388565540314,
0.09265212714672089,
0.007995763793587685,
-0.030059192329645157,
0.0030053267255425453,
-0.031765297055244446,
0.09512080252170563,
0.04366627335548401,
0.02431720867753029,
-0.07091186195611954,
-0.013157766312360764,
0.1161026656627655,
-0.03896784037351608,
0.10214152187108994,
0.046604130417108536,
0.07468877732753754,
-0.09674809128046036,
-0.05947430059313774,
-0.07381118834018707,
0.0455748587846756,
-0.04079878330230713,
-0.05453985929489136,
-0.06417636573314667,
0.056045595556497574,
0.03876012563705444,
0.009042775258421898,
0.0022126566618680954,
-0.03664586320519447,
0.042714182287454605,
0.08614136278629303,
0.07871270924806595,
-0.03320100158452988,
-0.06927825510501862,
-0.05103805661201477,
-0.04895710200071335,
0.06734210252761841,
-0.09520819783210754,
0.019143162295222282,
0.02706216648221016,
0.013485626317560673,
0.08925894647836685,
0.03380022197961807,
-0.13615794479846954,
0.013466855511069298,
0.03308811038732529,
-0.12030372023582458,
-0.10976431518793106,
-0.018666425719857216,
0.03291981667280197,
-0.03750399872660637,
0.05619869381189346,
0.14437434077262878,
-0.03787890449166298,
-0.03215009346604347,
-0.04972768574953079,
0.037739481776952744,
-0.02054552733898163,
0.049229077994823456,
0.06489910185337067,
0.0312536284327507,
-0.07213854789733887,
0.07903628051280975,
0.041104238480329514,
-0.039888933300971985,
0.041710179299116135,
0.040198102593421936,
-0.09282392263412476,
-0.07829941809177399,
-0.058995362371206284,
0.09500241279602051,
-0.020002620294690132,
-0.043530840426683426,
-0.0010365936905145645,
-0.0801611840724945,
0.0683147981762886,
0.08237885683774948,
0.047698404639959335,
0.03926853835582733,
-0.08858868479728699,
0.01647667959332466,
-0.0553596131503582,
0.03741893917322159,
-0.027375133708119392,
-0.0035396497696638107,
-0.057285480201244354,
0.06915844976902008,
0.06372931599617004,
0.09855727851390839,
-0.034177422523498535,
-0.07559960335493088,
-0.0800316110253334,
-0.01270839013159275,
-0.059631988406181335,
-0.028428539633750916,
-0.07719975709915161,
-0.007921827957034111,
0.003366979071870446,
-0.002477087080478668,
0.02309763804078102,
0.03606124594807625,
-0.0431000217795372,
-0.017401982098817825,
-0.03610188886523247,
0.03782567381858826,
-0.059788309037685394,
0.00514558982104063,
0.01592845469713211,
-0.035086292773485184,
0.09230690449476242,
0.03621010482311249,
-0.013383524492383003,
0.042889613658189774,
-0.026330940425395966,
0.03886527568101883,
-0.01922646537423134,
0.00022338377311825752,
-0.023814929649233818,
-0.10842742770910263,
-0.003881511278450489,
0.0024861525744199753,
-0.021374747157096863,
0.008863567374646664,
0.06330196559429169,
-0.07393211871385574,
0.08444419503211975,
0.04718988016247749,
-0.030123993754386902,
-0.07248273491859436,
0.03873034194111824,
-0.01689988188445568,
0.028440428897738457,
0.06917861849069595,
-0.03414444997906685,
0.05229363590478897,
-0.098213329911232,
-0.027853094041347504,
0.0041361586190760136,
-0.004003528505563736,
-0.006639342755079269,
-0.05337183177471161,
-0.0045039597898721695,
0.008342450484633446,
0.17322538793087006,
-0.022243186831474304,
0.031446728855371475,
0.014135665260255337,
0.007495341822504997,
0.04161103069782257,
-0.015127768740057945,
0.07533615827560425,
-0.00896701030433178,
-0.026712382212281227,
-0.01708304136991501,
0.03789713606238365,
0.006236383691430092,
0.005114402621984482,
0.14002305269241333,
0.04582982882857323,
0.09414731711149216,
0.07738953828811646,
0.012966477312147617,
0.015446529723703861,
-0.13203799724578857,
-0.09444683790206909,
0.005132724530994892,
0.058546148240566254,
-0.017095433548092842,
0.015576660633087158,
0.0897817462682724,
-0.08477382361888885,
0.07018493115901947,
0.051680486649274826,
-0.048215266317129135,
-0.12701117992401123,
-0.19357267022132874,
-0.023339807987213135,
-0.028727201744914055,
-0.009865477681159973,
-0.08972826600074768,
0.017631806433200836,
0.09370782971382141,
0.02366936393082142,
-0.009633362293243408,
0.09718672931194305,
-0.10909689962863922,
-0.03266396373510361,
0.04596076160669327,
-0.02649986743927002,
0.015839723870158195,
0.042379509657621384,
0.022447990253567696,
-0.0043624397367239,
0.041317351162433624,
0.040029365569353104,
0.045462362468242645,
0.025204485282301903,
0.05207771807909012,
-0.023413807153701782,
-0.07332916557788849,
-0.03346024081110954,
-0.004319222643971443,
0.054326582700014114,
0.1390143781900406,
0.023584065958857536,
-0.06704577058553696,
0.0073531633242964745,
0.1099512130022049,
-0.03226146101951599,
-0.05306710675358772,
-0.10846098512411118,
0.24189430475234985,
0.024122275412082672,
0.0010305943433195353,
-0.00476322416216135,
-0.0462874136865139,
0.006508788093924522,
0.20697033405303955,
0.21782681345939636,
0.007544967345893383,
-0.009905444458127022,
0.010162238962948322,
-0.012458790093660355,
0.03442639112472534,
0.14439144730567932,
0.004362193867564201,
0.24984604120254517,
-0.04818166047334671,
0.038050517439842224,
-0.041855502873659134,
-0.03771220147609711,
-0.10160061717033386,
0.06832000613212585,
-0.012856969609856606,
0.008527273312211037,
-0.034435294568538666,
0.07000390440225601,
-0.039920732378959656,
-0.17677031457424164,
0.0017702309414744377,
-0.0007906428072601557,
-0.060740333050489426,
0.011946400627493858,
0.0008509345352649689,
0.02319001406431198,
0.0821361392736435,
-0.016011610627174377,
-0.004436753690242767,
0.1327086091041565,
0.018862750381231308,
-0.09728290885686874,
-0.06158030778169632,
0.11173136532306671,
0.010567987337708473,
0.1452133059501648,
0.011457232758402824,
0.07709426432847977,
0.08673983812332153,
0.02125939354300499,
-0.09760774672031403,
0.043921541422605515,
-0.0216564629226923,
-0.026355694979429245,
0.007017967291176319,
0.10677339881658554,
-0.00844118557870388,
0.059174492955207825,
0.024447467178106308,
-0.09103335440158844,
0.061533261090517044,
0.01608864963054657,
-0.034677859395742416,
-0.08289551734924316,
0.0820741057395935,
-0.0898364931344986,
0.15836329758167267,
0.12060514092445374,
-0.014026260003447533,
-0.045196183025836945,
-0.029026834294199944,
0.015314118936657906,
0.0005465298891067505,
0.05690081790089607,
-0.02586646005511284,
-0.13458627462387085,
0.021186312660574913,
-0.0836128294467926,
0.028534362092614174,
-0.24551716446876526,
-0.08837036788463593,
0.029422352090477943,
-0.020511511713266373,
-0.018146298825740814,
0.0481845922768116,
0.04676603153347969,
0.029239781200885773,
-0.03611736744642258,
0.01347651332616806,
-0.036099035292863846,
0.06015274301171303,
-0.11175873875617981,
-0.09537714719772339
] |
null | null | transformers | # MultiBERTs Seed 1 Checkpoint 1200k (uncased)
Seed 1 intermediate checkpoint 1200k MultiBERTs (pretrained BERT) model on the English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
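In `transformers`, both pretraining heads are exposed through `BertForPreTraining`; the following is a minimal sketch, assuming the repository id listed in this card's metadata and that the pretraining heads are stored in the checkpoint (otherwise they are freshly initialized and the library will warn about it):
```python
import torch
from transformers import BertTokenizer, BertForPreTraining

ckpt = "MultiBertGunjanPatrick/multiberts-seed-1-1200k"  # assumed repository id
tokenizer = BertTokenizer.from_pretrained(ckpt)
model = BertForPreTraining.from_pretrained(ckpt)

# Encode a sentence pair, mirroring the NSP input format.
inputs = tokenizer("The cat sat on the mat.", "It fell asleep there.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.prediction_logits.shape)        # MLM head: (batch, seq_len, vocab_size)
print(outputs.seq_relationship_logits.shape)  # NSP head: (batch, 2)
```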
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
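As an illustration, a hedged sketch of starting a downstream fine-tune from this checkpoint (the task and number of labels are placeholders, not part of this release):
```python
from transformers import BertForSequenceClassification

# Only the encoder weights come from this checkpoint; the classification
# head is newly initialized and must be trained on your task.
model = BertForSequenceClassification.from_pretrained(
    "MultiBertGunjanPatrick/multiberts-seed-1-1200k",  # assumed repository id
    num_labels=2,  # placeholder: set to the number of classes in your task
)
```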
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# The short name in the original snippet may not resolve on the Hub; this
# repository's full id (from its metadata) is used here instead.
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-1200k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-1200k")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')  # tokenize and return PyTorch tensors
output = model(**encoded_input)                       # forward pass: contextual features
```
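The `output` object holds the extracted features; a short illustration of how to read them (shapes assume the standard BERT-base configuration with 768 hidden units):
```python
# Token-level features: one 768-dimensional vector per input token.
last_hidden = output.last_hidden_state  # shape: (batch_size, sequence_length, 768)

# Pooled [CLS] representation, sometimes used as a rough sentence embedding.
pooled = output.pooler_output           # shape: (batch_size, 768)

print(last_hidden.shape, pooled.shape)
```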
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
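As a hedged illustration, that snippet can be adapted to this checkpoint roughly as follows (this assumes the masked-language-modeling head is available in the checkpoint; the prompts are only examples):
```python
from transformers import pipeline

# Assumed repository id for this checkpoint.
unmasker = pipeline("fill-mask", model="MultiBertGunjanPatrick/multiberts-seed-1-1200k")

# Compare completions for otherwise identical prompts to get a rough sense of bias.
print(unmasker("The man worked as a [MASK]."))
print(unmasker("The woman worked as a [MASK]."))
```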
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus, and in
the other cases, sentence B is another random sentence from the corpus. Note that what is considered a sentence here is a
consecutive span of text, usually longer than a single sentence. The only constraint is that the two
"sentences" have a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a short sketch of the rule follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
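A minimal sketch of that 80/10/10 rule (illustrative only; it is not the original preprocessing code and ignores whole-word and special-token handling):
```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15):
    """Return (masked_tokens, labels) following the 15% / 80-10-10 scheme."""
    masked, labels = [], []
    for token in tokens:
        if random.random() < mask_prob:
            labels.append(token)                     # the model must predict the original token
            r = random.random()
            if r < 0.8:
                masked.append("[MASK]")              # 80%: replace with [MASK]
            elif r < 0.9:
                masked.append(random.choice(vocab))  # 10%: replace with a random token
            else:
                masked.append(token)                 # 10%: keep the original token
        else:
            masked.append(token)
            labels.append(None)                      # not a prediction target
    return masked, labels
```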
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
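For reference, a hedged PyTorch sketch of an equivalent optimizer and schedule, using `AdamW` as the decoupled-weight-decay variant of Adam and the linear-warmup helper from `transformers` (the original training code is TensorFlow/TPU-based and is not reproduced here):
```python
import torch
from transformers import BertForPreTraining, get_linear_schedule_with_warmup

model = BertForPreTraining.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-1200k")  # assumed id

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1e-4,
    betas=(0.9, 0.999),
    weight_decay=0.01,
)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10_000,
    num_training_steps=2_000_000,  # two million steps, as described above
)
```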
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-1-1200k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 1 Checkpoint 1200k (uncased)
Seed 1 intermediate checkpoint 1200k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 1 Checkpoint 1200k (uncased)\nSeed 1 intermediate checkpoint 1200k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 1 Checkpoint 1200k (uncased)\nSeed 1 intermediate checkpoint 1200k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 1 Checkpoint 1200k (uncased)\nSeed 1 intermediate checkpoint 1200k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08538547158241272,
-0.0015786457806825638,
-0.0020691542886197567,
0.06947682052850723,
0.08750946074724197,
0.0011142180301249027,
0.1144532710313797,
0.04979601874947548,
-0.03475109487771988,
0.025028768926858902,
0.0941581130027771,
0.02679915353655815,
0.04203762486577034,
0.06323044002056122,
0.09774583578109741,
-0.2588754892349243,
0.05048614740371704,
-0.06362517923116684,
0.05688218027353287,
0.07471351325511932,
0.09881259500980377,
-0.069659024477005,
0.06142530217766762,
0.03723306581377983,
-0.08426079154014587,
-0.01742534153163433,
-0.01620098017156124,
-0.03390739858150482,
0.10189640522003174,
0.06995011866092682,
0.06316269189119339,
-0.00013793446123600006,
0.05974200367927551,
-0.0887996107339859,
0.01646164432168007,
0.04545510932803154,
0.001411536242812872,
0.024693511426448822,
-0.011510642245411873,
0.01582305319607258,
0.1053435206413269,
0.041518427431583405,
0.07713554799556732,
0.03543038293719292,
-0.09560739994049072,
-0.10892951488494873,
-0.07903600484132767,
0.10668902099132538,
0.05209921672940254,
0.04167497903108597,
-0.0049376580864191055,
0.0725698322057724,
-0.02787906304001808,
0.07376679033041,
0.10915717482566833,
-0.2522689402103424,
-0.008471687324345112,
0.06914283335208893,
0.04070263355970383,
… (remaining values of the 768-dimensional embedding vector omitted)
] |
null | null | transformers | # MultiBERTs Seed 1 Checkpoint 120k (uncased)
Seed 1 intermediate checkpoint 120k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
# Load the tokenizer and the weights of this intermediate checkpoint
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-120k')
model = BertModel.from_pretrained("multiberts-seed-1-120k")
text = "Replace me by any text you'd like."
# Tokenize the text and return PyTorch tensors
encoded_input = tokenizer(text, return_tensors='pt')
# output.last_hidden_state contains the token-level features
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a short illustrative sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
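The scheme above can be sketched in a few lines of Python. This is only an illustrative approximation (it ignores special tokens such as [CLS] and [SEP], and the original TensorFlow implementation differs in detail); `mask_tokens`, `mask_token_id` and `vocab_size` are made-up names, not part of any released code:
```python
import random

def mask_tokens(token_ids, mask_token_id, vocab_size, mlm_probability=0.15):
    """Simplified sketch of the 80/10/10 masking described above."""
    labels = [-100] * len(token_ids)        # -100 marks positions ignored by the MLM loss
    corrupted = list(token_ids)
    for i, token_id in enumerate(token_ids):
        if random.random() < mlm_probability:   # select ~15% of the tokens
            labels[i] = token_id                # the model must recover the original token
            r = random.random()
            if r < 0.8:                         # 80% of the time: replace with [MASK]
                corrupted[i] = mask_token_id
            elif r < 0.9:                       # 10% of the time: replace with a random token
                corrupted[i] = random.randrange(vocab_size)
            # remaining 10%: keep the token unchanged
    return corrupted, labels
```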
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-1-120k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 1 Checkpoint 120k (uncased)
Seed 1 intermediate checkpoint 120k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 1 Checkpoint 120k (uncased)\nSeed 1 intermediate checkpoint 120k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 1 Checkpoint 120k (uncased)\nSeed 1 intermediate checkpoint 120k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 1 Checkpoint 120k (uncased)\nSeed 1 intermediate checkpoint 120k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
… (values of the 768-dimensional embedding vector omitted)
] |
null | null | transformers | # MultiBERTs Seed 1 Checkpoint 1300k (uncased)
Seed 1 intermediate checkpoint 1300k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-1300k')
model = BertModel.from_pretrained("multiberts-seed-1-1300k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
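With a recent version of `transformers`, `output` is a named model output object, so the per-token and pooled features can be read off directly. The attribute names below are the standard `BertModel` output fields and are shown here only as a usage note:
```python
# Per-token features: tensor of shape (batch_size, sequence_length, 768)
token_features = output.last_hidden_state
# Pooled [CLS] feature: tensor of shape (batch_size, 768)
pooled_feature = output.pooler_output
```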
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (an illustrative sketch is given after this list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
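The 80/10/10 replacement scheme above can be sketched in a few lines of Python. This is only an illustration of the procedure described in this section, not the actual pretraining code: it reuses the tokenizer from the usage example and, for brevity, does not exclude special tokens such as `[CLS]` and `[SEP]`, which the real pipeline does.
```python
import random

def mask_tokens(token_ids, tokenizer, mask_prob=0.15):
    """Illustrative BERT-style masking: select 15% of tokens, split 80/10/10."""
    masked, labels = list(token_ids), [-100] * len(token_ids)  # -100 = ignored by the MLM loss
    for i, tok in enumerate(token_ids):
        if random.random() >= mask_prob:
            continue                                    # token is not selected for prediction
        labels[i] = tok                                 # the model must recover the original token
        r = random.random()
        if r < 0.8:
            masked[i] = tokenizer.mask_token_id         # 80%: replace with [MASK]
        elif r < 0.9:
            masked[i] = random.randrange(tokenizer.vocab_size)  # 10%: replace with a random token
        # remaining 10%: keep the original token unchanged
    return masked, labels

ids = tokenizer("The quick brown fox jumps over the lazy dog.")["input_ids"]
masked_ids, labels = mask_tokens(ids, tokenizer)
print(tokenizer.decode(masked_ids))
```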
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
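For reference, these hyperparameters roughly correspond to the PyTorch setup below. This is a hedged sketch rather than the original training script (pretraining was run in TensorFlow on TPUs): `AdamW` stands in as the closest PyTorch analogue of Adam with decoupled weight decay, and `BertForPreTraining` is built from a default BERT-base configuration purely for illustration.
```python
import torch
from transformers import BertConfig, BertForPreTraining, get_linear_schedule_with_warmup

model = BertForPreTraining(BertConfig())  # randomly initialised BERT-base, illustration only

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1e-4,                  # peak learning rate
    betas=(0.9, 0.999),       # beta_1 and beta_2 from this card
    weight_decay=0.01,
)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10_000,       # linear learning-rate warmup
    num_training_steps=2_000_000,  # two million steps in total
)
```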
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-1-1300k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 1 Checkpoint 1300k (uncased)
Seed 1 intermediate checkpoint 1300k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 1 Checkpoint 1300k (uncased)\nSeed 1 intermediate checkpoint 1300k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 1 Checkpoint 1300k (uncased)\nSeed 1 intermediate checkpoint 1300k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 1 Checkpoint 1300k (uncased)\nSeed 1 intermediate checkpoint 1300k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.083903968334198,
-0.003416541265323758,
-0.0020308883395045996,
0.06948014348745346,
0.08571355789899826,
0.00017423275858163834,
0.1110074371099472,
0.05032157152891159,
-0.03265640139579773,
0.023838134482502937,
0.09534205496311188,
0.026146575808525085,
0.041401106864213943,
0.06659316271543503,
0.10001830756664276,
-0.2612813711166382,
0.04978056252002716,
-0.06655297428369522,
0.06048428267240524,
0.07522737979888916,
0.10009707510471344,
-0.06978213787078857,
0.061057403683662415,
0.039128512144088745,
-0.0780213251709938,
-0.016111331060528755,
-0.01646820269525051,
-0.0350676029920578,
0.09965695440769196,
0.06893835216760635,
0.06321351230144501,
0.00002296827733516693,
0.059229880571365356,
-0.09216807782649994,
0.015709057450294495,
0.04447135329246521,
0.0022613657638430595,
0.0237850584089756,
-0.013409607112407684,
0.016330720856785774,
0.10800813883543015,
0.03929448127746582,
0.07881198823451996,
0.03451061248779297,
-0.09590259194374084,
-0.10973859578371048,
-0.08108661323785782,
0.1057942807674408,
0.052404724061489105,
0.04287652671337128,
-0.00429028645157814,
0.07339788973331451,
-0.026148203760385513,
0.07576210796833038,
0.10596446692943573,
-0.24889600276947021,
-0.009256111457943916,
0.07068406045436859,
0.043588072061538696,
0.046455852687358856,
0.012612730264663696,
0.025101443752646446,
0.006328746676445007,
0.04116572439670563,
0.023895002901554108,
-0.022469915449619293,
0.11224263161420822,
-0.046102941036224365,
-0.15130895376205444,
-0.04598810523748398,
0.11818544566631317,
-0.006881179288029671,
-0.12275919318199158,
-0.09385031461715698,
-0.033785611391067505,
0.10499417781829834,
-0.004124332219362259,
-0.01812804490327835,
-0.0041465116664767265,
0.011808857321739197,
0.026423707604408264,
-0.09342417865991592,
-0.08388370275497437,
-0.02761625126004219,
-0.04206133261322975,
0.1263815313577652,
0.04584316164255142,
0.05224120616912842,
-0.03467262536287308,
0.08757973462343216,
-0.12300480902194977,
-0.03875244781374931,
-0.05263179540634155,
-0.07793033123016357,
-0.01886768452823162,
0.005889816675335169,
-0.026093289256095886,
-0.08295566588640213,
-0.05885177105665207,
0.11951076984405518,
0.027223408222198486,
0.030253253877162933,
-0.0022840192541480064,
0.04213094338774681,
0.07317712157964706,
0.09598526358604431,
-0.04134894534945488,
0.054167114198207855,
0.03883766382932663,
-0.025062356144189835,
0.060354575514793396,
-0.049658603966236115,
-0.09892605245113373,
0.07511445879936218,
0.00017311982810497284,
0.03654070571064949,
0.02893301472067833,
0.03428926691412926,
-0.01055773627012968,
-0.06851808726787567,
0.16676369309425354,
-0.07748875766992569,
-0.009289483539760113,
-0.019607162103056908,
0.012280656024813652,
0.043265245854854584,
0.0367225706577301,
-0.0067506711930036545,
-0.046293992549180984,
-0.009651055559515953,
-0.0565413273870945,
-0.026989314705133438,
-0.055696599185466766,
-0.11752787232398987,
0.001683143898844719,
-0.02954438515007496,
-0.03386080265045166,
-0.13965894281864166,
-0.22266076505184174,
-0.01930341124534607,
0.0654313862323761,
-0.0008717235177755356,
-0.013962652534246445,
0.022675491869449615,
0.01615656353533268,
-0.020693253725767136,
0.01206768024712801,
-0.04666038602590561,
0.00010734423995018005,
-0.006257683038711548,
-0.03441985696554184,
0.054519444704055786,
-0.04238080233335495,
0.0213564895093441,
-0.07184737920761108,
0.02344139851629734,
-0.2173619419336319,
0.08977588266134262,
-0.03585324063897133,
0.00427568145096302,
-0.03702894598245621,
-0.04057764261960983,
0.0018663667142391205,
0.04518834501504898,
-0.007999869994819164,
0.11686739325523376,
-0.1278446614742279,
-0.04902247339487076,
0.17837318778038025,
-0.16057279706001282,
-0.0009060315787792206,
0.10153450816869736,
-0.04862307012081146,
0.0583413764834404,
0.131351038813591,
0.09955552220344543,
0.09012958407402039,
-0.0720934197306633,
0.011761453934013844,
0.06712745875120163,
-0.07111990451812744,
0.053720466792583466,
0.08725002408027649,
-0.02554742433130741,
-0.14161251485347748,
0.029804758727550507,
-0.07337847352027893,
-0.0053861141204833984,
-0.027625780552625656,
-0.02207302488386631,
0.005191884934902191,
-0.036824654787778854,
0.02336065098643303,
0.005003881175071001,
0.019435975700616837,
-0.03733957186341286,
-0.08223168551921844,
0.0280503761023283,
0.0757032185792923,
-0.06874851882457733,
0.046755291521549225,
-0.0656869038939476,
0.06047589331865311,
-0.07748593389987946,
-0.005812963470816612,
-0.16672898828983307,
-0.024453695863485336,
0.04378613084554672,
-0.049653515219688416,
0.04815595597028732,
0.08404196053743362,
0.0021859167609363794,
0.11917541176080704,
-0.039818599820137024,
0.008657261729240417,
-0.005906926468014717,
-0.009741236455738544,
-0.05083145946264267,
-0.12004361301660538,
-0.07974958419799805,
-0.07020978629589081,
0.10524684935808182,
-0.07307770103216171,
0.030442167073488235,
-0.07130035012960434,
-0.026136862114071846,
-0.01093224249780178,
-0.06058963015675545,
-0.0035807639360427856,
0.010466090403497219,
-0.02925877273082733,
-0.04649735987186432,
0.04589403420686722,
0.05275273695588112,
-0.06279310584068298,
0.07700217515230179,
-0.10245724767446518,
-0.06772160530090332,
0.05375701189041138,
0.012466305866837502,
-0.0864095389842987,
0.08909134566783905,
-0.020962461829185486,
-0.01479348260909319,
-0.06266561150550842,
-0.04555702582001686,
0.1937258541584015,
-0.023655597120523453,
0.10051393508911133,
-0.09070271253585815,
0.0015213937731459737,
0.03064575418829918,
-0.04806853085756302,
-0.01628740504384041,
0.060955505818128586,
0.05025622621178627,
-0.18077492713928223,
0.016131963580846786,
0.04161856323480606,
0.07145385444164276,
0.10874147713184357,
0.026754621416330338,
-0.02300025150179863,
-0.0445975586771965,
-0.009609568864107132,
0.005339203868061304,
0.053413257002830505,
-0.028949439525604248,
-0.005456727929413319,
0.030449476093053818,
0.05911117047071457,
0.01873796060681343,
-0.08143477141857147,
0.03653756156563759,
0.06925508379936218,
-0.018111295998096466,
-0.04469376057386398,
-0.026667797937989235,
-0.059004832059144974,
0.061033159494400024,
0.05116523802280426,
0.040552835911512375,
0.024722451344132423,
-0.01273423433303833,
-0.13659751415252686,
0.18942254781723022,
-0.11324861645698547,
-0.2527276277542114,
-0.11046572029590607,
-0.056684739887714386,
-0.02411630190908909,
0.039615124464035034,
0.057030148804187775,
-0.02684667892754078,
-0.04048621654510498,
-0.11773169040679932,
0.06099051982164383,
-0.0638260766863823,
-0.02958831377327442,
-0.009017862379550934,
-0.054397083818912506,
-0.019246619194746017,
-0.12697464227676392,
-0.01153242401778698,
-0.02947123348712921,
-0.07713686674833298,
0.003786982037127018,
-0.03669149801135063,
0.02874552458524704,
0.1394219994544983,
0.03135181963443756,
-0.01927010901272297,
-0.0182552020996809,
0.19336088001728058,
0.010457120835781097,
0.06234733760356903,
0.10798951983451843,
-0.025453034788370132,
0.05470637232065201,
0.04815870523452759,
0.02509697712957859,
-0.04482913762331009,
0.012843677774071693,
-0.016392936930060387,
-0.12319336831569672,
-0.16953851282596588,
-0.07195155322551727,
-0.002272991929203272,
0.008991075679659843,
0.018743105232715607,
0.03803394362330437,
0.015729587525129318,
0.04085918888449669,
-0.029156910255551338,
0.029888560995459557,
-0.013151638209819794,
0.08116663992404938,
0.028694219887256622,
-0.07521064579486847,
0.08860678970813751,
-0.0615437813103199,
0.017199302092194557,
0.10940708965063095,
-0.0582779124379158,
0.18571725487709045,
0.021879225969314575,
0.05995051562786102,
0.09946346282958984,
0.019880160689353943,
0.0543573834002018,
0.08582840114831924,
-0.0471840426325798,
0.00526358000934124,
-0.05872160196304321,
-0.05108237266540527,
-0.03439190611243248,
0.04690157622098923,
0.02530808001756668,
0.020138900727033615,
-0.11797142773866653,
0.022727854549884796,
0.0005299574695527554,
0.13008835911750793,
0.043333474546670914,
-0.11840100586414337,
-0.1236528605222702,
0.03204456716775894,
-0.0480043850839138,
-0.06265529990196228,
0.03211628273129463,
0.05401405319571495,
-0.15177419781684875,
0.0452456995844841,
-0.005501261912286282,
0.06744074821472168,
-0.09145498275756836,
0.013811263255774975,
-0.044081445783376694,
-0.00012946687638759613,
0.0028178005013614893,
0.07014837861061096,
-0.12936905026435852,
0.11155571788549423,
0.020113371312618256,
0.04714019596576691,
-0.07981924712657928,
0.016131773591041565,
-0.012329221703112125,
0.11044155806303024,
0.11814562976360321,
0.04628414660692215,
-0.05170855671167374,
-0.01549232006072998,
-0.04726019874215126,
0.019701823592185974,
0.05339726060628891,
-0.0756976455450058,
0.059031032025814056,
0.009849322028458118,
0.008174366317689419,
-0.022347798570990562,
0.015744470059871674,
-0.13240650296211243,
-0.12151336669921875,
0.05968346446752548,
-0.07529821991920471,
-0.10540313273668289,
-0.056061696261167526,
-0.06175882741808891,
-0.05495370179414749,
0.2080027163028717,
-0.1131536066532135,
-0.08861377090215683,
-0.09793500602245331,
-0.005126195028424263,
0.046485818922519684,
-0.0669182687997818,
0.04678136110305786,
-0.039266765117645264,
0.08779722452163696,
-0.051031775772571564,
-0.10675957798957825,
0.03445681557059288,
-0.11174581944942474,
-0.11229445785284042,
-0.044181473553180695,
0.10050852596759796,
0.11266540735960007,
0.037864889949560165,
0.00866929441690445,
0.011273819021880627,
-0.001407373696565628,
-0.11933837085962296,
0.00932493805885315,
0.13484543561935425,
-0.00907842069864273,
0.0701155960559845,
-0.059523239731788635,
0.029523007571697235,
-0.015376130118966103,
-0.0019748955965042114,
0.13162821531295776,
0.19053930044174194,
-0.06610919535160065,
0.17435969412326813,
0.20170480012893677,
-0.10780457407236099,
-0.19333797693252563,
-0.053402092307806015,
0.00042885448783636093,
0.04371215030550957,
0.05034497380256653,
-0.17922556400299072,
0.09311768412590027,
0.039456214755773544,
-0.034746818244457245,
0.015833042562007904,
-0.23817366361618042,
-0.11309881508350372,
0.08950617164373398,
0.05384881794452667,
0.1921662986278534,
-0.0789056047797203,
-0.04038919508457184,
-0.01385004073381424,
-0.03975916653871536,
0.03680834919214249,
-0.03218131512403488,
0.09012375771999359,
0.00845671072602272,
-0.027672799304127693,
0.002841680310666561,
-0.0324641577899456,
0.0951414406299591,
0.04204748943448067,
0.021664761006832123,
-0.07066908478736877,
-0.007447671145200729,
0.11762615293264389,
-0.03847058117389679,
0.10004180669784546,
0.04954998567700386,
0.0766448825597763,
-0.09393022954463959,
-0.0589180625975132,
-0.07575181126594543,
0.047153472900390625,
-0.04230128228664398,
-0.05342266708612442,
-0.06667891889810562,
0.05820475146174431,
0.04098840802907944,
0.008135710842907429,
0.004095299169421196,
-0.03783561289310455,
0.04336344078183174,
0.09408773481845856,
0.07696939259767532,
-0.03133055195212364,
-0.06591024994850159,
-0.04747817665338516,
-0.0490010529756546,
0.06528539210557938,
-0.09101802110671997,
0.01891876757144928,
0.02699616178870201,
0.013799675740301609,
0.08555041253566742,
0.033744342625141144,
-0.13655363023281097,
0.0112642552703619,
0.033046185970306396,
-0.12255300581455231,
-0.10932306945323944,
-0.020699210464954376,
0.03174441307783127,
-0.04007018357515335,
0.052151333540678024,
0.14513276517391205,
-0.03990820422768593,
-0.03121863305568695,
-0.04665076360106468,
0.038457442075014114,
-0.021059084683656693,
0.04826400429010391,
0.06418873369693756,
0.03047557920217514,
-0.07127116620540619,
0.07742199301719666,
0.041628919541835785,
-0.04414337873458862,
0.03906640037894249,
0.04205118864774704,
-0.09321297705173492,
-0.07760505378246307,
-0.057253677397966385,
0.08830930292606354,
-0.02882649563252926,
-0.04177404195070267,
-0.002332117408514023,
-0.0792899802327156,
0.06865012645721436,
0.08307817578315735,
0.048514313995838165,
0.03877079486846924,
-0.08585856109857559,
0.017909524962306023,
-0.05662664771080017,
0.037480585277080536,
-0.02931598387658596,
-0.003482481464743614,
-0.056524597108364105,
0.06539613753557205,
0.06304679065942764,
0.09661351144313812,
-0.034334421157836914,
-0.07682817429304123,
-0.08374010026454926,
-0.011605078354477882,
-0.05105758085846901,
-0.031129315495491028,
-0.08073543012142181,
-0.006857964675873518,
0.0034406157210469246,
-0.001237139105796814,
0.020549196749925613,
0.0359187126159668,
-0.042329706251621246,
-0.017440810799598694,
-0.039111651480197906,
0.03710392862558365,
-0.059730496257543564,
0.007549121975898743,
0.01837526634335518,
-0.0331120528280735,
0.09094174206256866,
0.035358183085918427,
-0.011741538532078266,
0.04177607223391533,
-0.031041977927088737,
0.03991403058171272,
-0.021569013595581055,
-0.00023263785988092422,
-0.02392004430294037,
-0.10848408937454224,
-0.004636665340512991,
0.004427751526236534,
-0.024702690541744232,
0.009959733113646507,
0.06216508150100708,
-0.07413319498300552,
0.08603183180093765,
0.048628225922584534,
-0.03141098842024803,
-0.07310058176517487,
0.03854009136557579,
-0.016606047749519348,
0.028079187497496605,
0.06616724282503128,
-0.03628847748041153,
0.05093253403902054,
-0.099319688975811,
-0.02872222289443016,
0.005084136035293341,
-0.002673674374818802,
-0.00981868989765644,
-0.05185024440288544,
-0.003644145093858242,
0.009436286054551601,
0.17936985194683075,
-0.021495521068572998,
0.033133894205093384,
0.015287015587091446,
0.004773281514644623,
0.048159580677747726,
-0.01594918593764305,
0.07773502171039581,
-0.006291928701102734,
-0.02631516568362713,
-0.012772420421242714,
0.038582365959882736,
0.005367904901504517,
0.004104791209101677,
0.1391194462776184,
0.04427590221166611,
0.09157884865999222,
0.07661944627761841,
0.014395198784768581,
0.02114177495241165,
-0.12566012144088745,
-0.0909334197640419,
0.004487017169594765,
0.05793355032801628,
-0.018837498500943184,
0.00604347325861454,
0.09561778604984283,
-0.08744542300701141,
0.0714646577835083,
0.05210385471582413,
-0.04889487475156784,
-0.1299331784248352,
-0.19943773746490479,
-0.023457646369934082,
-0.0345335379242897,
-0.009942973963916302,
-0.09113460779190063,
0.01756153628230095,
0.09135589003562927,
0.02428433485329151,
-0.006731485016644001,
0.09680017083883286,
-0.1084415465593338,
-0.030578676611185074,
0.040718089789152145,
-0.027205590158700943,
0.01831793785095215,
0.044551849365234375,
0.01927594281733036,
-0.003012297675013542,
0.04192750155925751,
0.03898811340332031,
0.04434420168399811,
0.029791662469506264,
0.053447701036930084,
-0.02411387860774994,
-0.07235491275787354,
-0.03387204185128212,
-0.0049506258219480515,
0.0545937605202198,
0.13504838943481445,
0.022753089666366577,
-0.06817273795604706,
0.006866289768368006,
0.11188282817602158,
-0.031918399035930634,
-0.05101391300559044,
-0.11085039377212524,
0.24438968300819397,
0.02574281580746174,
0.0006270231679081917,
-0.005823070649057627,
-0.04674864187836647,
0.006460832431912422,
0.2119808793067932,
0.22322456538677216,
0.007130656857043505,
-0.009756973944604397,
0.009798389859497547,
-0.01215142011642456,
0.0381513312458992,
0.14338502287864685,
0.0026580002158880234,
0.2475636899471283,
-0.047658391296863556,
0.03351739048957825,
-0.04230164736509323,
-0.038975317031145096,
-0.10273325443267822,
0.06994325667619705,
-0.011563386768102646,
0.008103346452116966,
-0.03418052941560745,
0.07256313413381577,
-0.04252227023243904,
-0.1752307116985321,
-0.0001826239749789238,
-0.0006867193151265383,
-0.0582035593688488,
0.011632060632109642,
0.00034307781606912613,
0.023074839264154434,
0.07915270328521729,
-0.01631873846054077,
-0.004650563932955265,
0.12601618468761444,
0.020135382190346718,
-0.09669122099876404,
-0.06282459199428558,
0.11327819526195526,
0.025202637538313866,
0.14619134366512299,
0.01262980792671442,
0.074518121778965,
0.08710749447345734,
0.021501265466213226,
-0.09814023971557617,
0.043934620916843414,
-0.019496161490678787,
-0.027641378343105316,
0.004847122821956873,
0.10868413746356964,
-0.007893234491348267,
0.06374222785234451,
0.02341647446155548,
-0.09156908094882965,
0.062256451696157455,
0.01035694032907486,
-0.0318710058927536,
-0.08271455764770508,
0.0819908082485199,
-0.0887908786535263,
0.15865489840507507,
0.12269322574138641,
-0.013054854236543179,
-0.0432034507393837,
-0.02899911254644394,
0.015180248767137527,
-0.0018045231699943542,
0.0580044686794281,
-0.025724153965711594,
-0.1348976492881775,
0.01973431557416916,
-0.08437386155128479,
0.02653612568974495,
-0.24200382828712463,
-0.09060591459274292,
0.030373286455869675,
-0.019319016486406326,
-0.01483362540602684,
0.05198513716459274,
0.045913100242614746,
0.02643619105219841,
-0.03517608717083931,
0.021553345024585724,
-0.035629015415906906,
0.06032158434391022,
-0.11195485293865204,
-0.09437514841556549
] |
null | null | transformers | # MultiBERTs Seed 1 Checkpoint 1400k (uncased)
Seed 1 intermediate checkpoint 1400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-1400k')
model = BertModel.from_pretrained("multiberts-seed-1-1400k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
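The same pattern extends to batches of sentences. Below is a minimal sketch assuming a recent `transformers` version; the padding/truncation arguments are standard tokenizer options, and the mean pooling at the end is just one common way to turn token features into a single sentence vector:
```python
import torch

sentences = ["First example sentence.", "A second, slightly longer example sentence."]
batch = tokenizer(sentences, padding=True, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():                            # inference only, no gradients needed
    hidden = model(**batch).last_hidden_state    # (batch_size, seq_len, 768)

# Mean-pool over real (non-padding) tokens to get one 768-dimensional vector per sentence
mask = batch["attention_mask"].unsqueeze(-1).float()   # (batch_size, seq_len, 1)
sentence_vectors = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
```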
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-1-1400k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 1 Checkpoint 1400k (uncased)
Seed 1 intermediate checkpoint 1400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 1 Checkpoint 1400k (uncased)\nSeed 1 intermediate checkpoint 1400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 1 Checkpoint 1400k (uncased)\nSeed 1 intermediate checkpoint 1400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 1 Checkpoint 1400k (uncased)\nSeed 1 intermediate checkpoint 1400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.0861840546131134,
-0.0008109814953058958,
-0.0021786661818623543,
0.06880868971347809,
0.08772853761911392,
0.00283140130341053,
0.11367647349834442,
0.04933295398950577,
-0.033044978976249695,
0.023883532732725143,
0.0949685275554657,
0.024337638169527054,
0.04074009507894516,
0.06282593309879303,
0.09839556366205215,
-0.25804832577705383,
0.049180492758750916,
-0.0646391287446022,
0.05463232845067978,
0.07464001327753067,
0.100138358771801,
-0.07087059319019318,
0.06169356778264046,
0.0398993194103241,
-0.0826263427734375,
-0.01798228733241558,
-0.016003906726837158,
-0.03391323611140251,
0.10154938697814941,
0.070258729159832,
0.06330163776874542,
-0.000673983246088028,
0.05898395553231239,
-0.08880889415740967,
0.016162576153874397,
0.046039730310440063,
0.0007673073559999466,
0.025177938863635063,
-0.011228948831558228,
0.01653601974248886,
0.10585087537765503,
0.0397784560918808,
0.07687339931726456,
0.035870518535375595,
-0.09597155451774597,
-0.12026114761829376,
-0.07797355949878693,
0.10636959969997406,
0.05033353716135025,
0.04206337779760361,
-0.005175773054361343,
0.06956496834754944,
-0.026718245819211006,
0.07391097396612167,
0.1023780107498169,
-0.2521665692329407,
-0.007727359421551228,
0.06405967473983765,
0.04049116373062134,
0.04459889233112335,
0.011081979610025883,
0.025893371552228928,
0.00721614807844162,
0.043233755975961685,
0.028907790780067444,
-0.023123078048229218,
0.11872871220111847,
-0.046145543456077576,
-0.14991623163223267,
-0.04382798448204994,
0.11692886054515839,
-0.007215073332190514,
-0.12454867362976074,
-0.09559925645589828,
-0.03255995362997055,
0.11499388515949249,
-0.004974118433892727,
-0.016399862244725227,
-0.004041952081024647,
0.012363849207758904,
0.02920737862586975,
-0.09413081407546997,
-0.0853034183382988,
-0.02695704624056816,
-0.039676304906606674,
0.130203515291214,
0.046782441437244415,
0.05219540372490883,
-0.03545202314853668,
0.08564774692058563,
-0.11995325982570648,
-0.04114985093474388,
-0.051504798233509064,
-0.07900582253932953,
-0.01830988936126232,
0.008399027399718761,
-0.02688853070139885,
-0.07946629077196121,
-0.058403924107551575,
0.11405517160892487,
0.026607144623994827,
0.029690220952033997,
0.0009977612644433975,
0.04121934622526169,
0.073513463139534,
0.09286367893218994,
-0.04047912731766701,
0.051817189902067184,
0.03687633201479912,
-0.025188665837049484,
0.05764029920101166,
-0.04929890111088753,
-0.10056985914707184,
0.07557299733161926,
0.000880928710103035,
0.036035165190696716,
0.027473758906126022,
0.03342564404010773,
-0.009652659296989441,
-0.06914622336626053,
0.16425195336341858,
-0.07672043144702911,
-0.00888347253203392,
-0.017344903200864792,
0.012217162176966667,
0.044331490993499756,
0.03514884412288666,
-0.008762786164879799,
-0.04772869497537613,
-0.006074484437704086,
-0.05850183218717575,
-0.025417426601052284,
-0.05433374643325806,
-0.11848670989274979,
0.0001224195584654808,
-0.035650841891765594,
-0.03287050127983093,
-0.14205873012542725,
-0.21809253096580505,
-0.019988276064395905,
0.06468016654253006,
-0.0012800567783415318,
-0.011534253135323524,
0.025880027562379837,
0.01701999641954899,
-0.020423198118805885,
0.011867944151163101,
-0.048831045627593994,
-0.00040684081614017487,
-0.005055489018559456,
-0.03338376805186272,
0.05598130077123642,
-0.04196908324956894,
0.023354224860668182,
-0.07027523219585419,
0.02307775244116783,
-0.21596458554267883,
0.09082044661045074,
-0.03411094844341278,
0.004983497783541679,
-0.03766421601176262,
-0.042972587049007416,
0.005295595154166222,
0.04680081456899643,
-0.007272026501595974,
0.11819589138031006,
-0.13144218921661377,
-0.050623416900634766,
0.18000611662864685,
-0.16084644198417664,
-0.0025755129754543304,
0.10165587067604065,
-0.04849853366613388,
0.058328043669462204,
0.13231396675109863,
0.10052095353603363,
0.0898594781756401,
-0.07376410067081451,
0.011662128381431103,
0.06522644311189651,
-0.07114369422197342,
0.053114742040634155,
0.0875440314412117,
-0.02586120367050171,
-0.13907259702682495,
0.029078029096126556,
-0.07222222536802292,
-0.0048361727967858315,
-0.027956780046224594,
-0.020596981048583984,
0.004949189722537994,
-0.037985894829034805,
0.025499314069747925,
0.005836895667016506,
0.019549204036593437,
-0.037421151995658875,
-0.08126721531152725,
0.03392471373081207,
0.07655833661556244,
-0.06932942569255829,
0.04491227865219116,
-0.0676574632525444,
0.061961255967617035,
-0.07733134925365448,
-0.003428802825510502,
-0.16943705081939697,
-0.023325227200984955,
0.04231369495391846,
-0.053575169295072556,
0.04953454062342644,
0.0879095047712326,
0.0024021274875849485,
0.1229284480214119,
-0.040290914475917816,
0.006712773349136114,
-0.005215048789978027,
-0.009785206988453865,
-0.05170828849077225,
-0.12373951077461243,
-0.08017197251319885,
-0.06801418960094452,
0.10521148145198822,
-0.07468670606613159,
0.02983812429010868,
-0.07406450062990189,
-0.024525603279471397,
-0.009943246841430664,
-0.05864548683166504,
-0.005251885391771793,
0.010459384880959988,
-0.02910790964961052,
-0.04670802503824234,
0.047481656074523926,
0.052820105105638504,
-0.06090421974658966,
0.07896421104669571,
-0.10071539133787155,
-0.06170205771923065,
0.054470352828502655,
0.00873822346329689,
-0.0850779265165329,
0.09085705876350403,
-0.01895529218018055,
-0.013889479450881481,
-0.060568735003471375,
-0.0448792427778244,
0.19366976618766785,
-0.023952778428792953,
0.10348758101463318,
-0.09083539247512817,
0.00327379466034472,
0.031122764572501183,
-0.04845891892910004,
-0.015629082918167114,
0.060260169208049774,
0.04773000255227089,
-0.18402698636054993,
0.016091670840978622,
0.04738263785839081,
0.07522915303707123,
0.10880468040704727,
0.025773774832487106,
-0.023309361189603806,
-0.0468796044588089,
-0.012500252574682236,
0.004254275932908058,
0.05545666813850403,
-0.030167408287525177,
-0.007575527764856815,
0.03167541325092316,
0.05905034393072128,
0.018346618860960007,
-0.08058416843414307,
0.036861538887023926,
0.06939016282558441,
-0.01652924343943596,
-0.045502208173274994,
-0.02777823992073536,
-0.05886996537446976,
0.06227123737335205,
0.05180182307958603,
0.040853265672922134,
0.024518420919775963,
-0.012254714034497738,
-0.13596051931381226,
0.1896899938583374,
-0.11436131596565247,
-0.254627525806427,
-0.11184997856616974,
-0.05832785367965698,
-0.024363527074456215,
0.04130227863788605,
0.05885179340839386,
-0.028997473418712616,
-0.041013021022081375,
-0.1140524223446846,
0.06328187882900238,
-0.06300240755081177,
-0.029393905773758888,
-0.010243648663163185,
-0.0531892329454422,
-0.016838056966662407,
-0.12722666561603546,
-0.011628394946455956,
-0.029825512319803238,
-0.06850641220808029,
0.0051732612773776054,
-0.03406980261206627,
0.028246764093637466,
0.13922062516212463,
0.03235089033842087,
-0.02012067288160324,
-0.016427170485258102,
0.19775117933750153,
0.009786698967218399,
0.06195719912648201,
0.11220163106918335,
-0.026725757867097855,
0.0550147220492363,
0.04560266062617302,
0.02693035453557968,
-0.04564881697297096,
0.01110822707414627,
-0.016928143799304962,
-0.12063878774642944,
-0.1692684292793274,
-0.07247187942266464,
-0.0029811421409249306,
0.00468516256660223,
0.01954680122435093,
0.03653537109494209,
0.02104567363858223,
0.039612703025341034,
-0.02863159589469433,
0.031058238819241524,
-0.0158073753118515,
0.08087800443172455,
0.032059356570243835,
-0.07507631182670593,
0.09010651707649231,
-0.06193159893155098,
0.0167708620429039,
0.11015114933252335,
-0.06444857269525528,
0.19024541974067688,
0.022536437958478928,
0.06141364574432373,
0.10118220746517181,
0.02056792378425598,
0.055283185094594955,
0.08596275746822357,
-0.04746398329734802,
0.005209828726947308,
-0.059499047696590424,
-0.051676735281944275,
-0.03448537737131119,
0.047291696071624756,
0.025237642228603363,
0.018193434923887253,
-0.1193288043141365,
0.020730793476104736,
-0.0017831310397014022,
0.13517650961875916,
0.046158161014318466,
-0.11800431460142136,
-0.12623485922813416,
0.03269224241375923,
-0.04655638337135315,
-0.06493479758501053,
0.02927326411008835,
0.0592794306576252,
-0.15022462606430054,
0.043483659625053406,
-0.00381636805832386,
0.06874971091747284,
-0.08999411016702652,
0.01586362160742283,
-0.043768104165792465,
0.0005185976624488831,
0.0022871445398777723,
0.06925144046545029,
-0.13090181350708008,
0.10920126736164093,
0.021217279136180878,
0.04847421497106552,
-0.0787573754787445,
0.015004017390310764,
-0.010839343070983887,
0.11581530421972275,
0.11520726978778839,
0.047025151550769806,
-0.04777806997299194,
-0.016711188480257988,
-0.04467612877488136,
0.0182340070605278,
0.05594947189092636,
-0.0754714161157608,
0.059358060359954834,
0.00894123874604702,
0.007717409636825323,
-0.02235516905784607,
0.0157177597284317,
-0.13074952363967896,
-0.12025545537471771,
0.05944802612066269,
-0.07533812522888184,
-0.1014258861541748,
-0.05608820915222168,
-0.06233341246843338,
-0.052319154143333435,
0.20487192273139954,
-0.1123208999633789,
-0.09005644172430038,
-0.09899761527776718,
-0.009232960641384125,
0.045817404985427856,
-0.06714970618486404,
0.047427333891391754,
-0.03939034789800644,
0.08942336589097977,
-0.048800691962242126,
-0.10760785639286041,
0.0330459289252758,
-0.1121271401643753,
-0.11128534376621246,
-0.044570259749889374,
0.10181091725826263,
0.11216868460178375,
0.03761335462331772,
0.01054341159760952,
0.01043747365474701,
-0.003506094217300415,
-0.11861017346382141,
0.01234409585595131,
0.12781408429145813,
-0.0011474508792161942,
0.06971856951713562,
-0.060718096792697906,
0.026445727795362473,
-0.015488913282752037,
-0.0007042717188596725,
0.13043028116226196,
0.18840757012367249,
-0.06647232919931412,
0.17199742794036865,
0.20748469233512878,
-0.10592080652713776,
-0.19158193469047546,
-0.05349721014499664,
0.00023333914577960968,
0.04404184967279434,
0.046605341136455536,
-0.18333211541175842,
0.09367555379867554,
0.035329025238752365,
-0.03307574614882469,
0.02181078866124153,
-0.2335284948348999,
-0.11307945847511292,
0.0904756635427475,
0.05291798710823059,
0.1935122013092041,
-0.0805426836013794,
-0.0402386337518692,
-0.01594414748251438,
-0.04072922468185425,
0.038930974900722504,
-0.03286770358681679,
0.0907391905784607,
0.007915688678622246,
-0.029432201758027077,
0.002677062526345253,
-0.03142349421977997,
0.09704956412315369,
0.04333680123090744,
0.023312103003263474,
-0.0705324113368988,
-0.005633581429719925,
0.11501965671777725,
-0.03802748769521713,
0.09932812303304672,
0.04796678572893143,
0.07630565762519836,
-0.09815983474254608,
-0.05893082916736603,
-0.07539596408605576,
0.04517470300197601,
-0.041981980204582214,
-0.05399284139275551,
-0.0643516257405281,
0.05788487568497658,
0.03759046271443367,
0.008469067513942719,
0.000621279701590538,
-0.03856797143816948,
0.04373690113425255,
0.09048067033290863,
0.07470826059579849,
-0.035256169736385345,
-0.07356403768062592,
-0.048907503485679626,
-0.048649996519088745,
0.06697333604097366,
-0.09723510593175888,
0.020174216479063034,
0.026928644627332687,
0.013598740100860596,
0.08951996266841888,
0.03321515768766403,
-0.1358587145805359,
0.010923339053988457,
0.03353774547576904,
-0.12319128215312958,
-0.11120223999023438,
-0.020165398716926575,
0.026972535997629166,
-0.03752187639474869,
0.053479958325624466,
0.14546921849250793,
-0.0386333242058754,
-0.030777789652347565,
-0.04786982387304306,
0.03663165122270584,
-0.021219570189714432,
0.049149658530950546,
0.06664998084306717,
0.031627994030714035,
-0.07291019707918167,
0.07523399591445923,
0.0409780889749527,
-0.04017869010567665,
0.03954857587814331,
0.041353147476911545,
-0.09484052658081055,
-0.07890430837869644,
-0.058445289731025696,
0.09202928841114044,
-0.024800067767500877,
-0.045823656022548676,
-0.0012939218431711197,
-0.08221706748008728,
0.0692741647362709,
0.0784238651394844,
0.04818636551499367,
0.037791140377521515,
-0.08707685768604279,
0.01561032049357891,
-0.05588558316230774,
0.03633173182606697,
-0.028712650761008263,
-0.005057228729128838,
-0.05639098584651947,
0.06926824897527695,
0.06445099413394928,
0.09983692318201065,
-0.034904494881629944,
-0.07661524415016174,
-0.08256474882364273,
-0.01308563444763422,
-0.06505292654037476,
-0.03094945102930069,
-0.07801521569490433,
-0.006863501388579607,
0.0025495667941868305,
-0.0034350641071796417,
0.022428009659051895,
0.035724788904190063,
-0.042341481894254684,
-0.017316116020083427,
-0.03526915982365608,
0.03700999915599823,
-0.061274029314517975,
0.006843944080173969,
0.01578732393682003,
-0.03436661511659622,
0.09297723323106766,
0.038169410079717636,
-0.012624256312847137,
0.041975028812885284,
-0.018619243055582047,
0.03825507313013077,
-0.020891310647130013,
-0.0004071854054927826,
-0.02263549342751503,
-0.10952617228031158,
-0.0040654405020177364,
0.0029188450425863266,
-0.024495741352438927,
0.009853988885879517,
0.0606534369289875,
-0.07245378196239471,
0.08874434977769852,
0.04761168360710144,
-0.030830103904008865,
-0.07272157818078995,
0.03842462971806526,
-0.014877403154969215,
0.028116125613451004,
0.06943461298942566,
-0.03415039926767349,
0.052345797419548035,
-0.09931434690952301,
-0.02906421199440956,
0.004251133184880018,
-0.0036978907883167267,
-0.007906809449195862,
-0.05452190712094307,
-0.004473219625651836,
0.00878800917416811,
0.17323152720928192,
-0.022133972495794296,
0.035975001752376556,
0.014724251814186573,
0.008899904787540436,
0.042705319821834564,
-0.014684999361634254,
0.07349707186222076,
-0.0071543678641319275,
-0.02664339914917946,
-0.014862009324133396,
0.03971366956830025,
0.006282482296228409,
0.00747496634721756,
0.1367989480495453,
0.047279227524995804,
0.09113284200429916,
0.07476597279310226,
0.01379444170743227,
0.0181979238986969,
-0.1321788728237152,
-0.08803458511829376,
0.004331331700086594,
0.057927340269088745,
-0.01803101785480976,
0.01202671229839325,
0.09260823577642441,
-0.08591389656066895,
0.07216111570596695,
0.04992229491472244,
-0.048538938164711,
-0.1263064742088318,
-0.1902759075164795,
-0.02261810190975666,
-0.032435089349746704,
-0.012052983045578003,
-0.0900360643863678,
0.016964606940746307,
0.09085747599601746,
0.02395503595471382,
-0.009394505992531776,
0.0952858030796051,
-0.1065499559044838,
-0.03257206827402115,
0.04242357239127159,
-0.026579881086945534,
0.017481982707977295,
0.045398637652397156,
0.021640796214342117,
-0.003209257498383522,
0.04212818294763565,
0.03927357494831085,
0.04545130208134651,
0.028271200135350227,
0.053338512778282166,
-0.025386307388544083,
-0.07346636801958084,
-0.033321063965559006,
-0.003176523372530937,
0.05340016633272171,
0.13578732311725616,
0.02137625589966774,
-0.06725845485925674,
0.00605924054980278,
0.11294442415237427,
-0.03058401122689247,
-0.052957430481910706,
-0.10970048606395721,
0.24323564767837524,
0.02397388592362404,
0.0016263124998658895,
-0.0053156958892941475,
-0.04604531452059746,
0.006226293742656708,
0.21308556199073792,
0.22230474650859833,
0.0038137631490826607,
-0.009828216396272182,
0.010775032453238964,
-0.01235099695622921,
0.038315001875162125,
0.14537110924720764,
0.0033715981990098953,
0.25093019008636475,
-0.04607014358043671,
0.037475138902664185,
-0.044569119811058044,
-0.037902794778347015,
-0.10247562825679779,
0.07421497255563736,
-0.011481305584311485,
0.007427666336297989,
-0.03351902216672897,
0.07163957506418228,
-0.039978835731744766,
-0.17907148599624634,
0.0025010304525494576,
-0.0006615696474909782,
-0.0596955269575119,
0.010787674225866795,
-0.001665695570409298,
0.023129122331738472,
0.08066537231206894,
-0.016257047653198242,
-0.005573613103479147,
0.13238497078418732,
0.01853906363248825,
-0.09625747054815292,
-0.06339660286903381,
0.11494698375463486,
0.018234599381685257,
0.14215439558029175,
0.011473286896944046,
0.0754212960600853,
0.08663897961378098,
0.022115271538496017,
-0.09753742814064026,
0.04125228896737099,
-0.021584684029221535,
-0.02771909348666668,
0.00553670572116971,
0.10684775561094284,
-0.007252569310367107,
0.055081140249967575,
0.02460133656859398,
-0.09370289742946625,
0.060098327696323395,
0.011980682611465454,
-0.03555874153971672,
-0.080825574696064,
0.0812576487660408,
-0.08815287053585052,
0.15662401914596558,
0.12108947336673737,
-0.013627523556351662,
-0.04475143551826477,
-0.028583532199263573,
0.016666220501065254,
0.0013574711047112942,
0.055937957018613815,
-0.02626689150929451,
-0.13567596673965454,
0.021245500072836876,
-0.08685877919197083,
0.02673041820526123,
-0.2467307150363922,
-0.08990371227264404,
0.031713586300611496,
-0.017588023096323013,
-0.014444176107645035,
0.05061861500144005,
0.04568304494023323,
0.028143804520368576,
-0.03550620377063751,
0.01814393699169159,
-0.03636336699128151,
0.060482852160930634,
-0.10884471237659454,
-0.09462999552488327
] |
null | null | transformers | # MultiBERTs Seed 1 Checkpoint 140k (uncased)
Seed 1 intermediate checkpoint 140k of the MultiBERTs (pretrained BERT) model, trained on English with a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
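As a toy illustration of that last point, the sketch below fits a scikit-learn classifier on `[CLS]` features from this checkpoint. The example texts, labels, and the choice of scikit-learn are illustrative assumptions, not part of the MultiBERTs release.

```python
import torch
from sklearn.linear_model import LogisticRegression
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-1-140k')
model = BertModel.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-1-140k')

texts = ["a wonderful film", "a dreadful film"]  # toy labelled sentences
labels = [1, 0]

with torch.no_grad():
    enc = tokenizer(texts, padding=True, return_tensors='pt')
    feats = model(**enc).last_hidden_state[:, 0]  # [CLS] vectors as sentence features

clf = LogisticRegression().fit(feats.numpy(), labels)  # any standard classifier works here
```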
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# Load the tokenizer and encoder weights of this intermediate checkpoint
# (the full Hub id of this repository).
tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-1-140k')
model = BertModel.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-1-140k')

text = "Replace me by any text you'd like."
# Tokenize into PyTorch tensors and run a forward pass to get the hidden states.
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
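The returned object exposes `output.last_hidden_state` (one hidden vector per input token) and `output.pooler_output` (a single vector for the whole input), either of which can serve as features for downstream models.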
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of the bias of this particular
checkpoint, please try it with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
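As a minimal sketch of that probe, assuming the MLM head stored in this pretraining checkpoint loads through the `fill-mask` pipeline, you could run:

```python
from transformers import pipeline

# Mirrors the bias snippet referenced above; outputs are checkpoint-dependent.
unmasker = pipeline('fill-mask', model='MultiBertGunjanPatrick/multiberts-seed-1-140k')
print(unmasker("The man worked as a [MASK]."))
print(unmasker("The woman worked as a [MASK]."))
```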
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the combined length of the two
"sentences" is less than 512 tokens.
The details of the masking procedure for each sentence are the following (a toy sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
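The toy sketch below re-implements that 80/10/10 rule over a list of tokens; it is only an illustration of the rule described above, with `vocab` standing in for any list of candidate replacement tokens.

```python
import random

def mask_tokens(tokens, vocab, mask_token='[MASK]', mlm_prob=0.15):
    """Toy version of the masking procedure described above."""
    masked, targets = [], []
    for tok in tokens:
        if random.random() < mlm_prob:          # 15% of tokens are selected
            targets.append(tok)                 # the original token is the prediction target
            roll = random.random()
            if roll < 0.8:                      # 80%: replace with [MASK]
                masked.append(mask_token)
            elif roll < 0.9:                    # 10%: replace with a different random token
                masked.append(random.choice([t for t in vocab if t != tok]))
            else:                               # 10%: leave the token unchanged
                masked.append(tok)
        else:
            targets.append(None)                # not a prediction target
            masked.append(tok)
    return masked, targets
```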
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
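The original runs used TPUs; the snippet below is only a rough PyTorch sketch of the optimizer and schedule described above (AdamW stands in for Adam with decoupled weight decay, and the step counts are taken from the text):

```python
import torch
from transformers import BertForPreTraining, get_linear_schedule_with_warmup

model = BertForPreTraining.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-1-140k')

# Adam with lr 1e-4, betas (0.9, 0.999) and weight decay 0.01, as described above.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01)

# 10,000 warmup steps, then linear decay over the two million total steps.
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=2_000_000
)
```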
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-1-140k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 1 Checkpoint 140k (uncased)
Seed 1 intermediate checkpoint 140k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 1 Checkpoint 140k (uncased)\nSeed 1 intermediate checkpoint 140k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 1 Checkpoint 140k (uncased)\nSeed 1 intermediate checkpoint 140k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 1 Checkpoint 140k (uncased)\nSeed 1 intermediate checkpoint 140k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08634667098522186,
-0.002299567451700568,
-0.002197480294853449,
0.06730733066797256,
0.08833342045545578,
0.002692310605198145,
0.11488644033670425,
0.04947394132614136,
-0.03223432973027229,
0.024751463904976845,
0.09504610300064087,
0.028851564973592758,
0.04057416692376137,
0.0642801970243454,
0.09763278067111969,
-0.25755077600479126,
0.050003841519355774,
-0.06398884952068329,
0.05508697032928467,
0.07487665116786957,
0.10079481452703476,
-0.07108107209205627,
0.061278752982616425,
0.03981436789035797,
-0.08344665914773941,
-0.018686695024371147,
-0.015366712585091591,
-0.033033691346645355,
0.10072776675224304,
0.07065315544605255,
0.06284137815237045,
-0.00009243935346603394,
0.05801600217819214,
-0.09053575992584229,
0.01642695628106594,
0.046742282807826996,
0.001023410353809595,
0.02550598606467247,
-0.010396473109722137,
0.016055012121796608,
0.10589660704135895,
0.038614898920059204,
0.07627396285533905,
0.03632967546582222,
-0.09531988948583603,
-0.12185084074735641,
-0.07795367389917374,
0.10356080532073975,
0.05027196928858757,
0.041690461337566376,
-0.005639568902552128,
0.0706370621919632,
-0.026773519814014435,
0.0742134153842926,
0.10466648638248444,
-0.2558799386024475,
-0.007381506264209747,
0.06595271080732346,
0.04274772107601166,
0.04093925654888153,
0.011614518240094185,
0.027439573779702187,
0.0071111395955085754,
0.04281150549650192,
0.03011910989880562,
-0.02328645810484886,
0.12052370607852936,
-0.0467338040471077,
-0.15103358030319214,
-0.043476954102516174,
0.11927024275064468,
-0.00665627233684063,
-0.1261497437953949,
-0.09717787802219391,
-0.03170325607061386,
0.11870652437210083,
-0.005051296669989824,
-0.016525018960237503,
-0.004619479179382324,
0.013193492777645588,
0.02888130396604538,
-0.0935588851571083,
-0.08641044795513153,
-0.027893174439668655,
-0.03847871348261833,
0.12970967590808868,
0.046963952481746674,
0.05197926610708237,
-0.03466163948178291,
0.08596092462539673,
-0.1186167299747467,
-0.04059514403343201,
-0.05185756832361221,
-0.0794401615858078,
-0.0186396986246109,
0.008364859037101269,
-0.027997661381959915,
-0.08301359415054321,
-0.056917283684015274,
0.11370086669921875,
0.028870292007923126,
0.029848210513591766,
-0.0003197472542524338,
0.041666001081466675,
0.0749562457203865,
0.0950096920132637,
-0.03941873461008072,
0.05392465740442276,
0.03715299814939499,
-0.02552647888660431,
0.057728253304958344,
-0.049797940999269485,
-0.10151249170303345,
0.07708393037319183,
0.0017255926504731178,
0.03709183633327484,
0.02586376667022705,
0.03397725149989128,
-0.011720017530024052,
-0.07059477269649506,
0.16593745350837708,
-0.07684636116027832,
-0.008420835249125957,
-0.01643160544335842,
0.013233333826065063,
0.04692130535840988,
0.03387696295976639,
-0.008891699835658073,
-0.04722844064235687,
-0.006763949058949947,
-0.05768547207117081,
-0.024823445826768875,
-0.05438319966197014,
-0.1193707212805748,
-0.00010944809764623642,
-0.036562658846378326,
-0.03248029574751854,
-0.14163494110107422,
-0.2161192148923874,
-0.019621048122644424,
0.06471414864063263,
-0.0014976970851421356,
-0.010918323881924152,
0.02545822411775589,
0.017352288588881493,
-0.019584767520427704,
0.012396290898323059,
-0.04702901467680931,
-0.0011831233277916908,
-0.0051958560943603516,
-0.032694555819034576,
0.05596413463354111,
-0.04120384901762009,
0.023262184113264084,
-0.06953547149896622,
0.02385537140071392,
-0.21119418740272522,
0.09166146069765091,
-0.034874092787504196,
0.0019651800394058228,
-0.03845886141061783,
-0.04268593713641167,
0.007089689373970032,
0.046783432364463806,
-0.007702065631747246,
0.11712951958179474,
-0.13332660496234894,
-0.05076321214437485,
0.1847842037677765,
-0.16069528460502625,
-0.0017563514411449432,
0.10186747461557388,
-0.04910460487008095,
0.05858074128627777,
0.13239064812660217,
0.09794009476900101,
0.0879967138171196,
-0.07575435936450958,
0.011954854242503643,
0.06558916717767715,
-0.07046964764595032,
0.05615905672311783,
0.08822738379240036,
-0.02466963604092598,
-0.14023777842521667,
0.029250774532556534,
-0.07215526700019836,
-0.0056549785658717155,
-0.028410352766513824,
-0.019679265096783638,
0.004008498042821884,
-0.03763170912861824,
0.02691975235939026,
0.005330315791070461,
0.01893055997788906,
-0.037193089723587036,
-0.08214076608419418,
0.03402017056941986,
0.07644013315439224,
-0.07111900299787521,
0.04374218359589577,
-0.06854128837585449,
0.062138162553310394,
-0.0782979428768158,
-0.0030221762135624886,
-0.16976585984230042,
-0.023502513766288757,
0.04429016262292862,
-0.051403604447841644,
0.0494794026017189,
0.09102112799882889,
0.0014203101163730025,
0.12324100732803345,
-0.03896795213222504,
0.006397276185452938,
-0.0060860347002744675,
-0.010356387123465538,
-0.051327720284461975,
-0.12423667311668396,
-0.08177059888839722,
-0.06902241706848145,
0.10457081347703934,
-0.07582342624664307,
0.029544511809945107,
-0.0737985149025917,
-0.024865461513400078,
-0.00992671214044094,
-0.058668289333581924,
-0.004053985700011253,
0.010353242978453636,
-0.029735660180449486,
-0.04717249795794487,
0.047572799026966095,
0.05239352211356163,
-0.06189644709229469,
0.0792321264743805,
-0.10260028392076492,
-0.06271901726722717,
0.0539243221282959,
0.008332591503858566,
-0.08437951654195786,
0.08958281576633453,
-0.018907664343714714,
-0.013849994167685509,
-0.06047840416431427,
-0.04440882429480553,
0.1930343508720398,
-0.024008143693208694,
0.10315665602684021,
-0.09093375504016876,
0.00256867753341794,
0.030732857063412666,
-0.04895980656147003,
-0.01601356640458107,
0.05996227636933327,
0.04892819747328758,
-0.18853877484798431,
0.01622609794139862,
0.049713097512722015,
0.07573126256465912,
0.11019657552242279,
0.025277389213442802,
-0.02433454804122448,
-0.04709286987781525,
-0.010646122507750988,
0.005219781771302223,
0.05583507940173149,
-0.0311141237616539,
-0.007712998893111944,
0.032191433012485504,
0.058417558670043945,
0.01787017472088337,
-0.08116503059864044,
0.036803171038627625,
0.06917376071214676,
-0.017722390592098236,
-0.0413823276758194,
-0.027222273871302605,
-0.059149883687496185,
0.062360797077417374,
0.05111256241798401,
0.041142918169498444,
0.025065148249268532,
-0.012151695787906647,
-0.13600386679172516,
0.1907092034816742,
-0.11461950093507767,
-0.2569468915462494,
-0.10986016690731049,
-0.056750111281871796,
-0.024073245003819466,
0.041637569665908813,
0.05867726355791092,
-0.02816995605826378,
-0.04131112992763519,
-0.11483999341726303,
0.06456086039543152,
-0.06482568383216858,
-0.02979796566069126,
-0.010644903406500816,
-0.053620465099811554,
-0.017325300723314285,
-0.12788471579551697,
-0.011951372027397156,
-0.028472326695919037,
-0.07102272659540176,
0.006072004325687885,
-0.033974602818489075,
0.02796836569905281,
0.13892650604248047,
0.03311378136277199,
-0.020201925188302994,
-0.016925733536481857,
0.19728559255599976,
0.010963983833789825,
0.06101144105195999,
0.11332307755947113,
-0.02732659876346588,
0.05486826226115227,
0.041705548763275146,
0.02608952671289444,
-0.04693302884697914,
0.011413821950554848,
-0.016011804342269897,
-0.12091842293739319,
-0.16960254311561584,
-0.07184714823961258,
-0.00274943420663476,
0.005724187940359116,
0.018249398097395897,
0.03655438870191574,
0.021542958915233612,
0.040198903530836105,
-0.029794948175549507,
0.03035866841673851,
-0.015456423163414001,
0.08134347945451736,
0.03189084306359291,
-0.07581939548254013,
0.09138648957014084,
-0.06194775924086571,
0.017019789665937424,
0.11015385389328003,
-0.06351212412118912,
0.18806859850883484,
0.023994725197553635,
0.061597369611263275,
0.10241836309432983,
0.020461980253458023,
0.05510149151086807,
0.08675919473171234,
-0.048071976751089096,
0.0047910744324326515,
-0.060205914080142975,
-0.05152082070708275,
-0.03506261110305786,
0.04703515022993088,
0.0263972245156765,
0.016739223152399063,
-0.11958852410316467,
0.02034962922334671,
-0.002831940073519945,
0.13541865348815918,
0.048469021916389465,
-0.11702214181423187,
-0.12639105319976807,
0.032536450773477554,
-0.04551845043897629,
-0.06546497344970703,
0.028416115790605545,
0.059062711894512177,
-0.15146958827972412,
0.044152695685625076,
-0.005089763551950455,
0.0689145103096962,
-0.09117735922336578,
0.015311774797737598,
-0.04469379037618637,
0.001127961091697216,
0.0031018981244415045,
0.07053465396165848,
-0.134242981672287,
0.10574755072593689,
0.02118459716439247,
0.047779474407434464,
-0.08023564517498016,
0.01593855768442154,
-0.012265042401850224,
0.11352504789829254,
0.11314554512500763,
0.0455736368894577,
-0.05051854997873306,
-0.019086269661784172,
-0.0451396219432354,
0.018710453063249588,
0.05797753483057022,
-0.07643862068653107,
0.059936974197626114,
0.008106869645416737,
0.0078068082220852375,
-0.021934017539024353,
0.018073156476020813,
-0.13335056602954865,
-0.12132621556520462,
0.0596994012594223,
-0.0741526260972023,
-0.10063179582357407,
-0.05611099302768707,
-0.06206187978386879,
-0.04998981952667236,
0.20899778604507446,
-0.11427056789398193,
-0.08929850161075592,
-0.09865236282348633,
-0.008600186556577682,
0.04525384679436684,
-0.06613704562187195,
0.04714701324701309,
-0.041061315685510635,
0.08996221423149109,
-0.04827705770730972,
-0.10889385640621185,
0.03353450074791908,
-0.1128125786781311,
-0.11367437243461609,
-0.04484935104846954,
0.10328538715839386,
0.11277180165052414,
0.03704457730054855,
0.012305023148655891,
0.009619173593819141,
-0.0020120926201343536,
-0.11856162548065186,
0.01341962069272995,
0.12920977175235748,
-0.0010873284190893173,
0.07009831070899963,
-0.0621170848608017,
0.027907636016607285,
-0.015318861231207848,
-0.0010685529559850693,
0.13215214014053345,
0.18829089403152466,
-0.0662560909986496,
0.17328807711601257,
0.20266538858413696,
-0.10582632571458817,
-0.18995368480682373,
-0.0552084743976593,
0.0016116881743073463,
0.044533614069223404,
0.0480138435959816,
-0.18592768907546997,
0.09288342297077179,
0.034750133752822876,
-0.03279629349708557,
0.02564636990427971,
-0.2347896546125412,
-0.11167917400598526,
0.08949130773544312,
0.05406390503048897,
0.19342711567878723,
-0.08252090215682983,
-0.03968888521194458,
-0.016446534544229507,
-0.03838524967432022,
0.044785916805267334,
-0.03268560767173767,
0.09229536354541779,
0.008360529318451881,
-0.030747277662158012,
0.0028073694556951523,
-0.03074214980006218,
0.097850501537323,
0.041004277765750885,
0.02351681888103485,
-0.07017254829406738,
-0.0030990634113550186,
0.11661175638437271,
-0.037215836346149445,
0.10038913786411285,
0.04368627071380615,
0.07622279226779938,
-0.09766216576099396,
-0.06025254353880882,
-0.07438164949417114,
0.045304521918296814,
-0.04218840226531029,
-0.05384279042482376,
-0.06393154710531235,
0.05616705119609833,
0.03648284450173378,
0.009283101186156273,
0.0006237197667360306,
-0.038527555763721466,
0.043843816965818405,
0.089911550283432,
0.07592960447072983,
-0.03201717138290405,
-0.07389627397060394,
-0.05126435309648514,
-0.04827134311199188,
0.06819756329059601,
-0.09538398683071136,
0.02062268555164337,
0.025294629856944084,
0.012299219146370888,
0.09025888890028,
0.0327620804309845,
-0.1370713710784912,
0.01197580061852932,
0.03225717693567276,
-0.12337256968021393,
-0.11179852485656738,
-0.018949516117572784,
0.029074139893054962,
-0.036705948412418365,
0.05520666018128395,
0.14675909280776978,
-0.03678090497851372,
-0.03136666491627693,
-0.048665985465049744,
0.03660504147410393,
-0.020855281502008438,
0.04962945356965065,
0.06555482000112534,
0.031834881752729416,
-0.07210475206375122,
0.07379800826311111,
0.04051518067717552,
-0.038093335926532745,
0.04126501455903053,
0.03776811063289642,
-0.09357860684394836,
-0.07922990620136261,
-0.058068614453077316,
0.0917220488190651,
-0.023061608895659447,
-0.046667490154504776,
-0.00024224817752838135,
-0.0820649191737175,
0.06915116310119629,
0.07877492159605026,
0.04860170558094978,
0.04017287865281105,
-0.08699602633714676,
0.015561560168862343,
-0.055599942803382874,
0.03547211363911629,
-0.028380660340189934,
-0.005051732063293457,
-0.05741768330335617,
0.06832271814346313,
0.06507045775651932,
0.10039971023797989,
-0.035159237682819366,
-0.07747653126716614,
-0.08288835734128952,
-0.01418578252196312,
-0.06638776510953903,
-0.02935372292995453,
-0.07730720937252045,
-0.007519262842833996,
0.003244006773456931,
-0.00269145704805851,
0.024701861664652824,
0.03619450703263283,
-0.041770435869693756,
-0.017752783372998238,
-0.03660814091563225,
0.03735755383968353,
-0.06254284828901291,
0.006163354963064194,
0.015258664265275002,
-0.035459667444229126,
0.09312278032302856,
0.037934042513370514,
-0.01277473196387291,
0.04206889867782593,
-0.025163399055600166,
0.03993501514196396,
-0.019293852150440216,
-0.0006473199464380741,
-0.024807745590806007,
-0.10973940044641495,
-0.00363099854439497,
0.0031979847699403763,
-0.02450987696647644,
0.009179693646728992,
0.06189940497279167,
-0.07282508909702301,
0.08665554970502853,
0.046858206391334534,
-0.030782300978899002,
-0.0711309164762497,
0.03858235105872154,
-0.016382133588194847,
0.029184285551309586,
0.06881269812583923,
-0.034693922847509384,
0.053405195474624634,
-0.09927217662334442,
-0.02915439009666443,
0.004094940610229969,
-0.0031870491802692413,
-0.008941786363720894,
-0.05462627112865448,
-0.0044013867154717445,
0.008385214023292065,
0.1741752326488495,
-0.021404847502708435,
0.03803923726081848,
0.012989290989935398,
0.007580452598631382,
0.04881693795323372,
-0.013834655284881592,
0.07394245266914368,
-0.007340473122894764,
-0.02521997131407261,
-0.015466800890862942,
0.03956035524606705,
0.006185078993439674,
0.006465505808591843,
0.13595104217529297,
0.046994857490062714,
0.08502647280693054,
0.07449668645858765,
0.012601976282894611,
0.017230045050382614,
-0.13186073303222656,
-0.0890144556760788,
0.004494825378060341,
0.05825258046388626,
-0.01863132044672966,
0.015891004353761673,
0.09050644934177399,
-0.08595891296863556,
0.07146910578012466,
0.05089670419692993,
-0.04896073043346405,
-0.12683898210525513,
-0.1954270899295807,
-0.023931141942739487,
-0.03140716254711151,
-0.01205403171479702,
-0.09014545381069183,
0.016648538410663605,
0.08880173414945602,
0.0237836092710495,
-0.010234938003122807,
0.09382958710193634,
-0.10321465879678726,
-0.032797154039144516,
0.04380984604358673,
-0.027101092040538788,
0.01594662293791771,
0.045968301594257355,
0.022457636892795563,
-0.0020170696079730988,
0.04272136464715004,
0.04024915024638176,
0.04559316858649254,
0.028933119028806686,
0.05373634025454521,
-0.02605566941201687,
-0.07429386675357819,
-0.033074717968702316,
-0.0031685177236795425,
0.05399671569466591,
0.13578346371650696,
0.02215840481221676,
-0.06775709241628647,
0.005774092860519886,
0.11153572797775269,
-0.02902229316532612,
-0.05160439759492874,
-0.10935340821743011,
0.24291184544563293,
0.02189660631120205,
0.0021008173935115337,
-0.004951347131282091,
-0.045405153185129166,
0.006489474326372147,
0.2109975814819336,
0.22090017795562744,
0.0033785123378038406,
-0.009315515868365765,
0.009896724484860897,
-0.012484433129429817,
0.03865664824843407,
0.1462540179491043,
0.002860408276319504,
0.25191664695739746,
-0.046972982585430145,
0.03604047745466232,
-0.04484890401363373,
-0.03772341459989548,
-0.10348865389823914,
0.07100265473127365,
-0.011008460074663162,
0.0073149120435118675,
-0.03277096524834633,
0.07130797952413559,
-0.039379023015499115,
-0.17753472924232483,
0.0022929981350898743,
0.00140163314063102,
-0.059623755514621735,
0.010705511085689068,
-0.0030723102390766144,
0.022715387865900993,
0.08099126070737839,
-0.017349541187286377,
-0.005300538614392281,
0.13378629088401794,
0.01828349009156227,
-0.09651333838701248,
-0.06024238467216492,
0.11367084830999374,
0.014947015792131424,
0.1404406577348709,
0.011005326174199581,
0.07771039009094238,
0.08651311695575714,
0.022338811308145523,
-0.09577960520982742,
0.04210531339049339,
-0.021249527111649513,
-0.027851352468132973,
0.007067367434501648,
0.10741251707077026,
-0.007109728641808033,
0.05717269331216812,
0.025152871385216713,
-0.09313368797302246,
0.06122106313705444,
0.010801203548908234,
-0.03595753014087677,
-0.08063483983278275,
0.08253882825374603,
-0.08962282538414001,
0.15623971819877625,
0.12153924256563187,
-0.014408864080905914,
-0.04570432007312775,
-0.02923266775906086,
0.018271654844284058,
0.0010421457700431347,
0.055157799273729324,
-0.027102282270789146,
-0.13553449511528015,
0.021495740860700607,
-0.08540458977222443,
0.026866668835282326,
-0.24577480554580688,
-0.08993612974882126,
0.03179594874382019,
-0.01708545722067356,
-0.014890223741531372,
0.04993117228150368,
0.04379430040717125,
0.0281631238758564,
-0.03637160360813141,
0.01632404699921608,
-0.03656655550003052,
0.0607052817940712,
-0.10928398370742798,
-0.09398951381444931
] |
null | null | transformers | # MultiBERTs Seed 1 Checkpoint 1500k (uncased)
Seed 1 intermediate checkpoint 1500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multiberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
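As a quick illustration of that last point, the sketch below extracts fixed-size sentence features with this checkpoint and fits an ordinary classifier on top. The toy sentences and labels, the mean pooling, and the use of scikit-learn's `LogisticRegression` are illustrative assumptions, not part of the MultiBERTs release; the model identifier matches the one used in the "How to use" snippet further down this card.
```python
# Minimal sketch: sentence features from this checkpoint + a standard classifier.
# The data, pooling strategy and classifier below are illustrative choices only.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-1500k')
model = BertModel.from_pretrained('multiberts-seed-1-1500k')
model.eval()

sentences = ["the movie was great", "the movie was terrible"]  # hypothetical labelled data
labels = [1, 0]

with torch.no_grad():
    encoded = tokenizer(sentences, padding=True, return_tensors='pt')
    hidden = model(**encoded).last_hidden_state              # (batch, seq_len, 768)
    mask = encoded['attention_mask'].unsqueeze(-1)            # exclude padding from the average
    features = (hidden * mask).sum(dim=1) / mask.sum(dim=1)   # mean-pooled sentence vectors

classifier = LogisticRegression().fit(features.numpy(), labels)
```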
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-1500k')
model = BertModel.from_pretrained("multiberts-seed-1-1500k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
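With a recent version of `transformers`, the `output` above is a standard model output object: the per-token features are in `output.last_hidden_state` (one 768-dimensional vector per token) and a pooled, sentence-level vector is in `output.pooler_output`. For example:
```python
print(output.last_hidden_state.shape)  # (1, number_of_tokens, 768)
print(output.pooler_output.shape)      # (1, 768)
```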
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a short code sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
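As an illustration only (not the actual MultiBERTs pretraining code), the 80% / 10% / 10% scheme above can be sketched like this:
```python
# Rough sketch of the masking scheme described in the list above.
# Illustrative only; this is not the original MultiBERTs training code.
import random

def mask_tokens(tokens, vocab, mask_token="[MASK]", mlm_probability=0.15):
    masked, targets = [], []
    for token in tokens:
        if random.random() < mlm_probability:
            targets.append(token)                     # the model must predict this token
            r = random.random()
            if r < 0.8:
                masked.append(mask_token)             # 80%: replace with [MASK]
            elif r < 0.9:
                masked.append(random.choice(vocab))   # 10%: replace with a random token
            else:
                masked.append(token)                  # 10%: keep the token unchanged
        else:
            masked.append(token)
            targets.append(None)                      # not a prediction target
    return masked, targets
```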
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-1-1500k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 1 Checkpoint 1500k (uncased)
Seed 1 intermediate checkpoint 1500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitations and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 1 Checkpoint 1500k (uncased)\nSeed 1 intermediate checkpoint 1500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 1 Checkpoint 1500k (uncased)\nSeed 1 intermediate checkpoint 1500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 1 Checkpoint 1500k (uncased)\nSeed 1 intermediate checkpoint 1500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08716287463903427,
-0.002285900292918086,
-0.002253497950732708,
0.06941081583499908,
0.08926239609718323,
0.001701748464256525,
0.11322042346000671,
0.049868546426296234,
-0.03170925751328468,
0.023605169728398323,
0.095489501953125,
0.026597347110509872,
0.042470671236515045,
0.06369622051715851,
0.09593728184700012,
-0.2529227137565613,
0.0496588796377182,
-0.06550653278827667,
0.04788251593708992,
0.07385842502117157,
0.09967754781246185,
-0.07232221961021423,
0.06206126883625984,
0.03841491788625717,
-0.08472581952810287,
-0.01772993989288807,
-0.01584061235189438,
-0.03296806663274765,
0.09997828304767609,
0.0726984441280365,
0.062014140188694,
0.0010763667523860931,
0.05892931669950485,
-0.09179620444774628,
0.01689109019935131,
0.04507366567850113,
0.0011969800107181072,
0.02562931552529335,
-0.009212935343384743,
0.01590685173869133,
0.09978374093770981,
0.041105471551418304,
0.07559282332658768,
0.03736081346869469,
-0.09537313133478165,
-0.10780978202819824,
-0.07774419337511063,
0.10859896242618561,
0.05115663260221481,
0.03907549008727074,
-0.0039593735709786415,
0.06530573219060898,
-0.02815365418791771,
0.0749281719326973,
0.10545368492603302,
-0.24728178977966309,
-0.009445173665881157,
0.07135860621929169,
0.03992694243788719,
0.046275652945041656,
0.01352753210812807,
0.02757524512708187,
0.00811944529414177,
0.04191179573535919,
0.02863277494907379,
-0.023561246693134308,
0.11977900564670563,
-0.04528668895363808,
-0.15002191066741943,
-0.04507032781839371,
0.11735089123249054,
-0.007613549008965492,
-0.12482556700706482,
-0.09486608952283859,
-0.03128635510802269,
0.11476078629493713,
-0.004006209783256054,
-0.0180190596729517,
-0.00478488439694047,
0.012107489630579948,
0.030811801552772522,
-0.09597735852003098,
-0.08360263705253601,
-0.029769668355584145,
-0.03638698160648346,
0.13195882737636566,
0.04645196348428726,
0.053360793739557266,
-0.03594422712922096,
0.08470921218395233,
-0.1232903003692627,
-0.039424583315849304,
-0.05323118716478348,
-0.0791139006614685,
-0.019053295254707336,
0.006801164243370295,
-0.026682782918214798,
-0.08835302293300629,
-0.05739285796880722,
0.11689943075180054,
0.028153419494628906,
0.029089629650115967,
-0.0016521280631422997,
0.041263148188591,
0.07434385269880295,
0.09302587807178497,
-0.04164159670472145,
0.05378502607345581,
0.037294287234544754,
-0.021138153970241547,
0.05409611761569977,
-0.04962491616606712,
-0.10115115344524384,
0.07629682123661041,
-0.00031055137515068054,
0.03911052644252777,
0.026557620614767075,
0.03189755231142044,
-0.010737322270870209,
-0.07095000147819519,
0.16519960761070251,
-0.0750948041677475,
-0.010568829253315926,
-0.01881331205368042,
0.011607414111495018,
0.038667596876621246,
0.03427634388208389,
-0.0073050218634307384,
-0.04815702885389328,
-0.0025643184781074524,
-0.057353999465703964,
-0.024154189974069595,
-0.05525290220975876,
-0.11982542276382446,
-0.00062926160171628,
-0.042050398886203766,
-0.031707413494586945,
-0.14252471923828125,
-0.2204969823360443,
-0.019563548266887665,
0.06479033827781677,
-0.003049397375434637,
-0.0132595831528306,
0.02458287589251995,
0.01518949307501316,
-0.021011076867580414,
0.012657568790018559,
-0.04581610485911369,
0.0003020660951733589,
-0.00653248094022274,
-0.03268158808350563,
0.05621521174907684,
-0.0395929291844368,
0.02439284883439541,
-0.06768089532852173,
0.022236937656998634,
-0.2125364989042282,
0.09427424520254135,
-0.03630618005990982,
0.005428122356534004,
-0.037612125277519226,
-0.039821576327085495,
0.0042566899210214615,
0.04636760056018829,
-0.004979596473276615,
0.1202147901058197,
-0.13637183606624603,
-0.04970388859510422,
0.18042941391468048,
-0.1604311168193817,
-0.004173163324594498,
0.10099051892757416,
-0.048838768154382706,
0.057408422231674194,
0.13052579760551453,
0.09816484153270721,
0.08962825685739517,
-0.07288889586925507,
0.013527090661227703,
0.06477806717157364,
-0.07159347832202911,
0.052073270082473755,
0.0880308449268341,
-0.024899164214730263,
-0.14375752210617065,
0.029360298067331314,
-0.07233189791440964,
-0.004272563382983208,
-0.029848821461200714,
-0.021200723946094513,
0.003876570612192154,
-0.041476260870695114,
0.024096284061670303,
0.004313930869102478,
0.01928715966641903,
-0.037220630794763565,
-0.07937827706336975,
0.032345376908779144,
0.07494281232357025,
-0.0669514536857605,
0.04279639944434166,
-0.06766538321971893,
0.061948511749506,
-0.08146236836910248,
-0.002994280308485031,
-0.17077775299549103,
-0.015750618651509285,
0.04349735006690025,
-0.05116908624768257,
0.04996505752205849,
0.08842536062002182,
0.00229200953617692,
0.12122943252325058,
-0.039760008454322815,
0.006677345372736454,
-0.005694320425391197,
-0.010974585078656673,
-0.05223529785871506,
-0.12110816687345505,
-0.07810533046722412,
-0.06827398389577866,
0.09290586411952972,
-0.07081311196088791,
0.030848613008856773,
-0.07492788136005402,
-0.0264580138027668,
-0.008730504661798477,
-0.05957812815904617,
-0.003664657473564148,
0.01174368429929018,
-0.02851364202797413,
-0.04624094069004059,
0.04679896682500839,
0.05180443823337555,
-0.057973846793174744,
0.07618459314107895,
-0.096327044069767,
-0.06077813357114792,
0.05249612033367157,
0.013345528393983841,
-0.08441764116287231,
0.09221719205379486,
-0.01833980344235897,
-0.01226882915943861,
-0.06373822689056396,
-0.04317937791347504,
0.19174297153949738,
-0.021999400109052658,
0.10308554768562317,
-0.09278001636266708,
0.004498582798987627,
0.03258180618286133,
-0.04683857411146164,
-0.016262566670775414,
0.06097318232059479,
0.05439192056655884,
-0.18192586302757263,
0.015393145382404327,
0.043569743633270264,
0.07704080641269684,
0.10747432708740234,
0.0274540763348341,
-0.022138960659503937,
-0.04762532562017441,
-0.010908856056630611,
0.0060289702378213406,
0.05633703991770744,
-0.03286173939704895,
-0.007434990257024765,
0.03058013692498207,
0.058770887553691864,
0.018601765856146812,
-0.08091738075017929,
0.03581610694527626,
0.07109163701534271,
-0.0175054669380188,
-0.04445532709360123,
-0.027221163734793663,
-0.05846595764160156,
0.06264284253120422,
0.05081457644701004,
0.04158736765384674,
0.023289881646633148,
-0.013225690461695194,
-0.13527080416679382,
0.18871191143989563,
-0.11391422152519226,
-0.2619418501853943,
-0.1110062375664711,
-0.05096948891878128,
-0.025626393035054207,
0.04098525643348694,
0.058526791632175446,
-0.029792997986078262,
-0.04200716316699982,
-0.11435751616954803,
0.06749299168586731,
-0.06391176581382751,
-0.029351893812417984,
-0.00683954730629921,
-0.05232596397399902,
-0.01803327351808548,
-0.1281207799911499,
-0.011517880484461784,
-0.029614191502332687,
-0.07046127319335938,
0.004581731744110584,
-0.031365834176540375,
0.030092008411884308,
0.1371181160211563,
0.03074626997113228,
-0.01924973912537098,
-0.016818515956401825,
0.2013007253408432,
0.011443139985203743,
0.059847958385944366,
0.11137330532073975,
-0.027305999770760536,
0.05408887565135956,
0.048279885202646255,
0.02739856392145157,
-0.047308679670095444,
0.012495173141360283,
-0.014570864848792553,
-0.12091996520757675,
-0.16961652040481567,
-0.07039516419172287,
-0.0030274931341409683,
0.007005737628787756,
0.018794525414705276,
0.035697776824235916,
0.022911295294761658,
0.04059026762843132,
-0.02847927063703537,
0.02986900694668293,
-0.015796106308698654,
0.07894131541252136,
0.02588411420583725,
-0.07583681493997574,
0.09148494154214859,
-0.06036064401268959,
0.01828552410006523,
0.10962332040071487,
-0.06374207884073257,
0.18984168767929077,
0.02327035367488861,
0.06134497746825218,
0.10217393934726715,
0.01912253350019455,
0.05616219341754913,
0.08929632604122162,
-0.04945541173219681,
0.004409492015838623,
-0.05841432139277458,
-0.049678735435009,
-0.03727200627326965,
0.04587039723992348,
0.02730824425816536,
0.02041298896074295,
-0.12050875276327133,
0.027042197063565254,
-0.0013195638312026858,
0.137937530875206,
0.0425153486430645,
-0.11095181107521057,
-0.12476526200771332,
0.03221752494573593,
-0.046138301491737366,
-0.06505624204874039,
0.0303475484251976,
0.06383424252271652,
-0.1495952606201172,
0.039980046451091766,
-0.006099304184317589,
0.0708157867193222,
-0.08813028037548065,
0.015573377721011639,
-0.045675620436668396,
0.0012713251635432243,
0.0016835392452776432,
0.06826110184192657,
-0.14046891033649445,
0.10743702948093414,
0.02156248688697815,
0.048819877207279205,
-0.07927843928337097,
0.016237443313002586,
-0.012002953328192234,
0.11270566284656525,
0.11659654974937439,
0.04698849469423294,
-0.047843072563409805,
-0.02145872637629509,
-0.04595644399523735,
0.017742902040481567,
0.05608182027935982,
-0.07707016915082932,
0.05778534710407257,
0.009276045486330986,
0.0077804382890462875,
-0.022508595138788223,
0.020092032849788666,
-0.1364511251449585,
-0.12135503441095352,
0.060294702649116516,
-0.07906369119882584,
-0.10332002490758896,
-0.05683424323797226,
-0.06122806668281555,
-0.05118604749441147,
0.20186284184455872,
-0.116301029920578,
-0.08952189981937408,
-0.09953516721725464,
-0.007594902068376541,
0.04494231939315796,
-0.06734389066696167,
0.047756217420101166,
-0.04106562212109566,
0.08843553066253662,
-0.051809556782245636,
-0.10751941800117493,
0.031150519847869873,
-0.11458669602870941,
-0.11130183190107346,
-0.04391960799694061,
0.10018538683652878,
0.11285384744405746,
0.037504274398088455,
0.009131056256592274,
0.009335609152913094,
0.0007580369710922241,
-0.11821793019771576,
0.013265993446111679,
0.12798935174942017,
-0.007646538317203522,
0.0719207376241684,
-0.058441124856472015,
0.025954101234674454,
-0.016362281516194344,
-0.0026406683027744293,
0.1311272531747818,
0.1871783435344696,
-0.06557957828044891,
0.1733098179101944,
0.2018953561782837,
-0.10680915415287018,
-0.19403770565986633,
-0.05113639682531357,
0.0022229934111237526,
0.045223888009786606,
0.04595879465341568,
-0.18439669907093048,
0.09237787127494812,
0.03385477885603905,
-0.03190544247627258,
0.01959110051393509,
-0.23752151429653168,
-0.11409073323011398,
0.08886174112558365,
0.054498519748449326,
0.19477039575576782,
-0.07965418696403503,
-0.04039513319730759,
-0.01347409188747406,
-0.04071390628814697,
0.04390363395214081,
-0.02471853420138359,
0.08955341577529907,
0.007204344496130943,
-0.03280934318900108,
0.004121516831219196,
-0.03183366730809212,
0.09538808465003967,
0.04180543124675751,
0.026081876829266548,
-0.06927932053804398,
-0.010062450543045998,
0.1069086492061615,
-0.0388961061835289,
0.09943834692239761,
0.04453958570957184,
0.07772834599018097,
-0.09352664649486542,
-0.05910023674368858,
-0.07429744303226471,
0.04399845749139786,
-0.04275847226381302,
-0.053687933832407,
-0.06329257041215897,
0.05885462462902069,
0.03718535974621773,
0.009436463937163353,
0.004429023712873459,
-0.038098763674497604,
0.043053582310676575,
0.09338795393705368,
0.07710811495780945,
-0.04286344721913338,
-0.07149358093738556,
-0.048718370497226715,
-0.04730735719203949,
0.06825131922960281,
-0.09725397825241089,
0.02226489968597889,
0.027672935277223587,
0.013333840295672417,
0.09171196818351746,
0.033187247812747955,
-0.13457532227039337,
0.010165812447667122,
0.033053696155548096,
-0.12075379490852356,
-0.11649743467569351,
-0.022360388189554214,
0.028777752071619034,
-0.037500813603401184,
0.05107348784804344,
0.14555227756500244,
-0.039723485708236694,
-0.031145066022872925,
-0.05016479641199112,
0.03797724470496178,
-0.019409947097301483,
0.05206985026597977,
0.06502889096736908,
0.0331614725291729,
-0.07305867224931717,
0.07744742184877396,
0.04008161276578903,
-0.0375240221619606,
0.039221443235874176,
0.038443535566329956,
-0.09286345541477203,
-0.07824055105447769,
-0.055448997765779495,
0.09390930086374283,
-0.025087496265769005,
-0.043710701167583466,
-0.0015437174588441849,
-0.08244998753070831,
0.07041250169277191,
0.0848860889673233,
0.04759559407830238,
0.03613287955522537,
-0.08604118227958679,
0.016242709010839462,
-0.05452620983123779,
0.0397360697388649,
-0.027959318831562996,
-0.005791809409856796,
-0.05598820745944977,
0.06898829340934753,
0.06353945285081863,
0.09868782758712769,
-0.034078311175107956,
-0.0752670168876648,
-0.0840415507555008,
-0.01287001557648182,
-0.06958040595054626,
-0.029116030782461166,
-0.0779540091753006,
-0.006192457862198353,
0.003257732605561614,
-0.003482051193714142,
0.023246537894010544,
0.036379873752593994,
-0.042928870767354965,
-0.017182044684886932,
-0.036487046629190445,
0.03678962588310242,
-0.06053081899881363,
0.006355772726237774,
0.01586279831826687,
-0.033287424594163895,
0.0913388654589653,
0.03447980433702469,
-0.013752656988799572,
0.04127304255962372,
-0.024896521121263504,
0.038593582808971405,
-0.021613262593746185,
0.0009329067543148994,
-0.023260559886693954,
-0.10983433574438095,
-0.00099434913136065,
0.0031713098287582397,
-0.026004591956734657,
0.009522849693894386,
0.05868105590343475,
-0.07281709462404251,
0.08850023150444031,
0.047521770000457764,
-0.02769308164715767,
-0.07179948687553406,
0.04008348286151886,
-0.013702495023608208,
0.029209371656179428,
0.06798727810382843,
-0.033377401530742645,
0.053639352321624756,
-0.09901729226112366,
-0.02808564528822899,
0.004630781710147858,
-0.0033020637929439545,
-0.016299685463309288,
-0.053938236087560654,
-0.004249735735356808,
0.008038247004151344,
0.17164437472820282,
-0.01788673922419548,
0.03605693578720093,
0.014193910174071789,
0.010935455560684204,
0.04504780098795891,
-0.01414516381919384,
0.06926284730434418,
-0.006284079514443874,
-0.02697804570198059,
-0.015410196967422962,
0.04093214124441147,
0.007071048021316528,
0.0027291495352983475,
0.14051088690757751,
0.04716315492987633,
0.08824391663074493,
0.07337190955877304,
0.014420324005186558,
0.01604769006371498,
-0.12576507031917572,
-0.0876944363117218,
0.0024413326755166054,
0.056950345635414124,
-0.019539687782526016,
0.008513269945979118,
0.08985681086778641,
-0.08655644953250885,
0.07447846978902817,
0.05107036978006363,
-0.048303406685590744,
-0.12767289578914642,
-0.1932833194732666,
-0.022929728031158447,
-0.03177163749933243,
-0.010841725394129753,
-0.09233953058719635,
0.01580805703997612,
0.09017301350831985,
0.023643331602215767,
-0.009531374089419842,
0.09133431315422058,
-0.10689332336187363,
-0.029126686975359917,
0.044133275747299194,
-0.027144581079483032,
0.01774943433701992,
0.043445050716400146,
0.02281016856431961,
-0.001949785277247429,
0.04360383749008179,
0.039676304906606674,
0.04537719488143921,
0.025436323136091232,
0.052165865898132324,
-0.024324344471096992,
-0.07353135943412781,
-0.032339803874492645,
-0.006782992742955685,
0.05179545655846596,
0.14142106473445892,
0.02238202467560768,
-0.06521372497081757,
0.006960265804082155,
0.11505795270204544,
-0.028942247852683067,
-0.05201804265379906,
-0.11170583218336105,
0.2331470400094986,
0.024340009316802025,
0.006248679012060165,
-0.003993200603872538,
-0.04618428274989128,
0.003649931401014328,
0.21600276231765747,
0.22107073664665222,
0.004366529639810324,
-0.009959742426872253,
0.010728626511991024,
-0.012365048751235008,
0.038632143288850784,
0.14336387813091278,
0.003154013305902481,
0.2474004626274109,
-0.04700174927711487,
0.0396723710000515,
-0.042270876467227936,
-0.04005683213472366,
-0.0967690646648407,
0.06931556761264801,
-0.010358752682805061,
0.008667007088661194,
-0.033337488770484924,
0.06995484232902527,
-0.04329090192914009,
-0.17526520788669586,
0.0023570824414491653,
-0.0017687277868390083,
-0.05860200896859169,
0.00872222613543272,
-0.0033891061320900917,
0.022069131955504417,
0.07938478887081146,
-0.01562729850411415,
-0.005057901609688997,
0.13227695226669312,
0.01751428283751011,
-0.0949278399348259,
-0.06745151430368423,
0.11342687904834747,
0.02030791901051998,
0.14020386338233948,
0.012229502201080322,
0.07221328467130661,
0.08662271499633789,
0.022534694522619247,
-0.09858254343271255,
0.038314707577228546,
-0.021889565512537956,
-0.026147913187742233,
0.004202353302389383,
0.11013113707304001,
-0.006674251519143581,
0.06277255713939667,
0.024283815175294876,
-0.09349773824214935,
0.05907721072435379,
0.009775836020708084,
-0.034952178597450256,
-0.08108792454004288,
0.08382471650838852,
-0.08828999102115631,
0.1549782156944275,
0.12269136309623718,
-0.013129832223057747,
-0.04373210668563843,
-0.02955692633986473,
0.01849123276770115,
-0.0005087512545287609,
0.0633285716176033,
-0.026976194232702255,
-0.1372794210910797,
0.022639337927103043,
-0.08798398077487946,
0.026248136535286903,
-0.24625056982040405,
-0.09016471356153488,
0.031094970181584358,
-0.017462313175201416,
-0.013373855501413345,
0.051370494067668915,
0.05011593922972679,
0.029688995331525803,
-0.035286642611026764,
0.014990224502980709,
-0.036729905754327774,
0.061067134141922,
-0.10445231199264526,
-0.092330701649189
] |
null | null | transformers | # MultiBERTs Seed 1 Checkpoint 1600k (uncased)
Seed 1 intermediate checkpoint 1600k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multiberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-1600k')
model = BertModel.from_pretrained("multiberts-seed-1-1600k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
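For reference, the optimizer and schedule described above correspond roughly to the setup below, written with PyTorch and `transformers` utilities rather than the original TPU training code; treat it as a sketch, not a reproduction (AdamW is used here as the PyTorch equivalent of Adam with decoupled weight decay).
```python
# Sketch of the optimizer and learning-rate schedule described above.
import torch
from transformers import BertModel, get_linear_schedule_with_warmup

model = BertModel.from_pretrained('multiberts-seed-1-1600k')
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1e-4,
    betas=(0.9, 0.999),
    weight_decay=0.01,
)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10_000,
    num_training_steps=2_000_000,  # two million steps in total
)
```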
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-1-1600k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 1 Checkpoint 1600k (uncased)
Seed 1 intermediate checkpoint 1600k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
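The usual PyTorch snippet is omitted in this rendering; a minimal sketch is given below, assuming the same bare checkpoint id used by the other cards in this series (it may need the owner prefix on the Hub).
```python
from transformers import BertTokenizer, BertModel

# Assumed checkpoint id for this intermediate (1600k-step) checkpoint.
tokenizer = BertTokenizer.from_pretrained("multiberts-seed-1-1600k")
model = BertModel.from_pretrained("multiberts-seed-1-1600k")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="pt")
output = model(**encoded_input)
print(output.last_hidden_state.shape)   # (batch, sequence length, hidden size)
```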
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitations and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in
the other cases, sentence B is another random sentence from the corpus. Note that what is considered a sentence here is a
consecutive span of text, usually longer than a single sentence. The only constraint is that the two
"sentences" have a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 1 Checkpoint 1600k (uncased)\nSeed 1 intermediate checkpoint 1600k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 1 Checkpoint 1600k (uncased)\nSeed 1 intermediate checkpoint 1600k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 1 Checkpoint 1600k (uncased)\nSeed 1 intermediate checkpoint 1600k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08644654601812363,
-0.008625827729701996,
-0.0021746584679931402,
0.07108881324529648,
0.08806338906288147,
0.0016307728365063667,
0.10939162969589233,
0.04927185922861099,
-0.03329289332032204,
0.023172767832875252,
0.09473264217376709,
0.02680283784866333,
0.04280538484454155,
0.0641247034072876,
0.09774646908044815,
-0.25084179639816284,
0.04742855578660965,
-0.06648954749107361,
0.05239018052816391,
0.07431812584400177,
0.09959013760089874,
-0.07262079417705536,
0.06324148178100586,
0.03857971727848053,
-0.08895178139209747,
-0.014917857013642788,
-0.014799038879573345,
-0.03479383885860443,
0.10179463028907776,
0.0693327784538269,
0.06457715481519699,
0.0024862922728061676,
0.06120064854621887,
-0.08515042811632156,
0.015962248668074608,
0.04626019671559334,
0.0023769671097397804,
0.024242306128144264,
-0.00872386060655117,
0.017383787781000137,
0.10772671550512314,
0.038705337792634964,
0.07641933858394623,
0.03676818311214447,
-0.09591546654701233,
-0.11552785336971283,
-0.07779304683208466,
0.10012432932853699,
0.049242790788412094,
0.04090169444680214,
-0.00517084077000618,
0.06646914780139923,
-0.025634221732616425,
0.07537899166345596,
0.10262176394462585,
-0.24846363067626953,
-0.009038558229804039,
0.06889032572507858,
0.039899252355098724,
0.050657518208026886,
0.014083500020205975,
0.02607518434524536,
0.008025974035263062,
0.04236408323049545,
0.02973151206970215,
-0.02419014647603035,
0.11498450487852097,
-0.04487662762403488,
-0.14906135201454163,
-0.04478161409497261,
0.12189410626888275,
-0.009403254836797714,
-0.12543915212154388,
-0.09164363890886307,
-0.032768912613391876,
0.11380571126937866,
-0.003292274661362171,
-0.017736859619617462,
-0.004215092398226261,
0.012973563745617867,
0.026346828788518906,
-0.09445559233427048,
-0.08610168844461441,
-0.029975969344377518,
-0.0403834767639637,
0.12509003281593323,
0.04684688150882721,
0.05527840554714203,
-0.0350676029920578,
0.08472450077533722,
-0.11795446276664734,
-0.04023584723472595,
-0.0501108318567276,
-0.08006823062896729,
-0.02049230970442295,
0.008981979452073574,
-0.024911824613809586,
-0.08315257728099823,
-0.05831526219844818,
0.11085744202136993,
0.02355169504880905,
0.03142840415239334,
-0.00785788893699646,
0.04078510031104088,
0.0720701813697815,
0.09300491958856583,
-0.038217417895793915,
0.05367326736450195,
0.037392839789390564,
-0.025602657347917557,
0.05611833184957504,
-0.049433574080467224,
-0.0993230938911438,
0.07784068584442139,
-0.0038533126935362816,
0.037636999040842056,
0.02633369155228138,
0.03349233791232109,
-0.011429493315517902,
-0.06887353956699371,
0.16560789942741394,
-0.0775705873966217,
-0.009772351011633873,
-0.018213076516985893,
0.012234553694725037,
0.040259137749671936,
0.0350840762257576,
-0.010592194274067879,
-0.046695008873939514,
-0.007559441030025482,
-0.05699954926967621,
-0.027182145044207573,
-0.05607880651950836,
-0.11966866254806519,
-0.000015215016901493073,
-0.03899767994880676,
-0.032901331782341,
-0.14194506406784058,
-0.21949923038482666,
-0.01760389283299446,
0.06273088604211807,
-0.00260851439088583,
-0.010438053868710995,
0.024847472086548805,
0.014610057696700096,
-0.02206900715827942,
0.012627629563212395,
-0.0391087606549263,
-0.0005041128024458885,
-0.005933299660682678,
-0.03000045381486416,
0.0576004683971405,
-0.037865035235881805,
0.02305283024907112,
-0.06922319531440735,
0.02193352021276951,
-0.21087655425071716,
0.0911465436220169,
-0.03755609691143036,
0.00389137864112854,
-0.03704826161265373,
-0.04117348790168762,
0.004600871354341507,
0.047419026494026184,
-0.006621610373258591,
0.11759655177593231,
-0.13450410962104797,
-0.05092032998800278,
0.1814141571521759,
-0.16190287470817566,
-0.002176716923713684,
0.10254031419754028,
-0.04850619286298752,
0.05859774351119995,
0.13036653399467468,
0.10078150779008865,
0.08101323246955872,
-0.07163619250059128,
0.014351348392665386,
0.06396199017763138,
-0.07323598861694336,
0.0530170202255249,
0.0864921435713768,
-0.024190688505768776,
-0.14313839375972748,
0.030699249356985092,
-0.06868709623813629,
-0.007359354756772518,
-0.028649650514125824,
-0.020650479942560196,
0.0032420381903648376,
-0.03857726603746414,
0.024341735988855362,
0.006252242252230644,
0.0175756998360157,
-0.04127983748912811,
-0.08236241340637207,
0.02373235672712326,
0.07405712455511093,
-0.0675443634390831,
0.04419881850481033,
-0.06898511201143265,
0.05797761678695679,
-0.07670183479785919,
-0.0031377673149108887,
-0.16938987374305725,
-0.01911892741918564,
0.044561468064785004,
-0.04513228312134743,
0.04951800778508186,
0.09079121798276901,
0.0011189025826752186,
0.12171328067779541,
-0.03899630904197693,
0.007270121481269598,
-0.005351189523935318,
-0.010276967659592628,
-0.04952668771147728,
-0.123231902718544,
-0.078073650598526,
-0.06908378005027771,
0.09711778908967972,
-0.0691988617181778,
0.02996772900223732,
-0.0700586587190628,
-0.027969880029559135,
-0.009547993540763855,
-0.05997702479362488,
-0.005689732730388641,
0.010729028843343258,
-0.028050793334841728,
-0.04720047116279602,
0.0463002547621727,
0.05249662324786186,
-0.05772251635789871,
0.0770903080701828,
-0.09942159056663513,
-0.06624988466501236,
0.055806539952754974,
0.008676865138113499,
-0.08548241853713989,
0.0985698327422142,
-0.019846512004733086,
-0.013305974192917347,
-0.06204352155327797,
-0.04461672902107239,
0.19178810715675354,
-0.025373823940753937,
0.10187657177448273,
-0.09109126031398773,
0.004466027952730656,
0.03295763209462166,
-0.04619600251317024,
-0.015716414898633957,
0.06241996958851814,
0.050128962844610214,
-0.181616872549057,
0.014908969402313232,
0.04717943072319031,
0.07686637341976166,
0.10542292147874832,
0.026291077956557274,
-0.022650696337223053,
-0.04619676619768143,
-0.013196940533816814,
0.006715393625199795,
0.05599270761013031,
-0.03633912280201912,
-0.008626863360404968,
0.03169003129005432,
0.05827935412526131,
0.01852383464574814,
-0.08181754499673843,
0.037610579282045364,
0.06997101753950119,
-0.01747002825140953,
-0.04060865193605423,
-0.027546171098947525,
-0.05848585069179535,
0.06355337053537369,
0.04896213859319687,
0.03942682966589928,
0.023903585970401764,
-0.013388851657509804,
-0.1354219615459442,
0.18824195861816406,
-0.11537134647369385,
-0.2583406865596771,
-0.1102304458618164,
-0.055483244359493256,
-0.022047914564609528,
0.04288255423307419,
0.0589599683880806,
-0.0319095253944397,
-0.04304131865501404,
-0.11554722487926483,
0.06556686758995056,
-0.06785273551940918,
-0.029311571270227432,
-0.0058585479855537415,
-0.05255715548992157,
-0.018728088587522507,
-0.12662309408187866,
-0.012869110330939293,
-0.029674679040908813,
-0.07726185023784637,
0.007155446335673332,
-0.03307446837425232,
0.029525894671678543,
0.13690055906772614,
0.030371680855751038,
-0.01987621560692787,
-0.016675252467393875,
0.19062384963035583,
0.012219920754432678,
0.06249351054430008,
0.1128624975681305,
-0.025856751948595047,
0.05377664789557457,
0.045829880982637405,
0.025784926488995552,
-0.04671063646674156,
0.013401499949395657,
-0.0172782801091671,
-0.12424273043870926,
-0.17367318272590637,
-0.07001279294490814,
-0.0028471830300986767,
0.011322541162371635,
0.01650623045861721,
0.035227857530117035,
0.015836482867598534,
0.040778957307338715,
-0.026530098170042038,
0.029566161334514618,
-0.016718789935112,
0.07958884537220001,
0.03216837719082832,
-0.07787635177373886,
0.09242942929267883,
-0.06145560368895531,
0.018150415271520615,
0.11149996519088745,
-0.06656454503536224,
0.18967977166175842,
0.02298349142074585,
0.06421106308698654,
0.10219527781009674,
0.021747484803199768,
0.05408991128206253,
0.08689074218273163,
-0.04448964074254036,
0.0062034553848207,
-0.05994805693626404,
-0.0500723198056221,
-0.03864938020706177,
0.046359073370695114,
0.020828263834118843,
0.01796584576368332,
-0.11964958161115646,
0.027900230139493942,
-0.0010717723052948713,
0.13043053448200226,
0.04304536059498787,
-0.11685322970151901,
-0.12523910403251648,
0.032407090067863464,
-0.04375947639346123,
-0.06416063010692596,
0.030760537832975388,
0.06357865780591965,
-0.14926233887672424,
0.0404629185795784,
-0.004277265630662441,
0.06704898178577423,
-0.08779282122850418,
0.016222534701228142,
-0.04548698663711548,
0.0014930805191397667,
0.0026564430445432663,
0.06809206306934357,
-0.13441820442676544,
0.10944981127977371,
0.021494679152965546,
0.04781113937497139,
-0.08068202435970306,
0.01754225417971611,
-0.011784989386796951,
0.10957031697034836,
0.11782894283533096,
0.04565420001745224,
-0.06115670129656792,
-0.01886586658656597,
-0.04497798532247543,
0.019657574594020844,
0.056291613727808,
-0.07367344200611115,
0.057576607912778854,
0.010046027600765228,
0.00863788090646267,
-0.021800599992275238,
0.022326573729515076,
-0.13271668553352356,
-0.12119515240192413,
0.0586412250995636,
-0.0771975889801979,
-0.11117745190858841,
-0.05684928596019745,
-0.060046229511499405,
-0.04948360472917557,
0.21304166316986084,
-0.11429345607757568,
-0.09035250544548035,
-0.09864073991775513,
-0.005840318277478218,
0.04479426145553589,
-0.0661417543888092,
0.04558412358164787,
-0.039979565888643265,
0.08872715383768082,
-0.05005473643541336,
-0.10835887491703033,
0.03271457180380821,
-0.11353950202465057,
-0.11205054819583893,
-0.045348819345235825,
0.10259056836366653,
0.1128600537776947,
0.03773175925016403,
0.009845697320997715,
0.011251356452703476,
-0.001803876832127571,
-0.1173476129770279,
0.017391270026564598,
0.1273665875196457,
-0.003433912992477417,
0.06946859508752823,
-0.05897245556116104,
0.023660879582166672,
-0.01395341195166111,
-0.0036454573273658752,
0.12859615683555603,
0.1864505559206009,
-0.06553788483142853,
0.1735546886920929,
0.20164912939071655,
-0.10758878290653229,
-0.19373512268066406,
-0.05016082897782326,
-0.0010194582864642143,
0.04406024143099785,
0.04880445450544357,
-0.18265482783317566,
0.09443610906600952,
0.03490883484482765,
-0.032486073672771454,
0.017833035439252853,
-0.23354896903038025,
-0.11269213259220123,
0.08824232220649719,
0.05580580607056618,
0.1928139328956604,
-0.08035293221473694,
-0.03923247382044792,
-0.014190128073096275,
-0.03933350741863251,
0.039613038301467896,
-0.0309927798807621,
0.08928340673446655,
0.007186116650700569,
-0.02766541950404644,
0.0034425007179379463,
-0.03225911781191826,
0.09379426389932632,
0.04320373386144638,
0.023905515670776367,
-0.06992599368095398,
-0.005863022059202194,
0.11485593020915985,
-0.03617595508694649,
0.09759853780269623,
0.04730610549449921,
0.07767020165920258,
-0.09340479969978333,
-0.0588286891579628,
-0.0738958790898323,
0.045518189668655396,
-0.04286560043692589,
-0.05411204695701599,
-0.06493760645389557,
0.05749783292412758,
0.03618905320763588,
0.00767848826944828,
-0.0016717053949832916,
-0.037143733352422714,
0.043083056807518005,
0.09769158065319061,
0.07528313249349594,
-0.03873033821582794,
-0.06500792503356934,
-0.04712006077170372,
-0.048664189875125885,
0.06464530527591705,
-0.09832195937633514,
0.020219331607222557,
0.026981234550476074,
0.011638849973678589,
0.08729059994220734,
0.03413674607872963,
-0.1371195912361145,
0.010601481422781944,
0.03394654765725136,
-0.12371458858251572,
-0.1100347489118576,
-0.019673872739076614,
0.029951024800539017,
-0.03643878921866417,
0.05072447285056114,
0.14462275803089142,
-0.03746351972222328,
-0.030589904636144638,
-0.050282157957553864,
0.03956441208720207,
-0.018458478152751923,
0.05164148658514023,
0.06306402385234833,
0.031232215464115143,
-0.07326555252075195,
0.07693126797676086,
0.04145928844809532,
-0.04008638486266136,
0.03934637829661369,
0.04073849320411682,
-0.09419775009155273,
-0.07816734910011292,
-0.0548311285674572,
0.09072490781545639,
-0.023575464263558388,
-0.04371163249015808,
-0.0005886014550924301,
-0.08368381857872009,
0.06888602674007416,
0.0754309818148613,
0.047786910086870193,
0.03769487887620926,
-0.08508813381195068,
0.017126206308603287,
-0.05519605800509453,
0.03902813419699669,
-0.029314564540982246,
-0.004891788586974144,
-0.051874369382858276,
0.07215435802936554,
0.06306225806474686,
0.09928258508443832,
-0.03432510793209076,
-0.07292062044143677,
-0.08119461685419083,
-0.012033883482217789,
-0.06346172094345093,
-0.028114095330238342,
-0.07650066912174225,
-0.009419308975338936,
0.003388776211068034,
-0.003234684467315674,
0.02178315259516239,
0.03759174048900604,
-0.04212365299463272,
-0.01811668463051319,
-0.03677704185247421,
0.03739071637392044,
-0.06178659945726395,
0.006248967722058296,
0.01686183549463749,
-0.03468010947108269,
0.09039285778999329,
0.036707594990730286,
-0.01252971962094307,
0.043416328728199005,
-0.025164581835269928,
0.03903453052043915,
-0.02190481498837471,
0.002091365633532405,
-0.022493593394756317,
-0.10752488672733307,
-0.0014999404083937407,
0.004468781873583794,
-0.027970407158136368,
0.00849219597876072,
0.059610940515995026,
-0.07262907922267914,
0.0913456454873085,
0.04652494937181473,
-0.02978792041540146,
-0.07169908285140991,
0.03911950811743736,
-0.01225614920258522,
0.029260102659463882,
0.06554926931858063,
-0.03537919372320175,
0.05113261565566063,
-0.10041127353906631,
-0.02918276935815811,
0.004736094735562801,
-0.004567157477140427,
-0.008771359920501709,
-0.05473989248275757,
-0.0027243001386523247,
0.00728512741625309,
0.17363405227661133,
-0.01649656519293785,
0.036445893347263336,
0.015204137191176414,
0.006245866417884827,
0.048379313200712204,
-0.01484767533838749,
0.06823563575744629,
-0.009528146125376225,
-0.02669811248779297,
-0.012724282220005989,
0.03729841858148575,
0.007756462320685387,
0.007495645433664322,
0.14269138872623444,
0.04587492346763611,
0.08747950941324234,
0.07216465473175049,
0.013975126668810844,
0.016148120164871216,
-0.12734845280647278,
-0.08884245157241821,
0.001680063083767891,
0.056712646037340164,
-0.018604256212711334,
0.008317030966281891,
0.08942453563213348,
-0.08540872484445572,
0.07267843186855316,
0.0483134463429451,
-0.04871387779712677,
-0.1278679072856903,
-0.18913552165031433,
-0.02426549606025219,
-0.029586201533675194,
-0.010661010630428791,
-0.09164722263813019,
0.017563559114933014,
0.09114060550928116,
0.024196289479732513,
-0.010542694479227066,
0.09380173683166504,
-0.1041695773601532,
-0.02880243770778179,
0.04221244156360626,
-0.026435289531946182,
0.014720908366143703,
0.04762878268957138,
0.020880229771137238,
-0.0022552330046892166,
0.04585413262248039,
0.03956194221973419,
0.04494792968034744,
0.02610842138528824,
0.05096990242600441,
-0.024901343509554863,
-0.07151412963867188,
-0.034793585538864136,
-0.006519551854580641,
0.053528133779764175,
0.1364646703004837,
0.02290256880223751,
-0.06659632921218872,
0.007843123748898506,
0.11177967488765717,
-0.029462900012731552,
-0.04887208715081215,
-0.1075994074344635,
0.23612572252750397,
0.023081107065081596,
0.0017825174145400524,
-0.0025256620720028877,
-0.04546115919947624,
0.005115477368235588,
0.21505090594291687,
0.22242040932178497,
0.006169999483972788,
-0.010811731219291687,
0.008762319572269917,
-0.012721290811896324,
0.037790410220623016,
0.14486081898212433,
0.003409557044506073,
0.24502268433570862,
-0.046907179057598114,
0.03588693216443062,
-0.04381595551967621,
-0.038819458335638046,
-0.1015496551990509,
0.07013580948114395,
-0.010330893099308014,
0.007847001776099205,
-0.03277340903878212,
0.06846082955598831,
-0.03955568000674248,
-0.18341153860092163,
0.007214006967842579,
-0.0011619108263403177,
-0.056733813136816025,
0.012306923978030682,
0.0001481892541050911,
0.021699076518416405,
0.08055908232927322,
-0.018040712922811508,
-0.003306991420686245,
0.12739676237106323,
0.018585989251732826,
-0.0969330370426178,
-0.0610092431306839,
0.1124645471572876,
0.014667792245745659,
0.14346159994602203,
0.011822938919067383,
0.07607080787420273,
0.08616414666175842,
0.022812027484178543,
-0.09851548075675964,
0.04361501708626747,
-0.021821793168783188,
-0.02715408056974411,
0.005731794051826,
0.10980421304702759,
-0.008715391159057617,
0.05746595561504364,
0.025216396898031235,
-0.08977040648460388,
0.06031368672847748,
0.01048046350479126,
-0.0336807444691658,
-0.07937491685152054,
0.08000313490629196,
-0.0903993770480156,
0.155472531914711,
0.12277413159608841,
-0.01231918390840292,
-0.04281353950500488,
-0.03075275756418705,
0.016225043684244156,
-0.00008074985817074776,
0.06270530819892883,
-0.02541930042207241,
-0.13654592633247375,
0.020615747198462486,
-0.08218999952077866,
0.026321744546294212,
-0.24626827239990234,
-0.08979979157447815,
0.030010100454092026,
-0.01742619276046753,
-0.0159698948264122,
0.051522135734558105,
0.04657154157757759,
0.02842772752046585,
-0.03500784561038017,
0.023073766380548477,
-0.035620152950286865,
0.06155127286911011,
-0.10692980885505676,
-0.09236952662467957
] |
null | null | transformers | # MultiBERTs Seed 1 Checkpoint 160k (uncased)
Seed 1 intermediate checkpoint 160k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multiberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# Load the WordPiece tokenizer and encoder weights for this intermediate checkpoint
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-160k')
model = BertModel.from_pretrained("multiberts-seed-1-160k")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')  # batched PyTorch tensors
output = model(**encoded_input)  # output.last_hidden_state holds the contextual token features
```
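If you want masked-token predictions rather than features, the checkpoint can also be queried through the fill-mask pipeline. This is an additional sketch, not part of the original snippet; it assumes the uploaded weights include the masked-language-modeling head (the `pretraining` tag suggests they do), and reuses the same bare model id as above.
```python
from transformers import pipeline

# If the MLM head is missing it will be re-initialised and the predictions
# will be meaningless; the library prints a warning in that case.
unmasker = pipeline("fill-mask", model="multiberts-seed-1-160k")
print(unmasker("The capital of France is [MASK]."))
```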
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in
the other cases, sentence B is another random sentence from the corpus. Note that what is considered a sentence here is a
consecutive span of text, usually longer than a single sentence. The only constraint is that the two
"sentences" have a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-1-160k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 1 Checkpoint 160k (uncased)
Seed 1 intermediate checkpoint 160k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitations and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in
the other cases, sentence B is another random sentence from the corpus. Note that what is considered a sentence here is a
consecutive span of text, usually longer than a single sentence. The only constraint is that the two
"sentences" have a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 1 Checkpoint 160k (uncased)\nSeed 1 intermediate checkpoint 160k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 1 Checkpoint 160k (uncased)\nSeed 1 intermediate checkpoint 160k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 1 Checkpoint 160k (uncased)\nSeed 1 intermediate checkpoint 160k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.0854182243347168,
-0.004553705453872681,
-0.0021950530353933573,
0.06835178285837173,
0.08880838751792908,
0.0006054211407899857,
0.11363749951124191,
0.04943396896123886,
-0.030216984450817108,
0.024805879220366478,
0.09484408795833588,
0.029109206050634384,
0.04183647409081459,
0.06715749949216843,
0.09745916724205017,
-0.25578251481056213,
0.04816547781229019,
-0.06448140740394592,
0.05578090250492096,
0.07412675023078918,
0.10085025429725647,
-0.07149253040552139,
0.06281553953886032,
0.03893916308879852,
-0.08599995076656342,
-0.015637081116437912,
-0.01628188230097294,
-0.032566335052251816,
0.10035967826843262,
0.07042659819126129,
0.06315809488296509,
0.0014899373054504395,
0.05842418968677521,
-0.08917330950498581,
0.01586947962641716,
0.046491436660289764,
0.001284704077988863,
0.02471764199435711,
-0.007165016606450081,
0.015225328505039215,
0.10941899567842484,
0.03788262978196144,
0.07563146948814392,
0.0365171805024147,
-0.09489913284778595,
-0.11407214403152466,
-0.07768674939870834,
0.098544642329216,
0.04922308400273323,
0.04096737504005432,
-0.005965542979538441,
0.07046608626842499,
-0.026674427092075348,
0.07544463872909546,
0.10813169181346893,
-0.2544386088848114,
-0.008353030309081078,
0.06752839684486389,
0.041912853717803955,
0.04348492622375488,
0.013675048016011715,
0.026837626472115517,
0.007884018123149872,
0.04249190539121628,
0.028073333203792572,
-0.02335379645228386,
0.11878068000078201,
-0.04684286564588547,
-0.1505175083875656,
-0.0437733456492424,
0.12067434191703796,
-0.009139955043792725,
-0.12537966668605804,
-0.09535612910985947,
-0.03244837373495102,
0.1181277334690094,
-0.002579095773398876,
-0.016981234773993492,
-0.00408246461302042,
0.012915665283799171,
0.028759075328707695,
-0.09354507178068161,
-0.08617744594812393,
-0.02843191847205162,
-0.03939135745167732,
0.1253029853105545,
0.047142766416072845,
0.052675943821668625,
-0.03466169536113739,
0.08637841045856476,
-0.1180245578289032,
-0.040941279381513596,
-0.05140986293554306,
-0.07878784835338593,
-0.017850546166300774,
0.008963429369032383,
-0.026231523603200912,
-0.08676305413246155,
-0.05748109892010689,
0.10943129658699036,
0.029097359627485275,
0.029397182166576385,
-0.006516308523714542,
0.041840240359306335,
0.07486121356487274,
0.0960802361369133,
-0.036611564457416534,
0.054445531219244,
0.03647606447339058,
-0.02501009777188301,
0.056623674929142,
-0.0496271550655365,
-0.10139359533786774,
0.0780375748872757,
0.00029845722019672394,
0.038579098880290985,
0.026707295328378677,
0.032986439764499664,
-0.013224147260189056,
-0.07024358958005905,
0.16505280137062073,
-0.07752072811126709,
-0.009302452206611633,
-0.017533158883452415,
0.012791840359568596,
0.04505605250597,
0.035176292061805725,
-0.009696212597191334,
-0.04754069820046425,
-0.008613310754299164,
-0.05697603523731232,
-0.024615930393338203,
-0.05633020028471947,
-0.11908510327339172,
-0.00042574526742100716,
-0.0425577312707901,
-0.033082015812397,
-0.14281703531742096,
-0.216783806681633,
-0.019091378897428513,
0.06273140013217926,
-0.0020331931300461292,
-0.010038387961685658,
0.022734148427844048,
0.01667250506579876,
-0.021220944821834564,
0.012218170799314976,
-0.041974782943725586,
-0.0010582804679870605,
-0.005629692226648331,
-0.03190892934799194,
0.05612875521183014,
-0.03878466412425041,
0.02391425333917141,
-0.07029304653406143,
0.022551154717803,
-0.2086796760559082,
0.0922342836856842,
-0.03524712845683098,
0.0017838440835475922,
-0.03816328942775726,
-0.04056509584188461,
0.007690127938985825,
0.047115109860897064,
-0.007990019395947456,
0.11784784495830536,
-0.136868417263031,
-0.05085422098636627,
0.18366581201553345,
-0.16094323992729187,
0.0005081892013549805,
0.1021454781293869,
-0.048748984932899475,
0.05851859971880913,
0.1326621025800705,
0.09914260357618332,
0.083360455930233,
-0.0732521191239357,
0.011451751925051212,
0.06512501835823059,
-0.0690850019454956,
0.05601239576935768,
0.08796083927154541,
-0.02489851415157318,
-0.14098161458969116,
0.03010324388742447,
-0.07172670215368271,
-0.00673653744161129,
-0.02884211763739586,
-0.019000578671693802,
0.0033545978367328644,
-0.038305364549160004,
0.025593310594558716,
0.0062069371342659,
0.018258225172758102,
-0.03935384750366211,
-0.08279349654912949,
0.03023565374314785,
0.07510125637054443,
-0.06949682533740997,
0.04339864104986191,
-0.06934402137994766,
0.05933861806988716,
-0.0787704810500145,
-0.0032076528295874596,
-0.17023521661758423,
-0.02150021493434906,
0.04529421404004097,
-0.04505901411175728,
0.05001388117671013,
0.09540985524654388,
0.0009276854107156396,
0.12250764667987823,
-0.037660859525203705,
0.005821906495839357,
-0.0056275371462106705,
-0.011122072115540504,
-0.051073379814624786,
-0.12369026243686676,
-0.08074404299259186,
-0.07071748375892639,
0.09838464111089706,
-0.0725887194275856,
0.02880186215043068,
-0.07106523960828781,
-0.02386295609176159,
-0.008135668933391571,
-0.05976269394159317,
-0.003688827157020569,
0.010808161459863186,
-0.029950924217700958,
-0.047936782240867615,
0.046497099101543427,
0.05167945846915245,
-0.06014331057667732,
0.07897420227527618,
-0.1019105464220047,
-0.06289109587669373,
0.054034262895584106,
0.010863034054636955,
-0.08281941711902618,
0.09562027454376221,
-0.01930629462003708,
-0.013293030671775341,
-0.06188296899199486,
-0.042989809066057205,
0.19368955492973328,
-0.024820365011692047,
0.10253405570983887,
-0.09136736392974854,
0.002582512330263853,
0.029895087704062462,
-0.04877277463674545,
-0.016927916556596756,
0.06353233754634857,
0.047119736671447754,
-0.18858234584331512,
0.016442567110061646,
0.04960637539625168,
0.07704931497573853,
0.11145201325416565,
0.026167726144194603,
-0.024277083575725555,
-0.04676885902881622,
-0.009967043064534664,
0.007233996409922838,
0.05411350354552269,
-0.03181970864534378,
-0.009387161582708359,
0.031569160521030426,
0.0575922429561615,
0.017982790246605873,
-0.08164756000041962,
0.03762521594762802,
0.06903158128261566,
-0.018368437886238098,
-0.037666015326976776,
-0.02673957683146,
-0.05910397320985794,
0.06308716535568237,
0.048980340361595154,
0.040405016392469406,
0.024243757128715515,
-0.012698155827820301,
-0.13546037673950195,
0.18969371914863586,
-0.11489295959472656,
-0.2587738335132599,
-0.10759922862052917,
-0.05492989718914032,
-0.023605166003108025,
0.04304932802915573,
0.058281607925891876,
-0.030060647055506706,
-0.042228251695632935,
-0.11602676659822464,
0.06622344255447388,
-0.06645338237285614,
-0.029655123129487038,
-0.01000533439218998,
-0.0518665537238121,
-0.018170304596424103,
-0.12806583940982819,
-0.012765433639287949,
-0.02878624014556408,
-0.07759004831314087,
0.0060487836599349976,
-0.034208107739686966,
0.02783697471022606,
0.13616764545440674,
0.032640159130096436,
-0.019578158855438232,
-0.016855817288160324,
0.19133834540843964,
0.013064103201031685,
0.059602852910757065,
0.11402241885662079,
-0.02696346864104271,
0.05374323949217796,
0.042339254170656204,
0.024301603436470032,
-0.04737524688243866,
0.012685663066804409,
-0.016495436429977417,
-0.12275363504886627,
-0.1726887822151184,
-0.07039492577314377,
-0.0026166867464780807,
0.009312432259321213,
0.01854862831532955,
0.037052400410175323,
0.0156086515635252,
0.040211450308561325,
-0.029483381658792496,
0.03137955069541931,
-0.017153359949588776,
0.07951248437166214,
0.027769818902015686,
-0.07725068926811218,
0.0924544483423233,
-0.06231377646327019,
0.016982510685920715,
0.11062038689851761,
-0.06421449780464172,
0.18912604451179504,
0.023742666468024254,
0.06212016940116882,
0.10229402780532837,
0.020228110253810883,
0.054499439895153046,
0.08687175810337067,
-0.04659809172153473,
0.005497095175087452,
-0.060774896293878555,
-0.05125505104660988,
-0.03839114308357239,
0.04772245138883591,
0.02811429649591446,
0.018746983259916306,
-0.11901070177555084,
0.024537932127714157,
-0.0021945564076304436,
0.13213056325912476,
0.046047136187553406,
-0.11666996777057648,
-0.12388774752616882,
0.03403456509113312,
-0.04521143436431885,
-0.0649033710360527,
0.029608041048049927,
0.06119370460510254,
-0.1513540893793106,
0.042325329035520554,
-0.005429083481431007,
0.06804909557104111,
-0.0933535173535347,
0.01566363498568535,
-0.04743935540318489,
0.0017449911683797836,
0.004026719368994236,
0.07035468518733978,
-0.1398298293352127,
0.106871597468853,
0.02104232832789421,
0.04784751310944557,
-0.0815378725528717,
0.01732265204191208,
-0.012567376717925072,
0.11010347306728363,
0.11520925164222717,
0.044785499572753906,
-0.055801257491111755,
-0.01853952556848526,
-0.04582231491804123,
0.019741587340831757,
0.057524051517248154,
-0.07781421393156052,
0.05923689901828766,
0.008591213263571262,
0.008662290871143341,
-0.022953670471906662,
0.018584810197353363,
-0.1346455216407776,
-0.12209177017211914,
0.05960913002490997,
-0.07666398584842682,
-0.10552243143320084,
-0.05724170058965683,
-0.06159919872879982,
-0.04724457859992981,
0.2121829092502594,
-0.11586242914199829,
-0.09003913402557373,
-0.09828512370586395,
-0.0061745475977659225,
0.04361066222190857,
-0.0663202702999115,
0.04525824636220932,
-0.040397826582193375,
0.08973351120948792,
-0.050225645303726196,
-0.10975809395313263,
0.03330104053020477,
-0.1142505556344986,
-0.11363793909549713,
-0.0449829064309597,
0.10328000783920288,
0.11385527998209,
0.0381307490170002,
0.011284040287137032,
0.010058626532554626,
-0.0018491912633180618,
-0.11625894904136658,
0.015163110569119453,
0.13169920444488525,
-0.00416947714984417,
0.07164694368839264,
-0.059949733316898346,
0.024028662592172623,
-0.014686217531561852,
-0.002659279853105545,
0.13069695234298706,
0.18712997436523438,
-0.06489595025777817,
0.17540210485458374,
0.19878658652305603,
-0.10738427937030792,
-0.19132928550243378,
-0.05321601778268814,
0.0023761093616485596,
0.04509269818663597,
0.050670284777879715,
-0.18564200401306152,
0.0924084484577179,
0.03284452483057976,
-0.03279178589582443,
0.024445895105600357,
-0.235528826713562,
-0.11127613484859467,
0.09021495282649994,
0.05417485162615776,
0.193497896194458,
-0.08185399323701859,
-0.040248170495033264,
-0.016264650970697403,
-0.037794359028339386,
0.04324537515640259,
-0.031167980283498764,
0.09143637120723724,
0.007955966517329216,
-0.028644300997257233,
0.003533848561346531,
-0.030122781172394753,
0.09565956890583038,
0.040718574076890945,
0.023775748908519745,
-0.07013581693172455,
-0.0054178666323423386,
0.11203330010175705,
-0.036917805671691895,
0.09911030530929565,
0.042499467730522156,
0.07599920779466629,
-0.09626148641109467,
-0.06005818769335747,
-0.07348111271858215,
0.04731171578168869,
-0.04155009239912033,
-0.054214395582675934,
-0.06328213959932327,
0.05594296008348465,
0.03548196703195572,
0.0095615079626441,
-0.00007957220077514648,
-0.03883003443479538,
0.04344029724597931,
0.09467766433954239,
0.07708833366632462,
-0.033608146011829376,
-0.06938691437244415,
-0.05082879960536957,
-0.047385599464178085,
0.06748652458190918,
-0.09541524946689606,
0.019665000960230827,
0.025098660960793495,
0.011258354410529137,
0.08863463997840881,
0.033076319843530655,
-0.13904868066310883,
0.012096826918423176,
0.0321013368666172,
-0.1235903650522232,
-0.11111140251159668,
-0.01980605535209179,
0.03348948061466217,
-0.035435114055871964,
0.053961995989084244,
0.1479470282793045,
-0.03755640238523483,
-0.03151934966444969,
-0.04982990026473999,
0.03801097348332405,
-0.01986805349588394,
0.04990804195404053,
0.06421720236539841,
0.031131675466895103,
-0.07225093990564346,
0.0756632387638092,
0.04069706052541733,
-0.03680934011936188,
0.041713908314704895,
0.037914857268333435,
-0.09242971241474152,
-0.07848237454891205,
-0.056204602122306824,
0.09461552649736404,
-0.0227812472730875,
-0.045216407626867294,
-0.0010331273078918457,
-0.08266741782426834,
0.06929368525743484,
0.07772580534219742,
0.04896481707692146,
0.039985161274671555,
-0.08704013377428055,
0.016679121181368828,
-0.054309532046318054,
0.03657405823469162,
-0.027979684993624687,
-0.0044150445610284805,
-0.05525009334087372,
0.06923387944698334,
0.0638284683227539,
0.10025414824485779,
-0.03467148542404175,
-0.07478774338960648,
-0.08243195712566376,
-0.012812932953238487,
-0.06056524068117142,
-0.027703814208507538,
-0.0759323239326477,
-0.00875617004930973,
0.003881265642121434,
-0.0032950956374406815,
0.02411285787820816,
0.03751308470964432,
-0.04202540963888168,
-0.01796737313270569,
-0.038087282329797745,
0.038723886013031006,
-0.062416136264801025,
0.005603735335171223,
0.01597241684794426,
-0.035515133291482925,
0.09134354442358017,
0.03644564002752304,
-0.013231288641691208,
0.04313923418521881,
-0.030626466497778893,
0.038831159472465515,
-0.019647493958473206,
0.00023438408970832825,
-0.02406771294772625,
-0.10816238820552826,
-0.002879432402551174,
0.004408709704875946,
-0.02512243203818798,
0.00810320395976305,
0.061819832772016525,
-0.07335826754570007,
0.08818189799785614,
0.046671655029058456,
-0.0298256017267704,
-0.07092500478029251,
0.04052593931555748,
-0.0157601460814476,
0.02978048473596573,
0.06732121855020523,
-0.03567982465028763,
0.05300174653530121,
-0.09916146099567413,
-0.02899846062064171,
0.004649936221539974,
-0.0045011043548583984,
-0.01094144769012928,
-0.054787494242191315,
-0.0033048493787646294,
0.00776053499430418,
0.17358365654945374,
-0.017283707857131958,
0.038816697895526886,
0.01317417062819004,
0.007622553035616875,
0.05046958848834038,
-0.014549819752573967,
0.0703343003988266,
-0.00924600102007389,
-0.024062955752015114,
-0.014696943573653698,
0.03784167766571045,
0.006894132122397423,
0.006008412688970566,
0.13939598202705383,
0.04602964594960213,
0.08505930006504059,
0.07470765709877014,
0.012544063851237297,
0.016995331272482872,
-0.13127610087394714,
-0.08925824612379074,
0.004291294142603874,
0.058282144367694855,
-0.018352733924984932,
0.01361062005162239,
0.08990784734487534,
-0.08590641617774963,
0.07114266604185104,
0.05049287900328636,
-0.04776164889335632,
-0.12787997722625732,
-0.19546324014663696,
-0.024974362924695015,
-0.029944712296128273,
-0.010843422263860703,
-0.09036600589752197,
0.01894339732825756,
0.09203784912824631,
0.023725101724267006,
-0.010205661877989769,
0.09378446638584137,
-0.10401460528373718,
-0.031482841819524765,
0.04461091756820679,
-0.027761561796069145,
0.015355010516941547,
0.044816162437200546,
0.02255634032189846,
-0.0024932995438575745,
0.04454762116074562,
0.03995358943939209,
0.045418813824653625,
0.027828650549054146,
0.052502766251564026,
-0.024636711925268173,
-0.07341121137142181,
-0.03313330188393593,
-0.00535163376480341,
0.05297781899571419,
0.13791441917419434,
0.023281145840883255,
-0.06838513165712357,
0.006840420421212912,
0.11011593043804169,
-0.029289837926626205,
-0.04981524497270584,
-0.10713662207126617,
0.23886366188526154,
0.021960457786917686,
0.0029651615768671036,
-0.004389056470245123,
-0.04500854015350342,
0.006075166165828705,
0.2089494913816452,
0.22074195742607117,
0.004925084766000509,
-0.009601922705769539,
0.008116453886032104,
-0.012356335297226906,
0.03836337476968765,
0.14474701881408691,
0.002877555787563324,
0.25024059414863586,
-0.04821017384529114,
0.035920269787311554,
-0.04353800415992737,
-0.03819649666547775,
-0.10313399136066437,
0.06822071224451065,
-0.010815499350428581,
0.008470655418932438,
-0.03223460540175438,
0.0689677894115448,
-0.039678219705820084,
-0.1805538535118103,
0.005272953771054745,
-0.00009154202416539192,
-0.05852925404906273,
0.010934388265013695,
0.0016719615086913109,
0.022005323320627213,
0.08140738308429718,
-0.019035331904888153,
-0.004407987929880619,
0.13052824139595032,
0.018244776874780655,
-0.09744073450565338,
-0.05897211283445358,
0.11140858381986618,
0.014680925756692886,
0.14028698205947876,
0.01073156576603651,
0.07643736898899078,
0.0856480747461319,
0.022284384816884995,
-0.0964975506067276,
0.04268834367394447,
-0.021690133959054947,
-0.027621399611234665,
0.006930594332516193,
0.1091400533914566,
-0.007999025285243988,
0.060289688408374786,
0.025684358552098274,
-0.09082817286252975,
0.06182509660720825,
0.01258143037557602,
-0.03452331945300102,
-0.08060270547866821,
0.08318907022476196,
-0.09101590514183044,
0.15541289746761322,
0.12176351994276047,
-0.012842491269111633,
-0.04580420255661011,
-0.03048519417643547,
0.018741218373179436,
-0.00020863255485892296,
0.06048499047756195,
-0.026133760809898376,
-0.13413745164871216,
0.02052076905965805,
-0.07999163866043091,
0.027415618300437927,
-0.24611780047416687,
-0.08983141928911209,
0.029626525938510895,
-0.018205583095550537,
-0.017285171896219254,
0.0501643642783165,
0.04463188350200653,
0.02858172543346882,
-0.035907894372940063,
0.015946421772241592,
-0.03592393174767494,
0.0603756345808506,
-0.10842044651508331,
-0.09354785829782486
] |
null | null | transformers | # MultiBERTs Seed 1 Checkpoint 1700k (uncased)
Seed 1 intermediate checkpoint 1700k MultiBERTs (pretrained BERT) model on the English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
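As a rough illustration of the fine-tuning route (a sketch only, not an official recipe; the two-class task below is a placeholder), the checkpoint can be loaded into a standard `transformers` task head in recent library versions:

```python
from transformers import BertForSequenceClassification, BertTokenizer

# Hypothetical sequence-classification setup; the classification head is freshly
# initialized and only becomes useful after fine-tuning on labeled data.
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-1700k')
model = BertForSequenceClassification.from_pretrained('multiberts-seed-1-1700k', num_labels=2)
inputs = tokenizer("Replace me by any labeled example.", return_tensors='pt')
logits = model(**inputs).logits  # shape [1, 2]; untrained head until fine-tuned
```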
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-1700k')
model = BertModel.from_pretrained("multiberts-seed-1-1700k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
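In recent versions of `transformers`, the returned `output` exposes the token-level features as `output.last_hidden_state` (shape `[batch_size, sequence_length, hidden_size]`) and a pooled `[CLS]` representation as `output.pooler_output`, for example:

```python
# Continuing from the snippet above (recent `transformers` versions)
token_features = output.last_hidden_state  # per-token contextual embeddings
sentence_features = output.pooler_output   # pooled [CLS] representation
```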
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a minimal Python sketch is given after the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
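The sketch below reimplements the 15% / 80-10-10 rule in plain Python for readability; it is not the original preprocessing code, and `vocab` is assumed to be a list of wordpiece tokens:

```python
import random

def mask_tokens(tokens, vocab, mask_token='[MASK]', mlm_probability=0.15, seed=0):
    """Illustrative masking following the 15% / 80-10-10 rule described above."""
    rng = random.Random(seed)
    inputs, labels = [], []
    for token in tokens:
        if rng.random() < mlm_probability:
            labels.append(token)          # this position becomes a prediction target
            roll = rng.random()
            if roll < 0.8:
                inputs.append(mask_token)         # 80%: replace with [MASK]
            elif roll < 0.9:
                inputs.append(rng.choice(vocab))  # 10%: random token (the original code also ensures it differs)
            else:
                inputs.append(token)              # 10%: keep the token unchanged
        else:
            inputs.append(token)
            labels.append(None)           # ignored by the MLM loss
    return inputs, labels
```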
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
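For reference, those hyperparameters map roughly onto standard PyTorch/`transformers` utilities as sketched below; this is not the original TPU training code, and `model` stands in for the BERT model being pretrained (e.g. a `BertForPreTraining` instance):

```python
import torch
from transformers import get_linear_schedule_with_warmup

optimizer = torch.optim.AdamW(          # Adam with decoupled weight decay
    model.parameters(),
    lr=1e-4,
    betas=(0.9, 0.999),
    weight_decay=0.01,
)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10_000,            # linear warmup
    num_training_steps=2_000_000,       # two million steps in total; linear decay after warmup
)
```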
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-1-1700k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 1 Checkpoint 1700k (uncased)
Seed 1 intermediate checkpoint 1700k MultiBERTs (pretrained BERT) model on the English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
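A minimal PyTorch snippet, matching the one in the full card above, would be:

```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-1700k')
model = BertModel.from_pretrained("multiberts-seed-1-1700k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```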
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitations and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 1 Checkpoint 1700k (uncased)\nSeed 1 intermediate checkpoint 1700k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 1 Checkpoint 1700k (uncased)\nSeed 1 intermediate checkpoint 1700k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 1 Checkpoint 1700k (uncased)\nSeed 1 intermediate checkpoint 1700k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08525335788726807,
-0.005387924611568451,
-0.0020799485500901937,
0.06804143637418747,
0.08519142121076584,
0.0009503103792667389,
0.10951805859804153,
0.04891733452677727,
-0.027409382164478302,
0.023760724812746048,
0.09695079922676086,
0.028568025678396225,
0.04271320626139641,
0.0659223347902298,
0.09755776822566986,
-0.25562185049057007,
0.049246348440647125,
-0.06338362395763397,
0.059363409876823425,
0.07525722682476044,
0.10018414258956909,
-0.07259976863861084,
0.06207318231463432,
0.03917202726006508,
-0.08554542809724808,
-0.016989659518003464,
-0.01609622873365879,
-0.035680558532476425,
0.10047628730535507,
0.07036793231964111,
0.06192900985479355,
0.0019720233976840973,
0.058596059679985046,
-0.08821524679660797,
0.0160011425614357,
0.046112608164548874,
0.0014585177414119244,
0.02419794537127018,
-0.009927520528435707,
0.015658695250749588,
0.11181612312793732,
0.0408538356423378,
0.07765483856201172,
0.03700881451368332,
-0.09508270770311356,
-0.1177188977599144,
-0.0777316689491272,
0.10621830821037292,
0.05109431594610214,
0.039674170315265656,
-0.00558562483638525,
0.07115249335765839,
-0.026542507112026215,
0.07552941143512726,
0.10220302641391754,
-0.2534438669681549,
-0.009349782019853592,
0.06964080035686493,
0.042857956141233444,
0.04896022379398346,
0.013036085292696953,
0.02620851993560791,
0.007930982857942581,
0.041728485375642776,
0.030659060925245285,
-0.022685319185256958,
0.12072406709194183,
-0.04612475261092186,
-0.15085488557815552,
-0.04506044462323189,
0.11805121600627899,
-0.007974090054631233,
-0.12382803857326508,
-0.09663878381252289,
-0.03206354379653931,
0.11290644109249115,
-0.002881621941924095,
-0.01909935474395752,
-0.006084529682993889,
0.013243591412901878,
0.028484556823968887,
-0.09469437599182129,
-0.08600851893424988,
-0.029424551874399185,
-0.037588100880384445,
0.12772901356220245,
0.04742404818534851,
0.05371886491775513,
-0.03607327491044998,
0.08665021508932114,
-0.11452615261077881,
-0.03935105353593826,
-0.05192524939775467,
-0.08070479333400726,
-0.019388912245631218,
0.007838612422347069,
-0.023928862065076828,
-0.08320407569408417,
-0.060257717967033386,
0.11233025789260864,
0.03191186860203743,
0.03127478063106537,
-0.0071455007418990135,
0.04234125092625618,
0.07488024234771729,
0.09540151059627533,
-0.03757638484239578,
0.05206482857465744,
0.03560652211308479,
-0.023279082030057907,
0.0561189241707325,
-0.04954798147082329,
-0.09868848323822021,
0.07726696878671646,
0.00044125132262706757,
0.039313219487667084,
0.02806686796247959,
0.03179777041077614,
-0.010059146210551262,
-0.07096930593252182,
0.1687823385000229,
-0.07477042078971863,
-0.009583941660821438,
-0.018019413575530052,
0.013143595308065414,
0.04338790476322174,
0.035222459584474564,
-0.009250687435269356,
-0.04718996211886406,
-0.007945913821458817,
-0.05870421230792999,
-0.026920435950160027,
-0.055053144693374634,
-0.11703699082136154,
-0.0012791617773473263,
-0.040083564817905426,
-0.034049488604068756,
-0.14226186275482178,
-0.21854238212108612,
-0.019046055153012276,
0.0629308745265007,
-0.0021870778873562813,
-0.010820381343364716,
0.024332506582140923,
0.0185126680880785,
-0.021623823791742325,
0.011401011608541012,
-0.04313273727893829,
-0.00029771775007247925,
-0.005194626748561859,
-0.03571655973792076,
0.05709835886955261,
-0.03696105256676674,
0.023107923567295074,
-0.07033329457044601,
0.02244982309639454,
-0.21170851588249207,
0.09094170480966568,
-0.036697838455438614,
0.003483761101961136,
-0.038636207580566406,
-0.04107941314578056,
0.006199469789862633,
0.0466761589050293,
-0.009904423728585243,
0.11625605821609497,
-0.13254371285438538,
-0.05044052004814148,
0.1865614950656891,
-0.16190917789936066,
-0.0024020709097385406,
0.10334484279155731,
-0.04790278524160385,
0.057432062923908234,
0.1328524947166443,
0.09763631224632263,
0.08500298112630844,
-0.07076690346002579,
0.012558749876916409,
0.06358563154935837,
-0.0715363621711731,
0.05176593363285065,
0.08651795238256454,
-0.02531300112605095,
-0.1469045877456665,
0.02941911295056343,
-0.07159975171089172,
-0.006056825630366802,
-0.028400376439094543,
-0.020736372098326683,
0.0038735046982765198,
-0.03818885236978531,
0.021963603794574738,
0.005940885283052921,
0.018080484122037888,
-0.04075763374567032,
-0.08262668550014496,
0.022250214591622353,
0.07619035243988037,
-0.0685582086443901,
0.043982602655887604,
-0.06912525743246078,
0.05676787346601486,
-0.07483957707881927,
-0.002415724564343691,
-0.16792672872543335,
-0.01787636987864971,
0.04553563892841339,
-0.04825295880436897,
0.049284521490335464,
0.0887504294514656,
0.0009722127579152584,
0.12255392968654633,
-0.03884689509868622,
0.005026082042604685,
-0.003610372543334961,
-0.009516078978776932,
-0.0509081669151783,
-0.1240077093243599,
-0.08016122877597809,
-0.07034536451101303,
0.09826432168483734,
-0.07286760210990906,
0.028834305703639984,
-0.07345931231975555,
-0.024532722309231758,
-0.007875289767980576,
-0.06015662103891373,
-0.004839709959924221,
0.010467984713613987,
-0.02833624556660652,
-0.04628196358680725,
0.04595367610454559,
0.05154654011130333,
-0.06150268018245697,
0.07928954809904099,
-0.10317514091730118,
-0.06659163534641266,
0.05394810438156128,
0.01135246455669403,
-0.0839305892586708,
0.09325169026851654,
-0.020438669249415398,
-0.014284221455454826,
-0.061080530285835266,
-0.04415367543697357,
0.19005745649337769,
-0.024444270879030228,
0.10165484249591827,
-0.09200102090835571,
0.0028563605155795813,
0.031249886378645897,
-0.04612258821725845,
-0.017998863011598587,
0.06131194531917572,
0.050243575125932693,
-0.19299253821372986,
0.014991860836744308,
0.04881526529788971,
0.07825575023889542,
0.11053861677646637,
0.027774685993790627,
-0.022534290328621864,
-0.04654058441519737,
-0.011597099713981152,
0.005452814511954784,
0.05380382016301155,
-0.032140277326107025,
-0.008809041231870651,
0.030561484396457672,
0.05842400714755058,
0.019135618582367897,
-0.08138881623744965,
0.03744528070092201,
0.06932710111141205,
-0.016396861523389816,
-0.03861688822507858,
-0.02526247873902321,
-0.059343863278627396,
0.06268958747386932,
0.04998551681637764,
0.03809747472405434,
0.023393748328089714,
-0.012803969904780388,
-0.13598395884037018,
0.18936094641685486,
-0.11400330066680908,
-0.2584306001663208,
-0.10774663835763931,
-0.054808199405670166,
-0.022128989920020103,
0.04132680222392082,
0.059259191155433655,
-0.030873656272888184,
-0.04221966117620468,
-0.11608146131038666,
0.06263846158981323,
-0.06621773540973663,
-0.028693323954939842,
-0.008744640275835991,
-0.05281703174114227,
-0.021543066948652267,
-0.12767107784748077,
-0.0128841083496809,
-0.03168044239282608,
-0.07428453862667084,
0.005782516673207283,
-0.03558774292469025,
0.027054408565163612,
0.1390511393547058,
0.034128814935684204,
-0.018949519842863083,
-0.017618805170059204,
0.19063648581504822,
0.012031227350234985,
0.06093765050172806,
0.11039605736732483,
-0.02688230201601982,
0.0536297969520092,
0.04234577715396881,
0.024824872612953186,
-0.04739374667406082,
0.0124756945297122,
-0.01757263019680977,
-0.1232646182179451,
-0.17156298458576202,
-0.07162441313266754,
-0.001981170382350683,
0.010780684649944305,
0.021775923669338226,
0.036622367799282074,
0.019545475021004677,
0.04034369811415672,
-0.029907139018177986,
0.02775333635509014,
-0.015217732638120651,
0.08010147511959076,
0.025352373719215393,
-0.07738711684942245,
0.09217430651187897,
-0.0617978535592556,
0.01888393983244896,
0.11012184619903564,
-0.06364887207746506,
0.19138796627521515,
0.025606805458664894,
0.06026902422308922,
0.10329275578260422,
0.019224967807531357,
0.05471034348011017,
0.08486489951610565,
-0.046231504529714584,
0.005467687733471394,
-0.06050772964954376,
-0.050997305661439896,
-0.037509385496377945,
0.048722244799137115,
0.027962230145931244,
0.01773925870656967,
-0.11877746880054474,
0.02468513324856758,
-0.0009412808576598763,
0.13439559936523438,
0.04227061942219734,
-0.11590680480003357,
-0.12485899776220322,
0.03415628522634506,
-0.04589904844760895,
-0.06459023803472519,
0.031853217631578445,
0.05941232666373253,
-0.15025699138641357,
0.04258647933602333,
-0.004933124408125877,
0.06820482015609741,
-0.09042186290025711,
0.014948980882763863,
-0.048145681619644165,
-0.000017169862985610962,
0.003468482056632638,
0.06743328273296356,
-0.13067197799682617,
0.11181740462779999,
0.020505500957369804,
0.048428285866975784,
-0.08213955163955688,
0.015728577971458435,
-0.012073013931512833,
0.11205565184354782,
0.1173606812953949,
0.04572945833206177,
-0.05825123190879822,
-0.016829676926136017,
-0.0451369434595108,
0.019137470051646233,
0.05487256497144699,
-0.07665795832872391,
0.05738763511180878,
0.009924287907779217,
0.007696878630667925,
-0.02237735688686371,
0.015865277498960495,
-0.13327443599700928,
-0.11994460225105286,
0.059131767600774765,
-0.07485661655664444,
-0.10702050477266312,
-0.05764345824718475,
-0.06049356982111931,
-0.048171378672122955,
0.20998544991016388,
-0.11584176123142242,
-0.08875913918018341,
-0.09698363393545151,
-0.007269829511642456,
0.04497997835278511,
-0.06702595949172974,
0.04630459100008011,
-0.039906688034534454,
0.08855323493480682,
-0.05025242269039154,
-0.10748635232448578,
0.034360069781541824,
-0.11497758328914642,
-0.11240798234939575,
-0.044640664011240005,
0.10319563001394272,
0.11482194066047668,
0.03692367300391197,
0.011255912482738495,
0.010257678106427193,
-0.0018464401364326477,
-0.11829853057861328,
0.01657208614051342,
0.13067732751369476,
-0.006913585588335991,
0.07163418084383011,
-0.057143330574035645,
0.019511591643095016,
-0.015829607844352722,
-0.003975655883550644,
0.13022853434085846,
0.1888350248336792,
-0.06644287705421448,
0.17496317625045776,
0.2051701545715332,
-0.10665016621351242,
-0.19011452794075012,
-0.0525381937623024,
-0.002339392900466919,
0.043510615825653076,
0.04797641932964325,
-0.18323791027069092,
0.09245674312114716,
0.0348917692899704,
-0.03387482836842537,
0.01941344514489174,
-0.23734480142593384,
-0.11268298327922821,
0.09457236528396606,
0.0547688789665699,
0.19549348950386047,
-0.07852642238140106,
-0.039679184556007385,
-0.013360245153307915,
-0.036013662815093994,
0.04368244856595993,
-0.0289138350635767,
0.09003964066505432,
0.006622573360800743,
-0.03315194323658943,
0.0036123013123869896,
-0.032185956835746765,
0.09398958086967468,
0.04193161055445671,
0.023930151015520096,
-0.06867771595716476,
-0.003730557858943939,
0.10904572159051895,
-0.03710773587226868,
0.09877555072307587,
0.044571347534656525,
0.07742263376712799,
-0.09392841160297394,
-0.06012413650751114,
-0.07512256503105164,
0.04510984942317009,
-0.041193827986717224,
-0.05441805720329285,
-0.06450756639242172,
0.05866529420018196,
0.03765063360333443,
0.008285927586257458,
-0.00043417513370513916,
-0.03904231637716293,
0.04548158869147301,
0.09433423727750778,
0.07539713382720947,
-0.03934398293495178,
-0.06853321194648743,
-0.050655465573072433,
-0.049079813063144684,
0.06841927021741867,
-0.09490913897752762,
0.019133703783154488,
0.025084011256694794,
0.011794417165219784,
0.08891456574201584,
0.033073410391807556,
-0.13824504613876343,
0.010403096675872803,
0.03355647251009941,
-0.12254569679498672,
-0.11317436397075653,
-0.019370587542653084,
0.03472927212715149,
-0.034642912447452545,
0.052080992609262466,
0.1443503201007843,
-0.0380268357694149,
-0.031155765056610107,
-0.04975120723247528,
0.03945890814065933,
-0.01849127747118473,
0.04741152375936508,
0.06310439109802246,
0.0316198468208313,
-0.07173169404268265,
0.07582014799118042,
0.042088188230991364,
-0.04193748161196709,
0.039818305522203445,
0.042087115347385406,
-0.09325403720140457,
-0.0786227434873581,
-0.06072944402694702,
0.08958730101585388,
-0.025674985721707344,
-0.04412310943007469,
0.0012650247663259506,
-0.08256569504737854,
0.07088927924633026,
0.07759995013475418,
0.047832489013671875,
0.038463689386844635,
-0.08760015666484833,
0.01700516603887081,
-0.05437387526035309,
0.03707064315676689,
-0.029819445684552193,
-0.005088647827506065,
-0.05329376459121704,
0.06858143210411072,
0.06567535549402237,
0.09860938042402267,
-0.03415396064519882,
-0.0754256621003151,
-0.08211865276098251,
-0.01160484366118908,
-0.06262437254190445,
-0.030253153294324875,
-0.08027705550193787,
-0.007707284297794104,
0.002747483318671584,
-0.0031101983040571213,
0.022262390702962875,
0.03642267733812332,
-0.04207293689250946,
-0.01684911921620369,
-0.03570086136460304,
0.03889241814613342,
-0.06210840493440628,
0.006018957123160362,
0.016911517828702927,
-0.03505508229136467,
0.09205926954746246,
0.03594314306974411,
-0.012915782630443573,
0.04249879717826843,
-0.023837540298700333,
0.03903501480817795,
-0.02111576870083809,
0.0006087629590183496,
-0.023655809462070465,
-0.10997159779071808,
-0.00257443031296134,
0.003536645323038101,
-0.027309544384479523,
0.009889718145132065,
0.06045236811041832,
-0.07355841994285583,
0.08737298101186752,
0.04853089153766632,
-0.03073740378022194,
-0.07164269685745239,
0.039636433124542236,
-0.014828016981482506,
0.02754868194460869,
0.06614232063293457,
-0.034971438348293304,
0.05117955803871155,
-0.09923996031284332,
-0.028466008603572845,
0.0036578637082129717,
-0.004443995654582977,
-0.00874224491417408,
-0.052681028842926025,
-0.004189951345324516,
0.006832432001829147,
0.1734720766544342,
-0.021635673940181732,
0.03408728539943695,
0.01596899703145027,
0.006329866126179695,
0.04493333026766777,
-0.015553567558526993,
0.07137960195541382,
-0.008420092985033989,
-0.026182539761066437,
-0.01290087029337883,
0.037211883813142776,
0.006876852363348007,
0.007905237376689911,
0.14259731769561768,
0.045801613479852676,
0.09163303673267365,
0.07381752133369446,
0.013462869450449944,
0.016463283449411392,
-0.1335318386554718,
-0.09099935740232468,
0.0031677773222327232,
0.058323804289102554,
-0.017396576702594757,
0.006971828639507294,
0.09269203245639801,
-0.0852166935801506,
0.0703226774930954,
0.050869476050138474,
-0.047267742455005646,
-0.12822380661964417,
-0.19226312637329102,
-0.023565316572785378,
-0.029204288497567177,
-0.010837377980351448,
-0.09011354297399521,
0.01808960735797882,
0.08373178541660309,
0.024695418775081635,
-0.009524606168270111,
0.09429283440113068,
-0.10423259437084198,
-0.03018980845808983,
0.044537194073200226,
-0.025885239243507385,
0.0155487647280097,
0.04395557940006256,
0.022572534158825874,
-0.004242816939949989,
0.04360778257250786,
0.03807010129094124,
0.044171784073114395,
0.024500936269760132,
0.05341801792383194,
-0.024336563423275948,
-0.07252749800682068,
-0.034018248319625854,
-0.006067248061299324,
0.05505020543932915,
0.13678015768527985,
0.02399892546236515,
-0.06746739894151688,
0.007335623260587454,
0.110570527613163,
-0.02932240441441536,
-0.04902517795562744,
-0.10766344517469406,
0.24351483583450317,
0.024406777694821358,
0.0014690200332552195,
-0.0023635816760361195,
-0.042843639850616455,
0.004885947331786156,
0.2101421058177948,
0.22223031520843506,
0.007733594626188278,
-0.009330974891781807,
0.008167405612766743,
-0.012799464166164398,
0.036694154143333435,
0.14628970623016357,
0.0028773564845323563,
0.2503772974014282,
-0.0464191734790802,
0.03911604359745979,
-0.04341711103916168,
-0.03959183394908905,
-0.09992653131484985,
0.0697469636797905,
-0.009166363626718521,
0.008100297302007675,
-0.03306848183274269,
0.07068604230880737,
-0.040942151099443436,
-0.17418619990348816,
0.0048436736688017845,
0.000017364276573061943,
-0.05846669524908066,
0.011785651557147503,
0.0006955033168196678,
0.021042101085186005,
0.08100666105747223,
-0.01689283177256584,
-0.005773536860942841,
0.12732969224452972,
0.01890636421740055,
-0.09602799266576767,
-0.06482198089361191,
0.11374130845069885,
0.02350141853094101,
0.14486226439476013,
0.011354616843163967,
0.07494240999221802,
0.08660237491130829,
0.022099360823631287,
-0.09962564706802368,
0.041246965527534485,
-0.021693844348192215,
-0.024223629385232925,
0.005514598451554775,
0.10813430696725845,
-0.00813085027039051,
0.06260748952627182,
0.02586234174668789,
-0.09293147921562195,
0.059738755226135254,
0.008833866566419601,
-0.034133829176425934,
-0.07924278825521469,
0.083224818110466,
-0.08800027519464493,
0.1573604941368103,
0.12145932018756866,
-0.014061967842280865,
-0.04673796892166138,
-0.02867555245757103,
0.018136002123355865,
-0.000006983056664466858,
0.06273289769887924,
-0.02571817860007286,
-0.13621997833251953,
0.021543558686971664,
-0.08167261630296707,
0.028463488444685936,
-0.24444475769996643,
-0.09079252928495407,
0.030218467116355896,
-0.017931833863258362,
-0.01698923110961914,
0.0507308691740036,
0.044658903032541275,
0.02826988883316517,
-0.035343460738658905,
0.01847023516893387,
-0.03633996471762657,
0.05876541882753372,
-0.1097671240568161,
-0.0934351310133934
] |
null | null | transformers | # MultiBERTs Seed 1 Checkpoint 1800k (uncased)
Seed 1 intermediate checkpoint 1800k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multiberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-1800k')
model = BertModel.from_pretrained("multiberts-seed-1-1800k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
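The per-token and pooled sentence features can then be read off the output object. Below is a minimal, self-contained sketch assuming a reasonably recent `transformers` release and the same checkpoint identifier as above:
```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-1800k')
model = BertModel.from_pretrained('multiberts-seed-1-1800k')

encoded_input = tokenizer("Replace me by any text you'd like.", return_tensors='pt')
with torch.no_grad():
    output = model(**encoded_input)

token_features = output.last_hidden_state  # shape (batch_size, sequence_length, 768)
sentence_features = output.pooler_output   # shape (batch_size, 768), pooled from the [CLS] token
```
`pooler_output` passes the `[CLS]` representation through the pretrained pooler layer; averaging `last_hidden_state` over non-padding tokens is an equally common way to obtain a sentence-level feature.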
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a minimal sketch of this rule appears after the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
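The sketch below reproduces this 80%/10%/10% rule over a list of tokens. It is illustrative only: the actual MultiBERTs preprocessing runs in the original TensorFlow BERT pipeline over WordPiece ids, skips special tokens, and caps the number of predictions per sequence.
```python
import random

def mask_for_mlm(tokens, vocab, mask_token="[MASK]", mlm_probability=0.15):
    """Apply the 80/10/10 masking rule to a list of WordPiece tokens (sketch)."""
    tokens = list(tokens)
    labels = [None] * len(tokens)      # None means "not a prediction target"
    for i, token in enumerate(tokens):
        if random.random() >= mlm_probability:
            continue                   # ~85% of positions are never selected
        labels[i] = token              # the model must recover the original token
        roll = random.random()
        if roll < 0.8:                 # 80% of selected positions -> [MASK]
            tokens[i] = mask_token
        elif roll < 0.9:               # 10% -> a random token from the vocabulary
            tokens[i] = random.choice(vocab)
        # remaining 10%: the original token is left as is
    return tokens, labels
```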
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
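For reference, a rough PyTorch equivalent of this optimizer and schedule is sketched below. The original runs used the TensorFlow BERT optimizer, so `torch.optim.AdamW` and `get_linear_schedule_with_warmup` are stand-ins rather than the exact implementation; the checkpoint identifier is the one used in the snippet above.
```python
import torch
from transformers import BertForPreTraining, get_linear_schedule_with_warmup

model = BertForPreTraining.from_pretrained("multiberts-seed-1-1800k")

optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01
)
# 10,000 warmup steps, then linear decay to zero over the two-million-step run.
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=2_000_000
)
```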
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-1-1800k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 1 Checkpoint 1800k (uncased)
Seed 1 intermediate checkpoint 1800k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 1 Checkpoint 1800k (uncased)\nSeed 1 intermediate checkpoint 1800k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 1 Checkpoint 1800k (uncased)\nSeed 1 intermediate checkpoint 1800k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 1 Checkpoint 1800k (uncased)\nSeed 1 intermediate checkpoint 1800k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08444707095623016,
-0.0009366876911371946,
-0.002168708248063922,
0.06670024991035461,
0.08619577437639236,
0.0010176151990890503,
0.11356309056282043,
0.04896564781665802,
-0.02946745790541172,
0.02334822714328766,
0.09490673243999481,
0.026657268404960632,
0.041258424520492554,
0.06484505534172058,
0.09812091290950775,
-0.2526756823062897,
0.04863634705543518,
-0.06330200284719467,
0.05593746155500412,
0.07499633729457855,
0.09986332803964615,
-0.07177865505218506,
0.061677780002355576,
0.040055107325315475,
-0.08322682231664658,
-0.017475441098213196,
-0.01628383994102478,
-0.03587951511144638,
0.10057571530342102,
0.06860265880823135,
0.06415900588035583,
0.00032052770256996155,
0.05831317603588104,
-0.08987893909215927,
0.015772458165884018,
0.04630511626601219,
0.0024246983230113983,
0.02423609048128128,
-0.010246135294437408,
0.012655476108193398,
0.1044059544801712,
0.04091986268758774,
0.07851298153400421,
0.03736373409628868,
-0.09605281054973602,
-0.11146654188632965,
-0.07688014209270477,
0.1011100709438324,
0.05149373039603233,
0.043890517204999924,
-0.005592452362179756,
0.07209215313196182,
-0.027317248284816742,
0.07364754378795624,
0.1051049679517746,
-0.2502446174621582,
-0.0079270638525486,
0.06845291703939438,
0.03928925096988678,
0.0460544228553772,
0.012807144783437252,
0.025514183565974236,
0.007263906300067902,
0.04182792454957962,
0.02885916456580162,
-0.0227409265935421,
0.12063588201999664,
-0.04491850361227989,
-0.15025421977043152,
-0.044515710324048996,
0.1188100203871727,
-0.0077594611793756485,
-0.12481696903705597,
-0.0952613577246666,
-0.031455714255571365,
0.10958168655633926,
-0.0031989719718694687,
-0.016515783965587616,
-0.004171510227024555,
0.012333281338214874,
0.02585701458156109,
-0.09391206502914429,
-0.08601194620132446,
-0.027519715949892998,
-0.038401100784540176,
0.12874849140644073,
0.046327777206897736,
0.05208086594939232,
-0.03567766770720482,
0.08603920787572861,
-0.11706969141960144,
-0.039975542575120926,
-0.05131831765174866,
-0.07909081876277924,
-0.020400268957018852,
0.00785287469625473,
-0.02846393547952175,
-0.08686111122369766,
-0.06073206290602684,
0.1146557629108429,
0.030994508415460587,
0.030255399644374847,
-0.004459553398191929,
0.04191754758358002,
0.07524794340133667,
0.09639108180999756,
-0.03793933615088463,
0.056005410850048065,
0.03379085287451744,
-0.024884313344955444,
0.05641819164156914,
-0.04997888207435608,
-0.10017586499452591,
0.08047333359718323,
0.0008694669231772423,
0.03776773065328598,
0.026730215176939964,
0.03145405650138855,
-0.012701515108346939,
-0.0725170373916626,
0.15994852781295776,
-0.07616043835878372,
-0.010776437819004059,
-0.018484793603420258,
0.012788867577910423,
0.0421634316444397,
0.034764546900987625,
-0.007068472448736429,
-0.047318972647190094,
-0.004811527207493782,
-0.057358674705028534,
-0.025387687608599663,
-0.056235089898109436,
-0.11514297127723694,
-0.0010976223275065422,
-0.04312452673912048,
-0.03309103846549988,
-0.14163868129253387,
-0.21658706665039062,
-0.019007248803973198,
0.06616014987230301,
-0.0016653118655085564,
-0.011733745224773884,
0.024424990639090538,
0.017356643453240395,
-0.02175087481737137,
0.011400653049349785,
-0.04595211148262024,
-0.000367642380297184,
-0.006593605503439903,
-0.036096759140491486,
0.05507369339466095,
-0.03805966675281525,
0.02464129403233528,
-0.0710904449224472,
0.02150464989244938,
-0.21278691291809082,
0.08988958597183228,
-0.03473867475986481,
0.0054515041410923,
-0.037790123373270035,
-0.04042212665081024,
0.005072735249996185,
0.046009719371795654,
-0.009145911782979965,
0.11687175929546356,
-0.13179659843444824,
-0.050561122596263885,
0.17881420254707336,
-0.15935960412025452,
-0.0013838335871696472,
0.10068876296281815,
-0.04944335296750069,
0.057231053709983826,
0.13118693232536316,
0.1013091653585434,
0.08283150941133499,
-0.07053285837173462,
0.012602300383150578,
0.06204405799508095,
-0.07009410858154297,
0.05474713444709778,
0.08767002820968628,
-0.027135232463479042,
-0.1411488652229309,
0.02869182452559471,
-0.07309618592262268,
-0.006090013310313225,
-0.02765466272830963,
-0.021579137071967125,
0.004565905779600143,
-0.03872789070010185,
0.02358385920524597,
0.006017101928591728,
0.018322018906474113,
-0.039852846413850784,
-0.08293797075748444,
0.02571081556379795,
0.07575112581253052,
-0.0678238719701767,
0.044412992894649506,
-0.07005684822797775,
0.05575801059603691,
-0.0767274796962738,
-0.004193509928882122,
-0.16958783566951752,
-0.021441008895635605,
0.043885089457035065,
-0.04864652082324028,
0.05200326442718506,
0.08903244882822037,
0.0005027442239224911,
0.1193588525056839,
-0.038091011345386505,
0.004786362871527672,
-0.007368091493844986,
-0.009845292195677757,
-0.04890928789973259,
-0.12293119728565216,
-0.07926351577043533,
-0.0692090168595314,
0.10274618864059448,
-0.06946098059415817,
0.029133455827832222,
-0.07277675718069077,
-0.025059977546334267,
-0.00806896947324276,
-0.060856059193611145,
-0.004856185056269169,
0.01047173049300909,
-0.029697565361857414,
-0.04658431559801102,
0.04649458825588226,
0.053116053342819214,
-0.0609266459941864,
0.07954677939414978,
-0.10163766145706177,
-0.06590060889720917,
0.05451914668083191,
0.007951635867357254,
-0.08404962718486786,
0.09596152603626251,
-0.019501149654388428,
-0.013417419977486134,
-0.06139953434467316,
-0.0421728752553463,
0.19739870727062225,
-0.023006368428468704,
0.1019471287727356,
-0.09085160493850708,
0.002192868385463953,
0.030746322125196457,
-0.04778147116303444,
-0.017046233639121056,
0.06030011922121048,
0.051469821482896805,
-0.1844891905784607,
0.015150658786296844,
0.05025114864110947,
0.07689705491065979,
0.10905060172080994,
0.02667183056473732,
-0.023270126432180405,
-0.046114906668663025,
-0.012671488337218761,
0.005849217530339956,
0.05515788495540619,
-0.03066353127360344,
-0.00887245126068592,
0.03073514997959137,
0.05921245738863945,
0.018555505201220512,
-0.08118443936109543,
0.03715744614601135,
0.07008214294910431,
-0.015927735716104507,
-0.03680531680583954,
-0.028459735214710236,
-0.05903840810060501,
0.061591438949108124,
0.05219035595655441,
0.03856182098388672,
0.024772675707936287,
-0.013248035684227943,
-0.13613595068454742,
0.18910738825798035,
-0.11377248913049698,
-0.2588566839694977,
-0.10878992080688477,
-0.059022970497608185,
-0.02403210662305355,
0.04211575165390968,
0.05778459459543228,
-0.03187700733542442,
-0.042723722755908966,
-0.11582035571336746,
0.061762891709804535,
-0.06613699346780777,
-0.03071674518287182,
-0.007656741887331009,
-0.05224324017763138,
-0.0193379744887352,
-0.1271444857120514,
-0.013055618852376938,
-0.030648095533251762,
-0.07417847216129303,
0.006612714380025864,
-0.03352447971701622,
0.02828008495271206,
0.14149411022663116,
0.03333808109164238,
-0.018601233139634132,
-0.016206935048103333,
0.18950319290161133,
0.010175930336117744,
0.05952685326337814,
0.11000122874975204,
-0.027961067855358124,
0.05280778184533119,
0.044850654900074005,
0.02551313117146492,
-0.04627051204442978,
0.013407569378614426,
-0.01475790236145258,
-0.12205956131219864,
-0.17363595962524414,
-0.07342401891946793,
-0.001371136400848627,
0.007322861347347498,
0.02087019756436348,
0.03715038299560547,
0.024476854130625725,
0.04025803506374359,
-0.028290726244449615,
0.026017067953944206,
-0.016827188432216644,
0.08132102340459824,
0.026290759444236755,
-0.07592366635799408,
0.09170237183570862,
-0.06112684682011604,
0.015789955854415894,
0.10963742434978485,
-0.06475326418876648,
0.19307303428649902,
0.02604178711771965,
0.06405754387378693,
0.10286845266819,
0.017041299492120743,
0.05516499653458595,
0.08370176702737808,
-0.04575633630156517,
0.004810444079339504,
-0.060134612023830414,
-0.05050620436668396,
-0.03590214625000954,
0.04878509044647217,
0.026000507175922394,
0.019328515976667404,
-0.11867232620716095,
0.022843310609459877,
-0.0009866247419267893,
0.1371476799249649,
0.04482059180736542,
-0.11576058715581894,
-0.12268244475126266,
0.03321448713541031,
-0.045388925820589066,
-0.06360994279384613,
0.031789496541023254,
0.05424167588353157,
-0.1512880027294159,
0.04224266856908798,
-0.006320270709693432,
0.06849269568920135,
-0.09018972516059875,
0.015697907656431198,
-0.04448215290904045,
-0.0023792479187250137,
0.0033667203970253468,
0.06891033053398132,
-0.1346219778060913,
0.10953878611326218,
0.020063988864421844,
0.047072526067495346,
-0.08023899048566818,
0.015643319115042686,
-0.011299777776002884,
0.1105489656329155,
0.11631614714860916,
0.04559934884309769,
-0.0511615052819252,
-0.01570053957402706,
-0.04554212838411331,
0.018842320889234543,
0.05670959874987602,
-0.07808489352464676,
0.058881014585494995,
0.009601665660738945,
0.0077856299467384815,
-0.022115476429462433,
0.01716233789920807,
-0.13218306005001068,
-0.12129142880439758,
0.0604458712041378,
-0.07500960677862167,
-0.10311174392700195,
-0.05743909627199173,
-0.061748016625642776,
-0.05026617646217346,
0.2113957405090332,
-0.11017594486474991,
-0.08839485049247742,
-0.09755417704582214,
-0.010245677083730698,
0.044067028909921646,
-0.06597233563661575,
0.047561902552843094,
-0.03967997431755066,
0.0905984416604042,
-0.04934074729681015,
-0.10777406394481659,
0.03293657302856445,
-0.11348103731870651,
-0.11064247786998749,
-0.044922877103090286,
0.10283592343330383,
0.11339281499385834,
0.0380120649933815,
0.009102159179747105,
0.012248814105987549,
-0.0035527218133211136,
-0.1172252893447876,
0.014729779213666916,
0.1284547746181488,
-0.004583269357681274,
0.07336914539337158,
-0.05932643264532089,
0.023559533059597015,
-0.01783759519457817,
-0.0036533772945404053,
0.13281646370887756,
0.18611541390419006,
-0.06556671857833862,
0.17508956789970398,
0.20261353254318237,
-0.10650843381881714,
-0.19094471633434296,
-0.051189303398132324,
-0.0016578380018472672,
0.043412867933511734,
0.04918985813856125,
-0.18543854355812073,
0.09403203427791595,
0.03255143016576767,
-0.03343278169631958,
0.018632415682077408,
-0.23573166131973267,
-0.11187879741191864,
0.09354199469089508,
0.054005201905965805,
0.19275051355361938,
-0.07925067842006683,
-0.03968331217765808,
-0.014425193890929222,
-0.03960607200860977,
0.04353600740432739,
-0.029902074486017227,
0.09046328812837601,
0.00836014561355114,
-0.029384765774011612,
0.003185966983437538,
-0.03180979937314987,
0.09376660734415054,
0.04343932121992111,
0.022869601845741272,
-0.06938166916370392,
-0.007037026807665825,
0.11039890348911285,
-0.037938110530376434,
0.09924548119306564,
0.04705455154180527,
0.07618020474910736,
-0.09430563449859619,
-0.05977826938033104,
-0.07361787557601929,
0.04583756998181343,
-0.041357897222042084,
-0.05574457347393036,
-0.06471388041973114,
0.058644287288188934,
0.037259478121995926,
0.009420976042747498,
0.0022319480776786804,
-0.038710806518793106,
0.043437790125608444,
0.0904516652226448,
0.07655414193868637,
-0.04101502522826195,
-0.07299676537513733,
-0.0507221519947052,
-0.04798899218440056,
0.06602030992507935,
-0.09517807513475418,
0.019780106842517853,
0.026261279359459877,
0.012359230779111385,
0.09031963348388672,
0.03370975703001022,
-0.13837338984012604,
0.01006893627345562,
0.03270946815609932,
-0.12233812361955643,
-0.10869064927101135,
-0.02162501960992813,
0.03674934059381485,
-0.03494361788034439,
0.053510066121816635,
0.14507454633712769,
-0.03794660046696663,
-0.031047143042087555,
-0.048416510224342346,
0.037687189877033234,
-0.01813807338476181,
0.04549649357795715,
0.0643996149301529,
0.031318727880716324,
-0.07188204675912857,
0.0773756355047226,
0.04213917255401611,
-0.040565915405750275,
0.04092349112033844,
0.0411841943860054,
-0.09344582259654999,
-0.07865475118160248,
-0.05858149006962776,
0.09409557282924652,
-0.02506101317703724,
-0.04354763403534889,
-0.0019602440297603607,
-0.08287191390991211,
0.07052987813949585,
0.07719209045171738,
0.04878250136971474,
0.03773782029747963,
-0.08802208304405212,
0.015402109362185001,
-0.05456443876028061,
0.03681158274412155,
-0.030095143243670464,
-0.0040508415549993515,
-0.054478198289871216,
0.06643518805503845,
0.06641285121440887,
0.09962394088506699,
-0.03381780534982681,
-0.07754290848970413,
-0.0808199867606163,
-0.010693158954381943,
-0.059479281306266785,
-0.03139674291014671,
-0.07659092545509338,
-0.008502306416630745,
0.002699600998312235,
-0.002234082669019699,
0.02160932496190071,
0.03645416349172592,
-0.04194849729537964,
-0.01797410659492016,
-0.03681541979312897,
0.03680981323122978,
-0.060071367770433426,
0.007097330875694752,
0.016142629086971283,
-0.03449764475226402,
0.09092976897954941,
0.03516792505979538,
-0.013273687101900578,
0.041991397738456726,
-0.023866381496191025,
0.03797563910484314,
-0.022277092561125755,
0.00017687957733869553,
-0.024556990712881088,
-0.11192424595355988,
-0.003847735933959484,
0.003606153652071953,
-0.025832490995526314,
0.008338646031916142,
0.05882640182971954,
-0.07372736185789108,
0.08685653656721115,
0.04825911298394203,
-0.03079456090927124,
-0.07238205522298813,
0.03833197429776192,
-0.015562208369374275,
0.028193483129143715,
0.06779181957244873,
-0.033155407756567,
0.053754888474941254,
-0.09842342138290405,
-0.028144150972366333,
0.003751242533326149,
-0.0021684877574443817,
-0.012268662452697754,
-0.0541551411151886,
-0.005591009743511677,
0.006610888056457043,
0.1726755052804947,
-0.020248226821422577,
0.033459387719631195,
0.01593286544084549,
0.00709014106541872,
0.04617144912481308,
-0.014177879318594933,
0.0715341866016388,
-0.007029118947684765,
-0.026348181068897247,
-0.013605148531496525,
0.03701172396540642,
0.006629558280110359,
0.0067064836621284485,
0.14039933681488037,
0.04874615743756294,
0.09308981150388718,
0.07369938492774963,
0.013190512545406818,
0.016560886055231094,
-0.12989656627178192,
-0.08553346991539001,
0.003758826293051243,
0.05732905864715576,
-0.01569163054227829,
0.01266174204647541,
0.0918998047709465,
-0.08494560420513153,
0.07215070724487305,
0.05000562593340874,
-0.04702896624803543,
-0.12635762989521027,
-0.1887854039669037,
-0.02371196076273918,
-0.03082623891532421,
-0.011307457461953163,
-0.09057306498289108,
0.01720038242638111,
0.09198649972677231,
0.024454981088638306,
-0.009604138322174549,
0.09638693928718567,
-0.10854223370552063,
-0.03186145797371864,
0.04398208111524582,
-0.02713823691010475,
0.0162728950381279,
0.04241856932640076,
0.023440465331077576,
-0.004858514294028282,
0.044569190591573715,
0.03964025154709816,
0.04570832476019859,
0.027231348678469658,
0.05304981768131256,
-0.023427311331033707,
-0.07359528541564941,
-0.03249191492795944,
-0.007101600058376789,
0.05437950789928436,
0.13618430495262146,
0.021664487197995186,
-0.06691878288984299,
0.00719199376180768,
0.11218021810054779,
-0.029826566576957703,
-0.051405224949121475,
-0.10857167094945908,
0.23962682485580444,
0.023924803361296654,
0.0011480371467769146,
-0.004080374259501696,
-0.04487316682934761,
0.005492363125085831,
0.21147209405899048,
0.22160959243774414,
0.005221227183938026,
-0.009838133119046688,
0.010592497885227203,
-0.012562999501824379,
0.03705073520541191,
0.14420974254608154,
0.003289220854640007,
0.2509619891643524,
-0.04832075536251068,
0.03801801800727844,
-0.04350174590945244,
-0.03884735703468323,
-0.10176841914653778,
0.07075677067041397,
-0.009266858920454979,
0.008351474069058895,
-0.033236511051654816,
0.07083504647016525,
-0.0408807247877121,
-0.17225909233093262,
0.003809399902820587,
-0.00389277096837759,
-0.05935905873775482,
0.01176709495484829,
0.0009748712182044983,
0.019425151869654655,
0.08169043064117432,
-0.016918949782848358,
-0.006813115905970335,
0.12801975011825562,
0.019294045865535736,
-0.09557407349348068,
-0.06326941400766373,
0.11408504843711853,
0.018991682678461075,
0.14400765299797058,
0.011202923953533173,
0.07420933246612549,
0.08687134087085724,
0.02206316404044628,
-0.09806579351425171,
0.04209805279970169,
-0.022138230502605438,
-0.026102358475327492,
0.006160452961921692,
0.10740287601947784,
-0.007684692274779081,
0.06181352585554123,
0.025690890848636627,
-0.0929419994354248,
0.060752496123313904,
0.010174226015806198,
-0.033189889043569565,
-0.07961592078208923,
0.0823294073343277,
-0.09014832228422165,
0.15650105476379395,
0.12027671933174133,
-0.013336973264813423,
-0.046521976590156555,
-0.02884352020919323,
0.01734231971204281,
-0.0008410862646996975,
0.05981668457388878,
-0.02524595707654953,
-0.13595974445343018,
0.022530021145939827,
-0.08195510506629944,
0.026971176266670227,
-0.2515513300895691,
-0.0905478447675705,
0.0321233794093132,
-0.017578911036252975,
-0.0159359909594059,
0.049269601702690125,
0.04669724404811859,
0.02879016287624836,
-0.03625574707984924,
0.023861296474933624,
-0.035241518169641495,
0.05957351252436638,
-0.10916414111852646,
-0.09399516880512238
] |
null | null | transformers | # MultiBERTs Seed 1 Checkpoint 180k (uncased)
Seed 1 intermediate checkpoint 180k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multiberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-180k')
model = BertModel.from_pretrained("multiberts-seed-1-180k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
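Because the raw model can be used for masked language modeling, the MLM head can also be exercised directly. This is a sketch that assumes the `fill-mask` pipeline can resolve the same identifier used above and load the masked-language-modeling head from the pretraining weights:
```python
from transformers import pipeline

unmasker = pipeline('fill-mask', model='multiberts-seed-1-180k')
unmasker("Hello, I'm a [MASK] model.")
```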
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-1-180k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 1 Checkpoint 180k (uncased)
Seed 1 intermediate checkpoint 180k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 1 Checkpoint 180k (uncased)\nSeed 1 intermediate checkpoint 180k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 1 Checkpoint 180k (uncased)\nSeed 1 intermediate checkpoint 180k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 1 Checkpoint 180k (uncased)\nSeed 1 intermediate checkpoint 180k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08371438831090927,
-0.005477594211697578,
-0.0022014614660292864,
0.06683547794818878,
0.08463402837514877,
0.000263717258349061,
0.11341094225645065,
0.04898986965417862,
-0.0319100096821785,
0.025400174781680107,
0.0955888032913208,
0.031657036393880844,
0.04065346717834473,
0.06767050921916962,
0.09760196506977081,
-0.25715696811676025,
0.05029251426458359,
-0.06330061703920364,
0.05410982668399811,
0.0755099356174469,
0.10030964761972427,
-0.07092474400997162,
0.062296222895383835,
0.03975215554237366,
-0.08249654620885849,
-0.018927084282040596,
-0.016506528481841087,
-0.03329573571681976,
0.10141068696975708,
0.07130654901266098,
0.06268472224473953,
0.0005063880234956741,
0.05752890557050705,
-0.09071080386638641,
0.01626318320631981,
0.0469762459397316,
0.0011902740225195885,
0.025411101058125496,
-0.008930236101150513,
0.01258515752851963,
0.10798895359039307,
0.03742024675011635,
0.07686980068683624,
0.03678400069475174,
-0.09575696289539337,
-0.11382623016834259,
-0.0775647908449173,
0.10205397754907608,
0.05071575939655304,
0.04328086972236633,
-0.006604114547371864,
0.07164448499679565,
-0.02755565568804741,
0.07371786236763,
0.10942289233207703,
-0.25448399782180786,
-0.008752228692173958,
0.06395833939313889,
0.04205252230167389,
0.042617835104465485,
0.012543799355626106,
0.027368780225515366,
0.00779033824801445,
0.04232870042324066,
0.031077351421117783,
-0.023016657680273056,
0.1243213340640068,
-0.04717031493782997,
-0.15174612402915955,
-0.04413716867566109,
0.1169428899884224,
-0.006667131558060646,
-0.12461140006780624,
-0.09869451820850372,
-0.03266008198261261,
0.11450538039207458,
-0.0028465017676353455,
-0.01687040738761425,
-0.005215809680521488,
0.012112089432775974,
0.026763461530208588,
-0.09440816938877106,
-0.08683720976114273,
-0.02832743525505066,
-0.04009244963526726,
0.13187167048454285,
0.04692165553569794,
0.05105526000261307,
-0.03500787913799286,
0.08658334612846375,
-0.11427848041057587,
-0.04004620760679245,
-0.05214344337582588,
-0.08064337074756622,
-0.019832484424114227,
0.009361431933939457,
-0.02703247219324112,
-0.08578100800514221,
-0.05913187935948372,
0.11491043865680695,
0.03085436299443245,
0.029749933630228043,
-0.0048543531447649,
0.04168219864368439,
0.07557868212461472,
0.09697838127613068,
-0.03856462985277176,
0.05660843849182129,
0.033724237233400345,
-0.0239747054874897,
0.05731717869639397,
-0.050179868936538696,
-0.09942486882209778,
0.07725382596254349,
0.001550682820379734,
0.03858629986643791,
0.02647462673485279,
0.03321154788136482,
-0.012632305733859539,
-0.07232513278722763,
0.16543632745742798,
-0.07530920207500458,
-0.010698351077735424,
-0.016487250104546547,
0.013284804299473763,
0.048234209418296814,
0.033734843134880066,
-0.008909646421670914,
-0.04642380028963089,
-0.005612409673631191,
-0.05694310739636421,
-0.025226887315511703,
-0.05491923540830612,
-0.11618346720933914,
0.00024304166436195374,
-0.0423111766576767,
-0.03275866061449051,
-0.1424344778060913,
-0.2126118540763855,
-0.019254418089985847,
0.06440307199954987,
-0.0016845259815454483,
-0.009464062750339508,
0.02407906949520111,
0.018328586593270302,
-0.02037021517753601,
0.011109817773103714,
-0.04274788871407509,
-0.0009595751762390137,
-0.005283104255795479,
-0.036139972507953644,
0.055945128202438354,
-0.03890184685587883,
0.024199778214097023,
-0.07110200822353363,
0.02275172807276249,
-0.21076923608779907,
0.09039972722530365,
-0.034549467265605927,
0.0016389694064855576,
-0.03952981159090996,
-0.040426988154649734,
0.004370713606476784,
0.0469890758395195,
-0.00889260321855545,
0.11573554575443268,
-0.13635145127773285,
-0.05098194256424904,
0.18155136704444885,
-0.15994659066200256,
-0.0014640092849731445,
0.10192043334245682,
-0.04927191883325577,
0.05773548409342766,
0.1315193474292755,
0.09955964237451553,
0.08026339113712311,
-0.07328739017248154,
0.012040755711495876,
0.06257264316082001,
-0.07107297331094742,
0.057605113834142685,
0.08859647810459137,
-0.02609030343592167,
-0.138850599527359,
0.02847081422805786,
-0.0725756511092186,
-0.007109371945261955,
-0.02776399254798889,
-0.020002305507659912,
0.004793856292963028,
-0.037665560841560364,
0.025733310729265213,
0.0052366964519023895,
0.017433488741517067,
-0.03869236260652542,
-0.08397036790847778,
0.027354124933481216,
0.07590463012456894,
-0.0701868012547493,
0.04397718608379364,
-0.06821037083864212,
0.05837230756878853,
-0.07610799372196198,
-0.0027241590432822704,
-0.16979223489761353,
-0.023069096729159355,
0.0437341183423996,
-0.0505598820745945,
0.051577795296907425,
0.09208165109157562,
0.0011499450774863362,
0.1206335574388504,
-0.03748096525669098,
0.005023201461881399,
-0.005244182422757149,
-0.010304737836122513,
-0.05085107311606407,
-0.1236623227596283,
-0.08095676451921463,
-0.06939080357551575,
0.10240186005830765,
-0.07137036323547363,
0.027804387733340263,
-0.07382730394601822,
-0.02319943904876709,
-0.008285848423838615,
-0.05879265069961548,
-0.0035409163683652878,
0.009934823960065842,
-0.029596200212836266,
-0.046395957469940186,
0.04718693345785141,
0.051186710596084595,
-0.0620371475815773,
0.08040745556354523,
-0.10463991016149521,
-0.06559476256370544,
0.054064229130744934,
0.008205901831388474,
-0.08340388536453247,
0.09271447360515594,
-0.01962919905781746,
-0.01345584262162447,
-0.0624907985329628,
-0.043175872415304184,
0.1941761076450348,
-0.02165577933192253,
0.10125960409641266,
-0.0908302515745163,
0.002364538609981537,
0.029811467975378036,
-0.04907195270061493,
-0.016961921006441116,
0.06345640867948532,
0.04705890640616417,
-0.19070500135421753,
0.015475105494260788,
0.053851135075092316,
0.07764993607997894,
0.1104503870010376,
0.026276392862200737,
-0.024674000218510628,
-0.04666893929243088,
-0.010682652704417706,
0.00590836675837636,
0.053768038749694824,
-0.026608362793922424,
-0.00908786989748478,
0.031925003975629807,
0.059419408440589905,
0.01740693300962448,
-0.08121088147163391,
0.036422912031412125,
0.0690506100654602,
-0.01725739985704422,
-0.03697787970304489,
-0.02607795223593712,
-0.059255316853523254,
0.0621967650949955,
0.05193459987640381,
0.03829445689916611,
0.02503221668303013,
-0.01241379976272583,
-0.13513393700122833,
0.18990057706832886,
-0.11224041879177094,
-0.25822892785072327,
-0.10707001388072968,
-0.05480851233005524,
-0.025589676573872566,
0.041411153972148895,
0.058623962104320526,
-0.029582437127828598,
-0.0414736233651638,
-0.1149679571390152,
0.06779181957244873,
-0.06571078300476074,
-0.031058097258210182,
-0.011807724833488464,
-0.052659839391708374,
-0.019936032593250275,
-0.12795838713645935,
-0.013190509751439095,
-0.03102472797036171,
-0.07454247772693634,
0.006356213241815567,
-0.0359329991042614,
0.02976590394973755,
0.13862371444702148,
0.0346657931804657,
-0.019817683845758438,
-0.016872398555278778,
0.19020973145961761,
0.010154934599995613,
0.05962994322180748,
0.11250680685043335,
-0.027822161093354225,
0.05367134138941765,
0.04168013483285904,
0.025408854708075523,
-0.0466156005859375,
0.013040551915764809,
-0.013825579546391964,
-0.1223224326968193,
-0.1722012311220169,
-0.07210447639226913,
-0.0014270858373492956,
0.007946996949613094,
0.021212797611951828,
0.03675226867198944,
0.022672243416309357,
0.04053443670272827,
-0.03040331043303013,
0.027238763868808746,
-0.015022408217191696,
0.08073022961616516,
0.02659333124756813,
-0.07524631917476654,
0.09321808069944382,
-0.06172952428460121,
0.015628943219780922,
0.10933970659971237,
-0.06265135109424591,
0.19187787175178528,
0.025009015575051308,
0.06569814682006836,
0.10317301750183105,
0.01808425784111023,
0.05490967631340027,
0.08681446313858032,
-0.0458565428853035,
0.00514292623847723,
-0.061681292951107025,
-0.05149117484688759,
-0.03851940855383873,
0.04877633601427078,
0.02739657461643219,
0.017225097864866257,
-0.11972709000110626,
0.02187206596136093,
-0.0009827724425122142,
0.13833799958229065,
0.04629942774772644,
-0.11554643511772156,
-0.12325753271579742,
0.03365772217512131,
-0.0454004667699337,
-0.0633314773440361,
0.031046129763126373,
0.054174087941646576,
-0.15177670121192932,
0.04205778241157532,
-0.006264795549213886,
0.06946015357971191,
-0.09450101107358932,
0.016135912388563156,
-0.049118250608444214,
-0.0006794445216655731,
0.0031673582270741463,
0.07015509903430939,
-0.13697309792041779,
0.1108178198337555,
0.020029084756970406,
0.04770107567310333,
-0.08144381642341614,
0.01467661652714014,
-0.011587032116949558,
0.11027924716472626,
0.11691012233495712,
0.0440223328769207,
-0.05526373162865639,
-0.018680356442928314,
-0.045614633709192276,
0.019421452656388283,
0.05782800912857056,
-0.07876915484666824,
0.05867896229028702,
0.008368313312530518,
0.00768459215760231,
-0.022339696064591408,
0.02062695100903511,
-0.13319289684295654,
-0.12032510340213776,
0.06008525192737579,
-0.07748617231845856,
-0.1011674776673317,
-0.05678156018257141,
-0.061853084713220596,
-0.04943092167377472,
0.20737886428833008,
-0.10924367606639862,
-0.0885818749666214,
-0.09792071580886841,
-0.0118781179189682,
0.044752009212970734,
-0.06756961345672607,
0.0446767620742321,
-0.04088456928730011,
0.0898825079202652,
-0.050118058919906616,
-0.10954508185386658,
0.033840328454971313,
-0.11333797872066498,
-0.11308421939611435,
-0.043845005333423615,
0.1038818508386612,
0.1144549548625946,
0.0380367711186409,
0.011358669959008694,
0.010940762236714363,
-0.003193441778421402,
-0.11721569299697876,
0.01552361436188221,
0.12920142710208893,
-0.0047859326004981995,
0.06993994861841202,
-0.06055063754320145,
0.026117686182260513,
-0.017977474257349968,
-0.0015841610729694366,
0.1323367953300476,
0.18528470396995544,
-0.06519809365272522,
0.1737256646156311,
0.20318353176116943,
-0.10491800308227539,
-0.1908588707447052,
-0.05486218258738518,
-0.00038487743586301804,
0.04396473988890648,
0.0491037555038929,
-0.18570630252361298,
0.09182824194431305,
0.036140792071819305,
-0.03332924842834473,
0.020745091140270233,
-0.23464041948318481,
-0.11111271381378174,
0.09328344464302063,
0.05250288546085358,
0.19153136014938354,
-0.08054796606302261,
-0.03920064866542816,
-0.015985503792762756,
-0.03831367939710617,
0.04742651432752609,
-0.0334252268075943,
0.09208104759454727,
0.008576346561312675,
-0.0293613038957119,
0.0025036772713065147,
-0.030769310891628265,
0.09475654363632202,
0.04181450232863426,
0.023102793842554092,
-0.06912223994731903,
-0.005736837163567543,
0.10626429319381714,
-0.03845244273543358,
0.10058296471834183,
0.044146083295345306,
0.07513155043125153,
-0.09394145011901855,
-0.060602638870477676,
-0.07531436532735825,
0.04665951430797577,
-0.0415731817483902,
-0.05535685643553734,
-0.06371266394853592,
0.05800078809261322,
0.03614857792854309,
0.009958034381270409,
0.004372505471110344,
-0.039495617151260376,
0.043201200664043427,
0.09017752856016159,
0.0772324874997139,
-0.03582645207643509,
-0.07436753809452057,
-0.05199064314365387,
-0.04836142808198929,
0.0677458643913269,
-0.09625241905450821,
0.02014634758234024,
0.02574833855032921,
0.011408254504203796,
0.09029820561408997,
0.033692266792058945,
-0.13859479129314423,
0.011702224612236023,
0.030580628663301468,
-0.12142912298440933,
-0.11310244351625443,
-0.019786447286605835,
0.03179392218589783,
-0.033948227763175964,
0.05600827559828758,
0.1473274827003479,
-0.037314996123313904,
-0.03121265396475792,
-0.048537835478782654,
0.03682703524827957,
-0.02000102773308754,
0.04721807688474655,
0.06452877819538116,
0.031125279143452644,
-0.0718393325805664,
0.07444389164447784,
0.04032649099826813,
-0.03654782474040985,
0.04141787439584732,
0.0388207733631134,
-0.09278072416782379,
-0.07945916056632996,
-0.05943525955080986,
0.09190703183412552,
-0.026463104411959648,
-0.04469742998480797,
0.0011290740221738815,
-0.0832122415304184,
0.06878503412008286,
0.078061044216156,
0.04928658530116081,
0.0391559936106205,
-0.08709247410297394,
0.015899863094091415,
-0.05314180999994278,
0.03567392751574516,
-0.02885313518345356,
-0.004654839634895325,
-0.05337793380022049,
0.06740456819534302,
0.06577109545469284,
0.10050393640995026,
-0.034798309206962585,
-0.07692156732082367,
-0.08242051303386688,
-0.01202157698571682,
-0.06451495736837387,
-0.03182239830493927,
-0.07810795307159424,
-0.00883487518876791,
0.0026923019904643297,
-0.0023742783814668655,
0.024688508361577988,
0.03553588688373566,
-0.041646696627140045,
-0.017622005194425583,
-0.037144921720027924,
0.038035981357097626,
-0.0634077787399292,
0.006277633830904961,
0.015701979398727417,
-0.035390615463256836,
0.09122316539287567,
0.036233507096767426,
-0.012515503913164139,
0.04319508746266365,
-0.02693738415837288,
0.03848809376358986,
-0.021737977862358093,
-0.0007679644040763378,
-0.02387385442852974,
-0.11221393942832947,
-0.005397920962423086,
0.0031275711953639984,
-0.025234561413526535,
0.008060090243816376,
0.05750463902950287,
-0.07368600368499756,
0.08765477687120438,
0.04752021282911301,
-0.03202669695019722,
-0.07093732059001923,
0.03995344415307045,
-0.01739601045846939,
0.029393376782536507,
0.0690278559923172,
-0.03280939161777496,
0.05495476722717285,
-0.09772765636444092,
-0.028015345335006714,
0.004830833524465561,
-0.004211440682411194,
-0.010883059352636337,
-0.05498833209276199,
-0.004624633118510246,
0.0063178036361932755,
0.1701601892709732,
-0.021838881075382233,
0.03750913217663765,
0.014043654315173626,
0.0074895815923810005,
0.04705154150724411,
-0.01374409906566143,
0.07061977684497833,
-0.007326744496822357,
-0.024972697719931602,
-0.014852014370262623,
0.03739238530397415,
0.005219761282205582,
0.00843583419919014,
0.14099451899528503,
0.04795797914266586,
0.08966659754514694,
0.07460959255695343,
0.013936161994934082,
0.017678961157798767,
-0.13439492881298065,
-0.08910422772169113,
0.006304032169282436,
0.057106878608465195,
-0.01639864780008793,
0.012763887643814087,
0.09173023700714111,
-0.08483909070491791,
0.07101805508136749,
0.05042823776602745,
-0.04732457175850868,
-0.1267358511686325,
-0.1943180412054062,
-0.024542786180973053,
-0.0285854022949934,
-0.011938201263546944,
-0.08949173241853714,
0.017551735043525696,
0.09242501854896545,
0.025157369673252106,
-0.009780080989003181,
0.09564758092164993,
-0.10419537872076035,
-0.03221907094120979,
0.04480103403329849,
-0.02649165876209736,
0.01646890491247177,
0.044479258358478546,
0.024092065170407295,
-0.0036335326731204987,
0.04248262196779251,
0.03984537720680237,
0.045489270240068436,
0.026909295469522476,
0.05325775593519211,
-0.025699889287352562,
-0.07470855861902237,
-0.032830044627189636,
-0.004607878625392914,
0.054121434688568115,
0.1392858326435089,
0.023297298699617386,
-0.06764151155948639,
0.006501201074570417,
0.11064838618040085,
-0.028893649578094482,
-0.05118840932846069,
-0.10848775506019592,
0.24287515878677368,
0.021355951204895973,
0.0028519006446003914,
-0.004878590814769268,
-0.043931856751441956,
0.006130019202828407,
0.21094916760921478,
0.22172951698303223,
0.004151622299104929,
-0.00901635643094778,
0.009713022038340569,
-0.011980079114437103,
0.0381200797855854,
0.1441127210855484,
0.0032001566141843796,
0.2538333237171173,
-0.04905670881271362,
0.037862759083509445,
-0.044108010828495026,
-0.03876080736517906,
-0.1025809645652771,
0.0698326826095581,
-0.009159641340374947,
0.00817388016730547,
-0.03273571655154228,
0.07090147584676743,
-0.04103044047951698,
-0.17534688115119934,
0.0011721467599272728,
-0.00024454062804579735,
-0.059724111109972,
0.010949626564979553,
-0.00035025086253881454,
0.021537255495786667,
0.08359220623970032,
-0.01762906089425087,
-0.006452176719903946,
0.13247421383857727,
0.01880570873618126,
-0.09504163265228271,
-0.06004752218723297,
0.11348330229520798,
0.02195931226015091,
0.13991115987300873,
0.011537295766174793,
0.07506242394447327,
0.0867358148097992,
0.022210080176591873,
-0.09645192325115204,
0.04342528060078621,
-0.020990129560232162,
-0.026419274508953094,
0.007066595368087292,
0.10709036886692047,
-0.007465943694114685,
0.06016194075345993,
0.02687997743487358,
-0.09480234235525131,
0.06246989965438843,
0.012748688459396362,
-0.03446090593934059,
-0.07908501476049423,
0.08502921462059021,
-0.08974995464086533,
0.15674468874931335,
0.12277024984359741,
-0.01258054468780756,
-0.04688466340303421,
-0.029618527740240097,
0.01766739785671234,
-0.00014252867549657822,
0.058377597481012344,
-0.026642359793186188,
-0.13373222947120667,
0.02107522077858448,
-0.08034637570381165,
0.026142895221710205,
-0.24781686067581177,
-0.09009785950183868,
0.031664155423641205,
-0.01788916625082493,
-0.014985207468271255,
0.047608207911252975,
0.043559879064559937,
0.02807811088860035,
-0.036051176488399506,
0.02002697065472603,
-0.03555229306221008,
0.058403003960847855,
-0.10837620496749878,
-0.09312461316585541
] |
null | null | transformers | # MultiBERTs Seed 1 Checkpoint 1900k (uncased)
Seed 1 intermediate checkpoint 1900k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
# Full repository id for this checkpoint on the Hugging Face Hub
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-1900k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-1900k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
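Since the weights were saved from a pretraining run, the checkpoint usually also carries the MLM head, so masked-token prediction can be tried with the `fill-mask` pipeline. This is an optional illustration rather than part of the original card, and it assumes the head is present in the uploaded weights:

```python
from transformers import pipeline

# Assumes the MLM head is included in the checkpoint, as is usual for BertForPreTraining weights.
unmasker = pipeline("fill-mask", model="MultiBertGunjanPatrick/multiberts-seed-1-1900k")
print(unmasker("Paris is the [MASK] of France."))
```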
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
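For reference, these hyperparameters can be approximated in PyTorch as sketched below. This is not the original TPU training code; the use of `torch.optim.AdamW` and `get_linear_schedule_with_warmup` is an assumption that mirrors the reported settings (Adam, peak learning rate 1e-4, 10,000 warmup steps, linear decay over the two million steps, weight decay 0.01).

```python
import torch
from transformers import BertForPreTraining, get_linear_schedule_with_warmup

# Hypothetical re-creation of the reported optimization settings.
model = BertForPreTraining.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-1900k")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4,
                              betas=(0.9, 0.999), weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(optimizer,
                                            num_warmup_steps=10_000,
                                            num_training_steps=2_000_000)
# A training step would then call:
#   loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
```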
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-1-1900k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 1 Checkpoint 1900k (uncased)
Seed 1 intermediate checkpoint 1900k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
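A minimal sketch, mirroring the snippet in the raw model card above and assuming the full Hub repository id:

```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-1900k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-1900k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="pt")
output = model(**encoded_input)
```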
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitations and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 1 Checkpoint 1900k (uncased)\nSeed 1 intermediate checkpoint 1900k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 1 Checkpoint 1900k (uncased)\nSeed 1 intermediate checkpoint 1900k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 1 Checkpoint 1900k (uncased)\nSeed 1 intermediate checkpoint 1900k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.0837298035621643,
-0.004022653214633465,
-0.0021152901463210583,
0.0652979388833046,
0.08429433405399323,
0.0021885945461690426,
0.11482907086610794,
0.047799307852983475,
-0.03125794976949692,
0.023786550387740135,
0.09673161804676056,
0.026671413332223892,
0.041994158178567886,
0.0634143278002739,
0.0950656309723854,
-0.25564438104629517,
0.048822689801454544,
-0.06256310641765594,
0.059055641293525696,
0.07536803185939789,
0.09969165921211243,
-0.07299954444169998,
0.06080642342567444,
0.03988504409790039,
-0.08595865964889526,
-0.017299510538578033,
-0.015919042751193047,
-0.03650711476802826,
0.10105025768280029,
0.07082796841859818,
0.06425728648900986,
0.00028103217482566833,
0.059125177562236786,
-0.088088259100914,
0.016050143167376518,
0.046455927193164825,
0.0008052866905927658,
0.025347651913762093,
-0.01157636009156704,
0.012337742373347282,
0.11065977811813354,
0.04055887833237648,
0.078316330909729,
0.03560398891568184,
-0.09627898037433624,
-0.11450966447591782,
-0.07825660705566406,
0.10529404878616333,
0.050530508160591125,
0.041126541793346405,
-0.004683280363678932,
0.07257743924856186,
-0.02851443737745285,
0.07468385994434357,
0.1063811331987381,
-0.249803364276886,
-0.009723011404275894,
0.06542618572711945,
0.04256749153137207,
0.04379541426897049,
0.009850841015577316,
0.025979457423090935,
0.0058254674077034,
0.04183684661984444,
0.031798653304576874,
-0.024525754153728485,
0.11502723395824432,
-0.044814582914114,
-0.15116819739341736,
-0.04376387223601341,
0.11750316619873047,
-0.009118154644966125,
-0.1248166635632515,
-0.09775715321302414,
-0.03237932175397873,
0.11279458552598953,
-0.0052160341292619705,
-0.01631510630249977,
-0.0030748234130442142,
0.011349167674779892,
0.027248261496424675,
-0.09576334059238434,
-0.08631884306669235,
-0.027848586440086365,
-0.040269967168569565,
0.1346554458141327,
0.04611561447381973,
0.052628349512815475,
-0.03685912489891052,
0.08594159781932831,
-0.11452455818653107,
-0.04017239063978195,
-0.05105414241552353,
-0.07965628802776337,
-0.019000140950083733,
0.007757637649774551,
-0.027896322309970856,
-0.0870116651058197,
-0.06098359450697899,
0.11276812851428986,
0.030417587608098984,
0.029705800116062164,
-0.003142852336168289,
0.04220777004957199,
0.07616867870092392,
0.09630748629570007,
-0.037921205163002014,
0.0491393581032753,
0.03537331894040108,
-0.024453353136777878,
0.057065628468990326,
-0.049875520169734955,
-0.09894982725381851,
0.0790434405207634,
0.0027647698298096657,
0.03718748688697815,
0.026042209938168526,
0.03179255872964859,
-0.010027420707046986,
-0.06959816068410873,
0.16139090061187744,
-0.07699498534202576,
-0.009635679423809052,
-0.017839983105659485,
0.012849433347582817,
0.044912874698638916,
0.033922046422958374,
-0.008057881146669388,
-0.046319857239723206,
-0.005906641483306885,
-0.05935978889465332,
-0.023889781907200813,
-0.05461578071117401,
-0.11719132214784622,
-0.0011088778264820576,
-0.04091412574052811,
-0.03322497755289078,
-0.140386164188385,
-0.2178962081670761,
-0.019567318260669708,
0.06472134590148926,
-0.00228036567568779,
-0.009045323356986046,
0.025974594056606293,
0.017851242795586586,
-0.02065962366759777,
0.012653329409658909,
-0.04295220598578453,
-0.0007431013509631157,
-0.005518291145563126,
-0.03720106929540634,
0.055503711104393005,
-0.04037933424115181,
0.02365226112306118,
-0.06944534182548523,
0.021832624450325966,
-0.21436825394630432,
0.09011311829090118,
-0.033478692173957825,
0.0038648508489131927,
-0.03704182058572769,
-0.03986233472824097,
0.005111359059810638,
0.047073110938072205,
-0.010779035277664661,
0.11746689677238464,
-0.13104307651519775,
-0.05167803913354874,
0.18439099192619324,
-0.15936753153800964,
-0.0007809102535247803,
0.0982419103384018,
-0.05021058768033981,
0.058684684336185455,
0.1328001618385315,
0.0991731733083725,
0.07836543023586273,
-0.07463160157203674,
0.014032223261892796,
0.06365572661161423,
-0.07062835246324539,
0.05519569665193558,
0.08680922538042068,
-0.02650284208357334,
-0.14370863139629364,
0.028804440051317215,
-0.0725848525762558,
-0.006415412761271,
-0.02827538177371025,
-0.01979130506515503,
0.006979208439588547,
-0.03808245062828064,
0.02652016654610634,
0.0060656690038740635,
0.01861422322690487,
-0.03850163519382477,
-0.08197907358407974,
0.032017502933740616,
0.07472971826791763,
-0.06941691040992737,
0.04298535734415054,
-0.06825468689203262,
0.05574226379394531,
-0.07508822530508041,
-0.004292427562177181,
-0.17003324627876282,
-0.02164815366268158,
0.04383791238069534,
-0.0501166395843029,
0.05213627964258194,
0.08922843635082245,
0.001232205773703754,
0.12260478734970093,
-0.03888603299856186,
0.0069888923317193985,
-0.0055540576577186584,
-0.00972854346036911,
-0.05230037495493889,
-0.12431521713733673,
-0.08001083880662918,
-0.06929436326026917,
0.10598395764827728,
-0.0738678053021431,
0.029009047895669937,
-0.07290896028280258,
-0.024013487622141838,
-0.008143121376633644,
-0.05969221889972687,
-0.004517926834523678,
0.010412785224616528,
-0.028566863387823105,
-0.046029265969991684,
0.04567767679691315,
0.053020548075437546,
-0.06111842766404152,
0.07960195094347,
-0.10335981845855713,
-0.06651268154382706,
0.05450289696455002,
0.009689675644040108,
-0.08611233532428741,
0.08925320208072662,
-0.019452299922704697,
-0.013312510214745998,
-0.06236293166875839,
-0.04530893266201019,
0.1967412680387497,
-0.02309001237154007,
0.10367600619792938,
-0.09237926453351974,
0.0004560300149023533,
0.03049776330590248,
-0.04765314608812332,
-0.016658199951052666,
0.05781742185354233,
0.04992339015007019,
-0.19166015088558197,
0.01653851568698883,
0.047606177628040314,
0.07563069462776184,
0.11332019418478012,
0.026462571695446968,
-0.023679491132497787,
-0.047387611120939255,
-0.014051390811800957,
0.0038134900387376547,
0.05797958001494408,
-0.034523170441389084,
-0.0103964414447546,
0.03070703335106373,
0.055967941880226135,
0.018555186688899994,
-0.0813949704170227,
0.037778254598379135,
0.06869636476039886,
-0.015477102249860764,
-0.038154348731040955,
-0.028315100818872452,
-0.0578409805893898,
0.06317439675331116,
0.05108305811882019,
0.03908634930849075,
0.023029981181025505,
-0.012841837480664253,
-0.13711607456207275,
0.18960151076316833,
-0.11397462338209152,
-0.2570689916610718,
-0.10860327631235123,
-0.05806708335876465,
-0.024487217888236046,
0.04161667078733444,
0.058030977845191956,
-0.025842472910881042,
-0.04157072678208351,
-0.11695718765258789,
0.061540745198726654,
-0.0630548819899559,
-0.02960355579853058,
-0.009792087599635124,
-0.053178854286670685,
-0.020828954875469208,
-0.12882673740386963,
-0.01222284510731697,
-0.03228858858346939,
-0.07023005187511444,
0.005784589797258377,
-0.033276744186878204,
0.027816638350486755,
0.140489861369133,
0.034039292484521866,
-0.019161030650138855,
-0.018087364733219147,
0.19206835329532623,
0.009249862283468246,
0.06083439663052559,
0.10857163369655609,
-0.028261756524443626,
0.05359335616230965,
0.04173840582370758,
0.026548169553279877,
-0.045430976897478104,
0.011283808387815952,
-0.017448890954256058,
-0.1221662238240242,
-0.17053598165512085,
-0.07251743972301483,
-0.00247718533501029,
0.002716449787840247,
0.020900391042232513,
0.035973720252513885,
0.02737574838101864,
0.039267897605895996,
-0.028966013342142105,
0.030547333881258965,
-0.013480015099048615,
0.08089728653430939,
0.025261852890253067,
-0.07569278031587601,
0.09167119860649109,
-0.06126560643315315,
0.015973925590515137,
0.10909402370452881,
-0.06782616674900055,
0.19194838404655457,
0.025360429659485817,
0.05814754217863083,
0.10291449725627899,
0.015094198286533356,
0.05607223883271217,
0.08664901554584503,
-0.04647523909807205,
0.005422352813184261,
-0.060132794082164764,
-0.05195431411266327,
-0.036878302693367004,
0.048911478370428085,
0.02804337814450264,
0.017386633902788162,
-0.1173853948712349,
0.016279958188533783,
-0.002158033661544323,
0.14081299304962158,
0.04413984343409538,
-0.1161000207066536,
-0.12261350452899933,
0.03249797970056534,
-0.04805545508861542,
-0.06382302939891815,
0.028891820460557938,
0.05783615633845329,
-0.1524183452129364,
0.045060258358716965,
-0.00543493777513504,
0.06742866337299347,
-0.08943620324134827,
0.01430943887680769,
-0.04585741460323334,
-0.00033949315547943115,
0.0032178647816181183,
0.06678466498851776,
-0.1274498552083969,
0.11144880950450897,
0.021932069212198257,
0.04822046309709549,
-0.0820799320936203,
0.015644259750843048,
-0.010042689740657806,
0.10967982560396194,
0.11496253311634064,
0.04492882639169693,
-0.04721903055906296,
-0.016526099294424057,
-0.04481501504778862,
0.016887111589312553,
0.058015480637550354,
-0.07680442184209824,
0.060597337782382965,
0.00876004621386528,
0.007158292923122644,
-0.022277534008026123,
0.016442786902189255,
-0.1304028034210205,
-0.1217804104089737,
0.059353623539209366,
-0.07800944149494171,
-0.10410422831773758,
-0.057627998292446136,
-0.06092261150479317,
-0.047719091176986694,
0.2039809376001358,
-0.11457297950983047,
-0.08829812705516815,
-0.099017433822155,
-0.009841874241828918,
0.04488084837794304,
-0.06587665528059006,
0.04750574007630348,
-0.0399475023150444,
0.09160139411687851,
-0.04933726042509079,
-0.10766710340976715,
0.03205796703696251,
-0.11309495568275452,
-0.11316697299480438,
-0.04499702900648117,
0.10409402847290039,
0.11475205421447754,
0.03728528320789337,
0.011956471018493176,
0.012412764132022858,
-0.002341935411095619,
-0.11859786510467529,
0.014316815882921219,
0.13233529031276703,
-0.0013440009206533432,
0.07308579236268997,
-0.060707300901412964,
0.024334602057933807,
-0.015543946996331215,
-0.0007671155035495758,
0.13264715671539307,
0.18782839179039001,
-0.06597420573234558,
0.17624902725219727,
0.20508767664432526,
-0.1065991222858429,
-0.19262748956680298,
-0.0527590848505497,
0.000034356489777565,
0.043711040169000626,
0.04868369549512863,
-0.18457098305225372,
0.09345497936010361,
0.03765203058719635,
-0.03224905580282211,
0.018341273069381714,
-0.23636272549629211,
-0.11209180951118469,
0.09236957132816315,
0.05494067072868347,
0.19120103120803833,
-0.08032840490341187,
-0.039900198578834534,
-0.014984507113695145,
-0.03515409678220749,
0.04390541464090347,
-0.035906918346881866,
0.09121614694595337,
0.008215738460421562,
-0.032325297594070435,
0.002833954058587551,
-0.03135594353079796,
0.09515217691659927,
0.0398748442530632,
0.024253040552139282,
-0.06909805536270142,
-0.009657029062509537,
0.11643828451633453,
-0.038253046572208405,
0.09857456386089325,
0.04521077126264572,
0.07552194595336914,
-0.09213568270206451,
-0.05984335020184517,
-0.07431769371032715,
0.047056958079338074,
-0.04162468761205673,
-0.05410744249820709,
-0.06561794131994247,
0.05701683834195137,
0.03699210286140442,
0.007498375605791807,
0.0022086501121520996,
-0.039933107793331146,
0.045111313462257385,
0.09413817524909973,
0.07829359918832779,
-0.0397157222032547,
-0.07283255457878113,
-0.05087743699550629,
-0.04816761985421181,
0.06596186757087708,
-0.09284736961126328,
0.01957777701318264,
0.02461477369070053,
0.014615447260439396,
0.09248074889183044,
0.03301507979631424,
-0.13729752600193024,
0.011059204116463661,
0.032404348254203796,
-0.12214543670415878,
-0.11089518666267395,
-0.020648416131734848,
0.03358663618564606,
-0.037623368203639984,
0.05175544321537018,
0.14489196240901947,
-0.036620497703552246,
-0.031319960951805115,
-0.0495181605219841,
0.036678507924079895,
-0.019009411334991455,
0.04705887287855148,
0.06366998702287674,
0.03125905618071556,
-0.07277151197195053,
0.07451071590185165,
0.04121813550591469,
-0.03574070706963539,
0.04006640613079071,
0.04226147383451462,
-0.09216821938753128,
-0.07797674834728241,
-0.061029981821775436,
0.09493336081504822,
-0.025522874668240547,
-0.04249167814850807,
0.00009544007480144501,
-0.08392075449228287,
0.06910151988267899,
0.08375190198421478,
0.0471479631960392,
0.03958719223737717,
-0.08800040185451508,
0.016822565346956253,
-0.0542999729514122,
0.035701923072338104,
-0.030535584315657616,
-0.0049376580864191055,
-0.051958970725536346,
0.07151255756616592,
0.06683200597763062,
0.09820207208395004,
-0.035134926438331604,
-0.07744239270687103,
-0.08173222839832306,
-0.012289345264434814,
-0.06430333852767944,
-0.03062409535050392,
-0.08068950474262238,
-0.008537801913917065,
0.002875819569453597,
-0.0024401042610406876,
0.022001806646585464,
0.03434712439775467,
-0.042234815657138824,
-0.016059357672929764,
-0.035700879991054535,
0.037515804171562195,
-0.061230652034282684,
0.007234067656099796,
0.01589774899184704,
-0.036136314272880554,
0.09120165556669235,
0.03707752376794815,
-0.011654054746031761,
0.04331273213028908,
-0.019700223580002785,
0.037556443363428116,
-0.021868787705898285,
0.00042595574632287025,
-0.02420124039053917,
-0.10975219309329987,
-0.004535370506346226,
0.0034282058477401733,
-0.0243648923933506,
0.010374163277447224,
0.0585295706987381,
-0.07282353192567825,
0.08732900023460388,
0.04788760095834732,
-0.03205014020204544,
-0.07255660742521286,
0.03873291611671448,
-0.01272495836019516,
0.028058180585503578,
0.06777458637952805,
-0.03295182064175606,
0.05297303944826126,
-0.09870287775993347,
-0.02840501070022583,
0.0029532811604440212,
-0.0037401802837848663,
-0.008642962202429771,
-0.055271074175834656,
-0.00461155641824007,
0.008471479639410973,
0.17797380685806274,
-0.01852354407310486,
0.032624468207359314,
0.015296299010515213,
0.007573091425001621,
0.04933234676718712,
-0.014193879440426826,
0.07228358089923859,
-0.006191099062561989,
-0.026748625561594963,
-0.01612940989434719,
0.038336608558893204,
0.006650589406490326,
0.004976008087396622,
0.1441616714000702,
0.047948915511369705,
0.0907074511051178,
0.07486946135759354,
0.012898514047265053,
0.015159220434725285,
-0.1285167634487152,
-0.09037404507398605,
0.005962110124528408,
0.05737326294183731,
-0.017948336899280548,
0.014948055148124695,
0.0922299176454544,
-0.0884917601943016,
0.07236361503601074,
0.05297261103987694,
-0.04810720682144165,
-0.12683609127998352,
-0.18699118494987488,
-0.022852761670947075,
-0.02786841429769993,
-0.011883426457643509,
-0.09080324321985245,
0.017411597073078156,
0.0858720988035202,
0.023963985964655876,
-0.008498555049300194,
0.09322004020214081,
-0.10753855109214783,
-0.033561140298843384,
0.04542027413845062,
-0.024512728676199913,
0.016096388921141624,
0.03978665918111801,
0.021549640223383904,
-0.0046708229929208755,
0.04261970520019531,
0.039879679679870605,
0.0449298694729805,
0.027234986424446106,
0.05149243026971817,
-0.024840358644723892,
-0.07361884415149689,
-0.031220555305480957,
-0.006211409345269203,
0.054950304329395294,
0.13182306289672852,
0.022199170663952827,
-0.06711538136005402,
0.006366168148815632,
0.11181711405515671,
-0.030431870371103287,
-0.053238216787576675,
-0.10830377042293549,
0.24034813046455383,
0.024657174944877625,
0.0019559531938284636,
-0.004564381204545498,
-0.04551871493458748,
0.00639122910797596,
0.2129439413547516,
0.2230658084154129,
0.006006175652146339,
-0.008714746683835983,
0.008930335752665997,
-0.01265621930360794,
0.03827088698744774,
0.1449984312057495,
0.002223460003733635,
0.2568776607513428,
-0.04691801220178604,
0.04020955413579941,
-0.0431511253118515,
-0.03878524899482727,
-0.10103802382946014,
0.0744214653968811,
-0.009462317451834679,
0.005714294034987688,
-0.03364906460046768,
0.0702175721526146,
-0.04050340875983238,
-0.17764267325401306,
0.005795699544250965,
-0.002655446995049715,
-0.060133736580610275,
0.012171266600489616,
-0.00262265931814909,
0.021127505227923393,
0.08169132471084595,
-0.014662490226328373,
-0.006702342536300421,
0.13096919655799866,
0.018925756216049194,
-0.09777017682790756,
-0.06452871859073639,
0.11668001860380173,
0.016076819971203804,
0.14422467350959778,
0.009768534451723099,
0.07639629393815994,
0.0859140157699585,
0.022129256278276443,
-0.09602982550859451,
0.04246410354971886,
-0.02123163640499115,
-0.029225938022136688,
0.006775964051485062,
0.10746349394321442,
-0.007384480908513069,
0.059245362877845764,
0.0248300451785326,
-0.0928918719291687,
0.06179514527320862,
0.010044839233160019,
-0.03760775923728943,
-0.07920700311660767,
0.08021704852581024,
-0.0889287069439888,
0.15708619356155396,
0.12189940363168716,
-0.013458671048283577,
-0.044846631586551666,
-0.027322672307491302,
0.016517700627446175,
0.001849642489105463,
0.057953156530857086,
-0.02570442110300064,
-0.13784563541412354,
0.021193847060203552,
-0.08520007133483887,
0.026711933314800262,
-0.245652437210083,
-0.09097865968942642,
0.03180539235472679,
-0.01780555583536625,
-0.014385756105184555,
0.049111369997262955,
0.04581684619188309,
0.028390489518642426,
-0.03572525084018707,
0.02038607746362686,
-0.03615916147828102,
0.06057086959481239,
-0.10814350843429565,
-0.09263670444488525
] |
null | null | transformers | # MultiBERTs Seed 1 Checkpoint 2000k (uncased)
Seed 1 intermediate checkpoint 2000k of the MultiBERTs (pretrained BERT) model, trained on English language data using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
Each MultiBERTs model is a transformer model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
# the checkpoint is published under the full repository id shown below
tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-1-2000k')
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-2000k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
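The `output` object returned above exposes the extracted features by name (in recent versions of `transformers`, which return a model-output object rather than a plain tuple). A minimal continuation of the snippet, assuming the standard BERT-base hidden size of 768 used by MultiBERTs:
```python
# Token-level features: one 768-dimensional vector per input token
token_features = output.last_hidden_state    # shape: [batch_size, sequence_length, 768]

# Pooled representation derived from the [CLS] token, one vector per sequence
sentence_features = output.pooler_output     # shape: [batch_size, 768]
```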
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
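For illustration, the tokenizer from the usage snippet above produces exactly this layout when given a sentence pair; the two sentences below are arbitrary placeholders:
```python
pair = tokenizer("The first segment.", "The second segment.", return_tensors='pt')

# Decoding shows the [CLS] ... [SEP] ... [SEP] layout described above
print(tokenizer.decode(pair['input_ids'][0]))

# token_type_ids is 0 for sentence A tokens and 1 for sentence B tokens
print(pair['token_type_ids'])
```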
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a short sketch of this rule follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
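A rough Python sketch of this 80/10/10 rule, written only to illustrate the description above (the original TensorFlow preprocessing pipeline is not reproduced here, and the token list and vocabulary are placeholders):
```python
import random

def mask_for_mlm(tokens, vocab, mask_token="[MASK]", mlm_probability=0.15):
    """Apply the masking rule described above to a list of WordPiece tokens."""
    inputs, labels = [], []
    for token in tokens:
        if random.random() < mlm_probability:        # 15% of tokens are selected
            labels.append(token)                     # the model must predict the original token
            roll = random.random()
            if roll < 0.8:                           # 80% of those: replace with [MASK]
                inputs.append(mask_token)
            elif roll < 0.9:                         # 10%: replace with a random vocabulary token
                inputs.append(random.choice(vocab))  # (simplified: may occasionally pick the same token)
            else:                                    # remaining 10%: keep the token unchanged
                inputs.append(token)
        else:
            labels.append(None)                      # not a prediction target
            inputs.append(token)
    return inputs, labels
```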
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
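Expressed in PyTorch terms, and reusing the `model` loaded in the usage snippet above, a roughly equivalent optimizer and schedule could be set up as below. This is a sketch only: the original runs used TensorFlow on TPUs, and `AdamW` stands in here for Adam with decoupled weight decay.
```python
import torch
from transformers import get_linear_schedule_with_warmup

# Hyperparameters taken from the description above
optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01
)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=2_000_000
)
```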
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-1-2000k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 1 Checkpoint 2000k (uncased)
Seed 1 intermediate checkpoint 2000k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 1 Checkpoint 2000k (uncased)\nSeed 1 intermediate checkpoint 2000k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 1 Checkpoint 2000k (uncased)\nSeed 1 intermediate checkpoint 2000k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 1 Checkpoint 2000k (uncased)\nSeed 1 intermediate checkpoint 2000k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08407570421695709,
0.002465491183102131,
-0.0020691528916358948,
0.06583673506975174,
0.08491328358650208,
0.001600111834704876,
0.11564783751964569,
0.04839938506484032,
-0.03320649266242981,
0.023497801274061203,
0.09872999787330627,
0.02622275799512863,
0.04015296325087547,
0.06439155340194702,
0.09585995972156525,
-0.2570091485977173,
0.05174919217824936,
-0.06400595605373383,
0.05907256156206131,
0.07488073408603668,
0.10104303807020187,
-0.07263080775737762,
0.06106003001332283,
0.03939104825258255,
-0.08691751211881638,
-0.018585994839668274,
-0.017206469550728798,
-0.03401403874158859,
0.09963403642177582,
0.0702727809548378,
0.0649193674325943,
0.0017304308712482452,
0.05803889036178589,
-0.08930416405200958,
0.016369633376598358,
0.0456680990755558,
0.0018347185105085373,
0.025364456698298454,
-0.009853372350335121,
0.013675455003976822,
0.10737166553735733,
0.03956655412912369,
0.07811850309371948,
0.03470682352781296,
-0.09468861669301987,
-0.11219145357608795,
-0.07946263998746872,
0.10352690517902374,
0.051167696714401245,
0.041445739567279816,
-0.005133235827088356,
0.07381783425807953,
-0.027836227789521217,
0.07500968873500824,
0.10516290366649628,
-0.25229957699775696,
-0.009500201791524887,
0.06833269447088242,
0.04192255809903145,
0.043146923184394836,
0.011138997972011566,
0.026370186358690262,
0.0059282369911670685,
0.04108814895153046,
0.028852127492427826,
-0.023554664105176926,
0.11270581185817719,
-0.045376006513834,
-0.15104949474334717,
-0.043925076723098755,
0.12160256505012512,
-0.007582748308777809,
-0.12471373379230499,
-0.09768037497997284,
-0.03056265600025654,
0.11495102941989899,
-0.003765328787267208,
-0.016078578308224678,
-0.0033697285689413548,
0.011032989248633385,
0.023060930892825127,
-0.0954972505569458,
-0.08587891608476639,
-0.02839173562824726,
-0.040306851267814636,
0.13406574726104736,
0.04617002606391907,
0.05129147320985794,
-0.033934783190488815,
0.08556616306304932,
-0.1182706356048584,
-0.03875873237848282,
-0.05457798019051552,
-0.07881076633930206,
-0.021420586854219437,
0.008934585377573967,
-0.031109467148780823,
-0.08750268816947937,
-0.05733086168766022,
0.11368656158447266,
0.035467877984046936,
0.029484950006008148,
-0.0020996741950511932,
0.04206734895706177,
0.07548364996910095,
0.10043618828058243,
-0.03878467530012131,
0.052100494503974915,
0.03665093332529068,
-0.025335650891065598,
0.057007353752851486,
-0.04992039129137993,
-0.101586252450943,
0.08065640926361084,
0.0023675626143813133,
0.037182778120040894,
0.02643745392560959,
0.03413182869553566,
-0.010085975751280785,
-0.07096125185489655,
0.1591237485408783,
-0.07768173515796661,
-0.009945829398930073,
-0.017409218475222588,
0.011276287958025932,
0.043657638132572174,
0.03420552983880043,
-0.006310826167464256,
-0.045982394367456436,
-0.008284871466457844,
-0.05742347612977028,
-0.02419445849955082,
-0.05688990652561188,
-0.11821380257606506,
0.00008933665230870247,
-0.03827966749668121,
-0.0321744941174984,
-0.13964298367500305,
-0.21673893928527832,
-0.02091093361377716,
0.06577084958553314,
0.00014208070933818817,
-0.009993281215429306,
0.022322142496705055,
0.01605221815407276,
-0.021124105900526047,
0.012269394472241402,
-0.041791461408138275,
0.0010974006727337837,
-0.005942072719335556,
-0.03775031864643097,
0.052440233528614044,
-0.044131193310022354,
0.024775724858045578,
-0.06932677328586578,
0.02186555042862892,
-0.2032322883605957,
0.0910576656460762,
-0.03439066931605339,
0.0015605203807353973,
-0.03762032464146614,
-0.04030808061361313,
0.005780985578894615,
0.046792685985565186,
-0.008652132004499435,
0.11459353566169739,
-0.13509276509284973,
-0.050027936697006226,
0.17985904216766357,
-0.15901675820350647,
-0.002499770373106003,
0.09730467945337296,
-0.051720600575208664,
0.059337738901376724,
0.1326439082622528,
0.09809394180774689,
0.07809992879629135,
-0.06954120844602585,
0.012480407021939754,
0.06263232231140137,
-0.06823987513780594,
0.05476628988981247,
0.08936554938554764,
-0.024969400838017464,
-0.1388746052980423,
0.03036465123295784,
-0.07724487036466599,
-0.007371155545115471,
-0.028009694069623947,
-0.020035574212670326,
0.005194786936044693,
-0.03843418136239052,
0.022487539798021317,
0.00596819631755352,
0.019156858325004578,
-0.037066664546728134,
-0.08239249885082245,
0.033314112573862076,
0.07623514533042908,
-0.07078783214092255,
0.042951032519340515,
-0.06735745072364807,
0.055539119988679886,
-0.0780097022652626,
-0.004258289933204651,
-0.1706417053937912,
-0.025336915627121925,
0.04482806473970413,
-0.05063101649284363,
0.052387166768312454,
0.08880462497472763,
0.0003652055747807026,
0.11993301659822464,
-0.03771866858005524,
0.006629784591495991,
-0.00604630634188652,
-0.009255116805434227,
-0.050637386739254,
-0.12188080698251724,
-0.08066272735595703,
-0.07049927115440369,
0.1062985435128212,
-0.07214624434709549,
0.028412289917469025,
-0.07223080843687057,
-0.025025764480233192,
-0.009980130940675735,
-0.06085749715566635,
-0.003055599518120289,
0.010535056702792645,
-0.030258838087320328,
-0.04819730669260025,
0.04627695679664612,
0.05343665927648544,
-0.0642678290605545,
0.07589558511972427,
-0.10036348551511765,
-0.06445658206939697,
0.05409027636051178,
0.015383810736238956,
-0.0831565111875534,
0.09166906028985977,
-0.019303249195218086,
-0.012668720446527004,
-0.05979292839765549,
-0.040574293583631516,
0.19859230518341064,
-0.022853422909975052,
0.10277733206748962,
-0.09292221814393997,
0.0014430820010602474,
0.02850174903869629,
-0.04797673225402832,
-0.017102615907788277,
0.06024301424622536,
0.0480731800198555,
-0.1925988495349884,
0.016339294612407684,
0.04590311646461487,
0.07343553006649017,
0.10937128961086273,
0.0255340076982975,
-0.02342342585325241,
-0.04752600938081741,
-0.009542111307382584,
0.0048942952416837215,
0.05664462223649025,
-0.030790425837039948,
-0.009936902672052383,
0.03196270763874054,
0.05678654462099075,
0.019290966913104057,
-0.08232839405536652,
0.03756324201822281,
0.0672435313463211,
-0.016241472214460373,
-0.0393916517496109,
-0.027521545067429543,
-0.05898115783929825,
0.06221301853656769,
0.05223193019628525,
0.03970459848642349,
0.025170424953103065,
-0.013842982240021229,
-0.13582411408424377,
0.19104981422424316,
-0.11526335775852203,
-0.25878459215164185,
-0.10578715801239014,
-0.05769021064043045,
-0.028882058337330818,
0.04100579395890236,
0.0575064942240715,
-0.026143096387386322,
-0.042281635105609894,
-0.11547614634037018,
0.0607844814658165,
-0.06658389419317245,
-0.0300088282674551,
-0.009173506870865822,
-0.05287523567676544,
-0.02048349380493164,
-0.12838193774223328,
-0.011345924809575081,
-0.031970083713531494,
-0.07377094030380249,
0.004858984611928463,
-0.034372277557849884,
0.028927797451615334,
0.1394212543964386,
0.033994026482105255,
-0.017245769500732422,
-0.016943540424108505,
0.1906575858592987,
0.010188788175582886,
0.05906972289085388,
0.10762366652488708,
-0.0302389245480299,
0.05271892994642258,
0.0431562177836895,
0.025953328236937523,
-0.04549870640039444,
0.011043722741305828,
-0.01839531399309635,
-0.12172077596187592,
-0.17013981938362122,
-0.07276280224323273,
-0.002984674647450447,
-0.00043622637167572975,
0.019533026963472366,
0.03799865394830704,
0.024222765117883682,
0.04013188183307648,
-0.029367031529545784,
0.026226773858070374,
-0.012906812131404877,
0.08185133337974548,
0.02389920875430107,
-0.07590628415346146,
0.09218132495880127,
-0.06153639033436775,
0.01726524904370308,
0.10855990648269653,
-0.06597521901130676,
0.1930772364139557,
0.026317164301872253,
0.05906679481267929,
0.10473610460758209,
0.01686743274331093,
0.055887799710035324,
0.08619868755340576,
-0.04806403070688248,
0.005828867666423321,
-0.06082366779446602,
-0.05140841007232666,
-0.03589808940887451,
0.04976235702633858,
0.027034291997551918,
0.01776847615838051,
-0.11695388704538345,
0.016150813549757004,
-0.0028149003628641367,
0.13755863904953003,
0.04475514590740204,
-0.11807894706726074,
-0.12289845943450928,
0.03310445696115494,
-0.046143196523189545,
-0.06509735435247421,
0.02798408456146717,
0.05552222952246666,
-0.15189015865325928,
0.042452454566955566,
-0.005278419703245163,
0.0683295801281929,
-0.09187038242816925,
0.013752645812928677,
-0.04576002061367035,
0.0009662797674536705,
0.004804174415767193,
0.06991586834192276,
-0.13166426122188568,
0.11208844929933548,
0.020570162683725357,
0.047992683947086334,
-0.08155399560928345,
0.016485217958688736,
-0.011798719875514507,
0.1093282550573349,
0.11494699120521545,
0.044676557183265686,
-0.05130782350897789,
-0.015601827763020992,
-0.043929025530815125,
0.019130198284983635,
0.059595003724098206,
-0.07592308521270752,
0.060425177216529846,
0.009453854523599148,
0.008415646851062775,
-0.0204368494451046,
0.01876496896147728,
-0.1322745829820633,
-0.12427356839179993,
0.06270059943199158,
-0.07379584014415741,
-0.10030806809663773,
-0.05828993022441864,
-0.06164201349020004,
-0.04898322373628616,
0.2119782418012619,
-0.11717861145734787,
-0.08838243782520294,
-0.0998806282877922,
-0.005434487015008926,
0.047221049666404724,
-0.06618552654981613,
0.04818695783615112,
-0.04230794310569763,
0.09085159003734589,
-0.04893440008163452,
-0.10830658674240112,
0.03095918893814087,
-0.11221957206726074,
-0.11210707575082779,
-0.04391138628125191,
0.10188755393028259,
0.11235862225294113,
0.036636434495449066,
0.011464834213256836,
0.013874771073460579,
-0.003343932330608368,
-0.1179538145661354,
0.010460078716278076,
0.13498416543006897,
-0.0046798549592494965,
0.07146632671356201,
-0.06323947757482529,
0.028566401451826096,
-0.014939118176698685,
-0.001059155911207199,
0.135169118642807,
0.1862303763628006,
-0.06621455401182175,
0.17552274465560913,
0.20032843947410583,
-0.10679177194833755,
-0.1918097883462906,
-0.05468367785215378,
0.002392137423157692,
0.0450526662170887,
0.046149302273988724,
-0.18668776750564575,
0.09234318137168884,
0.03549880534410477,
-0.03312549367547035,
0.015040025115013123,
-0.234198659658432,
-0.11023303866386414,
0.09214405715465546,
0.05396939814090729,
0.19104966521263123,
-0.0807000994682312,
-0.03997235372662544,
-0.015732062980532646,
-0.029409125447273254,
0.0448223315179348,
-0.031568750739097595,
0.09276661276817322,
0.008863937109708786,
-0.03029143437743187,
0.0022804569453001022,
-0.031166918575763702,
0.09593081474304199,
0.04053764045238495,
0.023490604013204575,
-0.06838195025920868,
-0.01026778481900692,
0.11184297502040863,
-0.038407061249017715,
0.09991148114204407,
0.046679381281137466,
0.0772901251912117,
-0.09183675050735474,
-0.05943268537521362,
-0.07338623702526093,
0.046021588146686554,
-0.040629319846630096,
-0.05358448997139931,
-0.0646437481045723,
0.05811825394630432,
0.03794163465499878,
0.009326538071036339,
0.004373827949166298,
-0.039002373814582825,
0.04092784970998764,
0.08875872194766998,
0.07664693892002106,
-0.03593429550528526,
-0.07087390124797821,
-0.052106212824583054,
-0.04786894470453262,
0.06508389860391617,
-0.09513239562511444,
0.018648799508810043,
0.02476593106985092,
0.01445696596056223,
0.09160755574703217,
0.033511754125356674,
-0.1376171112060547,
0.010342590510845184,
0.030504371970891953,
-0.121005579829216,
-0.10940861701965332,
-0.019717637449502945,
0.035831741988658905,
-0.03731249272823334,
0.05339252948760986,
0.14639419317245483,
-0.03720994293689728,
-0.031054750084877014,
-0.04939957708120346,
0.03642141446471214,
-0.01859215274453163,
0.04633427411317825,
0.06421102583408356,
0.03018730878829956,
-0.07324768602848053,
0.07537148147821426,
0.04275578632950783,
-0.03302473947405815,
0.04133446514606476,
0.03724204748868942,
-0.09289532899856567,
-0.0784246027469635,
-0.058596547693014145,
0.09423530101776123,
-0.022973118349909782,
-0.044692642986774445,
0.000026050955057144165,
-0.08279421180486679,
0.06829330325126648,
0.08069910854101181,
0.04900747537612915,
0.03981281816959381,
-0.08799823373556137,
0.014911309815943241,
-0.0534445159137249,
0.03634705767035484,
-0.02803565375506878,
-0.004885207861661911,
-0.05367071181535721,
0.06497833877801895,
0.06643268465995789,
0.09634888172149658,
-0.034338101744651794,
-0.07830913364887238,
-0.08200789242982864,
-0.012674441561102867,
-0.059380702674388885,
-0.030462341383099556,
-0.07665201276540756,
-0.008288407698273659,
0.003939550369977951,
-0.004102606326341629,
0.022036362439393997,
0.03621037304401398,
-0.04304315149784088,
-0.01792491413652897,
-0.03826889768242836,
0.03805793076753616,
-0.06021491438150406,
0.005726988427340984,
0.014483142644166946,
-0.035565998405218124,
0.09072326868772507,
0.0371602401137352,
-0.012646378949284554,
0.04333378002047539,
-0.029644187539815903,
0.03712509945034981,
-0.021036237478256226,
0.0017150642815977335,
-0.023876186460256577,
-0.11063992977142334,
-0.00463778106495738,
0.006057091057300568,
-0.023689977824687958,
0.009266069158911705,
0.06174066662788391,
-0.07173959910869598,
0.08200029283761978,
0.04543783888220787,
-0.029338892549276352,
-0.07123443484306335,
0.039584167301654816,
-0.014837650582194328,
0.03223760053515434,
0.06813555955886841,
-0.032664705067873,
0.054481081664562225,
-0.0973527804017067,
-0.02702692523598671,
0.0026797442696988583,
-0.005146179348230362,
-0.015022477135062218,
-0.05679016560316086,
-0.004153075627982616,
0.0086972089484334,
0.1786012202501297,
-0.020374052226543427,
0.034579671919345856,
0.013373338617384434,
0.010935588739812374,
0.05270273983478546,
-0.013705914840102196,
0.07384133338928223,
-0.004366815090179443,
-0.02444533072412014,
-0.014382812194526196,
0.038843363523483276,
0.004948023706674576,
0.0019959602504968643,
0.13867339491844177,
0.05100279673933983,
0.09086538851261139,
0.07518141716718674,
0.013513638637959957,
0.01627962850034237,
-0.12133754789829254,
-0.08560796082019806,
0.0063616689294576645,
0.05747010558843613,
-0.019303061068058014,
0.01720695197582245,
0.08851572126150131,
-0.08780793845653534,
0.0710810124874115,
0.054174743592739105,
-0.04691150039434433,
-0.12556684017181396,
-0.1922350972890854,
-0.026686234399676323,
-0.028069915249943733,
-0.011528301984071732,
-0.09128481149673462,
0.016128649935126305,
0.08719136565923691,
0.024101631715893745,
-0.007769287563860416,
0.0915549099445343,
-0.10741881281137466,
-0.03278406336903572,
0.043459828943014145,
-0.02676554210484028,
0.016873598098754883,
0.04110470786690712,
0.021111398935317993,
-0.0017268974334001541,
0.0439230315387249,
0.04143146798014641,
0.04582146927714348,
0.030719246715307236,
0.05148836970329285,
-0.025086726993322372,
-0.0737905204296112,
-0.032277174293994904,
-0.005543418228626251,
0.054857317358255386,
0.13421057164669037,
0.022150134667754173,
-0.06970616430044174,
0.006504870019853115,
0.10819979757070541,
-0.02938135713338852,
-0.05374854803085327,
-0.1083734929561615,
0.24055463075637817,
0.021422145888209343,
0.0009476784616708755,
-0.004688540939241648,
-0.0452444925904274,
0.005713064223527908,
0.2134617418050766,
0.22180645167827606,
0.003939565271139145,
-0.010062236338853836,
0.01008453220129013,
-0.012441948056221008,
0.035989001393318176,
0.14346575736999512,
0.0017168782651424408,
0.2517639398574829,
-0.04526520520448685,
0.037076033651828766,
-0.04353956878185272,
-0.038569919764995575,
-0.10066951811313629,
0.0732140988111496,
-0.009594550356268883,
0.004636417143046856,
-0.03469499573111534,
0.07050971686840057,
-0.03939283266663551,
-0.17680835723876953,
0.0034406986087560654,
-0.0024047759361565113,
-0.06190764904022217,
0.012045508250594139,
0.0006170831620693207,
0.021391019225120544,
0.08152206987142563,
-0.016890283674001694,
-0.005055096931755543,
0.12880244851112366,
0.019128460437059402,
-0.0966850221157074,
-0.060964435338974,
0.11595891416072845,
0.01444860640913248,
0.14304867386817932,
0.011246838606894016,
0.07966521382331848,
0.08678063750267029,
0.02131289802491665,
-0.09682700037956238,
0.04314874857664108,
-0.02145756036043167,
-0.02958204224705696,
0.00868540070950985,
0.10963212698698044,
-0.00820428691804409,
0.061152175068855286,
0.024823158979415894,
-0.09020216017961502,
0.06324594467878342,
0.010377243161201477,
-0.03854519501328468,
-0.08021312206983566,
0.08226428925991058,
-0.0897919237613678,
0.1573241949081421,
0.1218436136841774,
-0.01300558540970087,
-0.04465758800506592,
-0.028909927234053612,
0.01981339044868946,
0.00023317011073231697,
0.05927906557917595,
-0.024969983845949173,
-0.1365571916103363,
0.02134920470416546,
-0.08799285441637039,
0.025856435298919678,
-0.24662676453590393,
-0.09008802473545074,
0.0297596026211977,
-0.01838977448642254,
-0.013210833072662354,
0.049506474286317825,
0.04641139134764671,
0.02555202879011631,
-0.03625758737325668,
0.020229332149028778,
-0.03508606180548668,
0.05739889293909073,
-0.10673166811466217,
-0.09252984821796417
] |
null | null | transformers | # MultiBERTs Seed 1 Checkpoint 200k (uncased)
Seed 1 intermediate checkpoint 200k of the MultiBERTs (pretrained BERT) model, trained on English language data using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
Each MultiBERTs model is a transformer model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# load the tokenizer and the 200k intermediate checkpoint
# (on the Hub this checkpoint is hosted as MultiBertGunjanPatrick/multiberts-seed-1-200k)
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-200k')
model = BertModel.from_pretrained("multiberts-seed-1-200k")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')  # PyTorch tensors: input_ids, token_type_ids, attention_mask
output = model(**encoded_input)                        # output.last_hidden_state holds the token-level features
```
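The `output` above contains the token-level features. As a minimal sketch (the mean-pooling choice here is ours, not something prescribed by the MultiBERTs release), one common way to turn them into a single fixed-size vector per text is:

```python
import torch

# continues the snippet above: `output` and `encoded_input` come from that code
hidden = output.last_hidden_state                      # shape: (batch_size, sequence_length, 768) for this BERT-base-sized model
mask = encoded_input['attention_mask'].unsqueeze(-1).type_as(hidden)

# mean-pool over the real (non-padding) tokens to get one 768-dim feature vector per input text
sentence_features = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_features.shape)                         # torch.Size([1, 768])
```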
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
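As an illustrative sketch only (the real MultiBERTs pipeline is TensorFlow-based and handles wordpieces and special tokens not shown here), the 15% / 80% / 10% / 10% rule above can be written as:

```python
import random

def mask_tokens(tokens, vocab, mask_token="[MASK]", mlm_prob=0.15, seed=None):
    """Toy BERT-style MLM masking; `vocab` is a list of candidate replacement tokens."""
    rng = random.Random(seed)
    masked, labels = list(tokens), [None] * len(tokens)   # None = position not used in the MLM loss
    for i, tok in enumerate(tokens):
        if rng.random() < mlm_prob:                       # ~15% of tokens are selected
            labels[i] = tok                               # the model must predict the original token
            r = rng.random()
            if r < 0.8:                                   # 80%: replace with [MASK]
                masked[i] = mask_token
            elif r < 0.9:                                 # 10%: replace with a random token (a full implementation avoids re-picking the original)
                masked[i] = rng.choice(vocab)
            # remaining 10%: leave the token unchanged
    return masked, labels

print(mask_tokens("the cat sat on the mat".split(), ["dog", "ran", "blue"], seed=0))
```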
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
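In PyTorch terms, an equivalent setup could be sketched as below; this is our approximation of the stated hyperparameters (with AdamW standing in for Adam with decoupled weight decay), not the original TensorFlow training code:

```python
import torch
from transformers import BertConfig, BertForPreTraining, get_linear_schedule_with_warmup

model = BertForPreTraining(BertConfig())  # a freshly initialised BERT-base-sized model with MLM and NSP heads

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(optimizer,
                                            num_warmup_steps=10_000,        # linear warmup for 10,000 steps
                                            num_training_steps=2_000_000)   # two million steps in total
```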
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-1-200k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 1 Checkpoint 200k (uncased)
Seed 1 intermediate checkpoint 200k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 1 Checkpoint 200k (uncased)\nSeed 1 intermediate checkpoint 200k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 1 Checkpoint 200k (uncased)\nSeed 1 intermediate checkpoint 200k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 1 Checkpoint 200k (uncased)\nSeed 1 intermediate checkpoint 200k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08507105708122253,
0.00013391487300395966,
-0.002177406568080187,
0.06672777235507965,
0.08663933724164963,
0.0012517180293798447,
0.11681733280420303,
0.04988182336091995,
-0.0269145704805851,
0.02596377022564411,
0.0951235294342041,
0.03001543879508972,
0.041366420686244965,
0.06841014325618744,
0.09806802868843079,
-0.2574261426925659,
0.050675168633461,
-0.0637182742357254,
0.0545819029211998,
0.07491191476583481,
0.0996113196015358,
-0.0715748518705368,
0.06165648251771927,
0.03768902271986008,
-0.08629294484853745,
-0.018799694254994392,
-0.015244806185364723,
-0.03381536900997162,
0.10068829357624054,
0.06951969861984253,
0.06153596192598343,
0.0004039965569972992,
0.059171583503484726,
-0.08933480829000473,
0.016110451892018318,
0.04593736305832863,
0.0014245999045670033,
0.02500213123857975,
-0.008696462959051132,
0.01504061184823513,
0.1079050600528717,
0.038895197212696075,
0.07657524943351746,
0.036794114857912064,
-0.09442152082920074,
-0.11378098279237747,
-0.07770216464996338,
0.10308942198753357,
0.0522470660507679,
0.04207416623830795,
-0.006392200477421284,
0.0716731995344162,
-0.026656417176127434,
0.07308387011289597,
0.10529805719852448,
-0.25602152943611145,
-0.008623538538813591,
0.06731657683849335,
0.04254567623138428,
0.043057382106781006,
0.012602158822119236,
0.0264643095433712,
0.007261648774147034,
0.043028585612773895,
0.028270304203033447,
-0.02354370802640915,
0.12090326845645905,
-0.04782164469361305,
-0.15108272433280945,
-0.0439150370657444,
0.12161673605442047,
-0.006358077749609947,
-0.12572509050369263,
-0.09724181890487671,
-0.03216240555047989,
0.1179644912481308,
-0.002926081418991089,
-0.017331022769212723,
-0.004227026365697384,
0.012974000535905361,
0.025986863300204277,
-0.09391254186630249,
-0.08602158725261688,
-0.028104741126298904,
-0.03912428393959999,
0.13088348507881165,
0.046564117074012756,
0.05146070569753647,
-0.034899935126304626,
0.08691279590129852,
-0.11447322368621826,
-0.039772696793079376,
-0.05209622532129288,
-0.07991199940443039,
-0.018480094149708748,
0.009306715801358223,
-0.027271386235952377,
-0.0867127999663353,
-0.05816087871789932,
0.11179681122303009,
0.03698885440826416,
0.029499560594558716,
-0.0029574502259492874,
0.042734745889902115,
0.07489414513111115,
0.09667737782001495,
-0.03801886737346649,
0.05500025674700737,
0.03499097749590874,
-0.0236351378262043,
0.057605497539043427,
-0.05019178241491318,
-0.10216192901134491,
0.0787835419178009,
0.0027321865782141685,
0.0377599373459816,
0.02636806108057499,
0.03177548944950104,
-0.013045834377408028,
-0.07153061032295227,
0.15794965624809265,
-0.07691021263599396,
-0.008708538487553596,
-0.017973419278860092,
0.012466410174965858,
0.04557987302541733,
0.032606903463602066,
-0.009134631603956223,
-0.04721510410308838,
-0.007183529436588287,
-0.05841648578643799,
-0.02446088194847107,
-0.05472408980131149,
-0.11687517911195755,
0.0003636670298874378,
-0.04068487137556076,
-0.03297387808561325,
-0.14182178676128387,
-0.21167995035648346,
-0.019808603450655937,
0.0648169293999672,
-0.002046753652393818,
-0.011364039033651352,
0.022924775257706642,
0.016368350014090538,
-0.020711353048682213,
0.01215451117604971,
-0.04675692692399025,
-0.00045519135892391205,
-0.005004459992051125,
-0.036450907588005066,
0.05615386366844177,
-0.03880428522825241,
0.023528916761279106,
-0.06981925666332245,
0.023269152268767357,
-0.20889896154403687,
0.09156659245491028,
-0.03379824757575989,
0.000981876626610756,
-0.038561198860406876,
-0.04092516750097275,
0.005541790276765823,
0.04662911593914032,
-0.008593866601586342,
0.11833734810352325,
-0.1394110471010208,
-0.05110594630241394,
0.18621906638145447,
-0.15928667783737183,
-0.001120924949645996,
0.10019882768392563,
-0.049038901925086975,
0.06003817543387413,
0.13280610740184784,
0.09924901276826859,
0.08087951689958572,
-0.07211363315582275,
0.010365174151957035,
0.06420424580574036,
-0.06898820400238037,
0.054950036108493805,
0.08803482353687286,
-0.025915419682860374,
-0.1400437206029892,
0.028667211532592773,
-0.07132954895496368,
-0.007830899208784103,
-0.028269659727811813,
-0.02000105381011963,
0.003894660621881485,
-0.03735658898949623,
0.026606600731611252,
0.00474991649389267,
0.017470942810177803,
-0.03863539174199104,
-0.08242112398147583,
0.030184360221028328,
0.07584205269813538,
-0.07007758319377899,
0.042524099349975586,
-0.0693371370434761,
0.05893044173717499,
-0.07870667427778244,
-0.002631835639476776,
-0.1692097783088684,
-0.023632097989320755,
0.04546694457530975,
-0.05037335306406021,
0.051607415080070496,
0.0934019461274147,
0.00003173062577843666,
0.12208893895149231,
-0.03733185678720474,
0.0047783199697732925,
-0.00660022534430027,
-0.010677364654839039,
-0.05235905945301056,
-0.12236456573009491,
-0.08155886828899384,
-0.0694185420870781,
0.1018538698554039,
-0.07315269112586975,
0.028849754482507706,
-0.07191731035709381,
-0.02358032390475273,
-0.008522732183337212,
-0.0596950389444828,
-0.003445269539952278,
0.010070675984025002,
-0.02906160056591034,
-0.04839085042476654,
0.04641753435134888,
0.051940903067588806,
-0.06359510123729706,
0.07715214788913727,
-0.1027367040514946,
-0.06226429343223572,
0.054258495569229126,
0.014070593751966953,
-0.08224758505821228,
0.09441235661506653,
-0.019514834508299828,
-0.013256607577204704,
-0.06256942451000214,
-0.04268058016896248,
0.19315063953399658,
-0.021613560616970062,
0.10092779994010925,
-0.09110818803310394,
0.0028851341921836138,
0.030424047261476517,
-0.04928816854953766,
-0.016892168670892715,
0.0612131729722023,
0.0495084673166275,
-0.19113653898239136,
0.015793029218912125,
0.0507238507270813,
0.07820646464824677,
0.1086001992225647,
0.025946196168661118,
-0.02403618022799492,
-0.04784245043992996,
-0.009769857861101627,
0.005799663253128529,
0.05587060749530792,
-0.027253761887550354,
-0.008586954325437546,
0.031039293855428696,
0.05849533528089523,
0.016759829595685005,
-0.08241553604602814,
0.03609254211187363,
0.06808614730834961,
-0.01763814501464367,
-0.038204044103622437,
-0.02640329673886299,
-0.05938637629151344,
0.06352705508470535,
0.05161753296852112,
0.039102837443351746,
0.025581447407603264,
-0.012904960662126541,
-0.13606341183185577,
0.19040243327617645,
-0.11442394554615021,
-0.2587091326713562,
-0.10784168541431427,
-0.05264803767204285,
-0.025600558146834373,
0.04261337220668793,
0.05789400637149811,
-0.029643526300787926,
-0.04093996807932854,
-0.1153930053114891,
0.0633457750082016,
-0.06521639972925186,
-0.03090403601527214,
-0.011233288794755936,
-0.052335843443870544,
-0.018938595429062843,
-0.12794014811515808,
-0.013357328251004219,
-0.029550787061452866,
-0.07537247240543365,
0.006148552522063255,
-0.03429233655333519,
0.028149956837296486,
0.14006558060646057,
0.03418684005737305,
-0.019380033016204834,
-0.016074523329734802,
0.19323618710041046,
0.011384773999452591,
0.060075096786022186,
0.11034594476222992,
-0.02768145315349102,
0.05424652248620987,
0.04405113309621811,
0.025725601240992546,
-0.0482247918844223,
0.012636641971766949,
-0.01622118428349495,
-0.12119167298078537,
-0.1720433235168457,
-0.07210634648799896,
-0.0018829931505024433,
0.006250039208680391,
0.020892910659313202,
0.037959471344947815,
0.021637985482811928,
0.040717337280511856,
-0.029687728732824326,
0.02904946729540825,
-0.015692830085754395,
0.08094611018896103,
0.024872079491615295,
-0.07593057304620743,
0.09240645170211792,
-0.06140223518013954,
0.01626785844564438,
0.10908490419387817,
-0.0631842389702797,
0.19344425201416016,
0.026431158185005188,
0.05970345437526703,
0.10286634415388107,
0.019236188381910324,
0.054806213825941086,
0.0864754170179367,
-0.04450698569417,
0.006047151982784271,
-0.06102770194411278,
-0.05118265002965927,
-0.036864522844552994,
0.04954252392053604,
0.028547435998916626,
0.01864989846944809,
-0.12046804279088974,
0.021861910820007324,
-0.002661080565303564,
0.13725271821022034,
0.04770524427294731,
-0.1156739592552185,
-0.1236325204372406,
0.0350990816950798,
-0.04683221876621246,
-0.06518644094467163,
0.030326377600431442,
0.06099338084459305,
-0.1531289964914322,
0.04117045924067497,
-0.005924641154706478,
0.0685376301407814,
-0.09334666281938553,
0.014868681319057941,
-0.04833764582872391,
-0.0006629014387726784,
0.004318833816796541,
0.07012522220611572,
-0.13434165716171265,
0.10959137976169586,
0.020380154252052307,
0.04843476787209511,
-0.08096028864383698,
0.015364453196525574,
-0.011870014481246471,
0.1092647910118103,
0.11614766716957092,
0.0451388880610466,
-0.05681046098470688,
-0.01958347111940384,
-0.04637196660041809,
0.019849183037877083,
0.058525435626506805,
-0.07931524515151978,
0.05915013700723648,
0.008443939499557018,
0.007500079460442066,
-0.022456198930740356,
0.01864439994096756,
-0.1331225037574768,
-0.12090206146240234,
0.06063656508922577,
-0.07585439085960388,
-0.09861751645803452,
-0.05752786248922348,
-0.061966195702552795,
-0.04261705279350281,
0.20824584364891052,
-0.11501641571521759,
-0.0891946405172348,
-0.09743410348892212,
-0.009306341409683228,
0.04606577381491661,
-0.06688450276851654,
0.0457318052649498,
-0.03923150524497032,
0.09071195125579834,
-0.04864255338907242,
-0.10936149954795837,
0.03269388526678085,
-0.11261245608329773,
-0.11293695122003555,
-0.04479028284549713,
0.10394404828548431,
0.11452528089284897,
0.03805191442370415,
0.01198196318000555,
0.011329254135489464,
-0.00047380290925502777,
-0.11730258166790009,
0.015166025608778,
0.13261988759040833,
-0.0019561145454645157,
0.06990332156419754,
-0.06232602894306183,
0.029370389878749847,
-0.01787230558693409,
-0.0016844682395458221,
0.1328575313091278,
0.18583142757415771,
-0.06634774804115295,
0.17383159697055817,
0.20058436691761017,
-0.10684317350387573,
-0.190593421459198,
-0.05370911583304405,
0.00039798393845558167,
0.04511565715074539,
0.047280605882406235,
-0.18680116534233093,
0.09297008067369461,
0.03516935929656029,
-0.03295889496803284,
0.023964647203683853,
-0.23929542303085327,
-0.11116331815719604,
0.09100107848644257,
0.0535430945456028,
0.19486820697784424,
-0.08066275715827942,
-0.04033178836107254,
-0.01623082533478737,
-0.03483150154352188,
0.045696742832660675,
-0.030781783163547516,
0.09258732199668884,
0.008472701534628868,
-0.0293547585606575,
0.0032006949186325073,
-0.02970140427350998,
0.09636768698692322,
0.042532432824373245,
0.022951435297727585,
-0.06939657032489777,
-0.0077952612191438675,
0.10741984844207764,
-0.037993885576725006,
0.1000179648399353,
0.04367674142122269,
0.07505707442760468,
-0.09398816525936127,
-0.05948930233716965,
-0.07415996491909027,
0.04550281912088394,
-0.04134921729564667,
-0.054478392004966736,
-0.06307096779346466,
0.05729753151535988,
0.03697652369737625,
0.009651374071836472,
-0.0008467007428407669,
-0.038124408572912216,
0.04502098634839058,
0.08810824900865555,
0.07791824638843536,
-0.03672434762120247,
-0.07612912356853485,
-0.05139535665512085,
-0.048058196902275085,
0.0679725781083107,
-0.0938168317079544,
0.01899747923016548,
0.02606947161257267,
0.011721674352884293,
0.09076735377311707,
0.03395527973771095,
-0.13724465668201447,
0.011614592745900154,
0.03129338473081589,
-0.12132395803928375,
-0.11646661162376404,
-0.01986750215291977,
0.034968528896570206,
-0.03686940670013428,
0.054599322378635406,
0.14684444665908813,
-0.038184575736522675,
-0.03230468928813934,
-0.049508385360240936,
0.03682922571897507,
-0.02069951221346855,
0.048214949667453766,
0.06485067307949066,
0.03174237534403801,
-0.07153712213039398,
0.07520262897014618,
0.040448009967803955,
-0.03766222670674324,
0.04256780818104744,
0.0379222147166729,
-0.09293584525585175,
-0.07880590111017227,
-0.05788006633520126,
0.09248381108045578,
-0.022005269303917885,
-0.04374702274799347,
0.0010686349123716354,
-0.08336810022592545,
0.06856907904148102,
0.07883837074041367,
0.04807628318667412,
0.04039694741368294,
-0.08897813409566879,
0.014799567870795727,
-0.053894005715847015,
0.03500160202383995,
-0.025895757600665092,
-0.005431240424513817,
-0.055999286472797394,
0.06769031286239624,
0.06499462574720383,
0.10067327320575714,
-0.03432818129658699,
-0.07794871181249619,
-0.0828862339258194,
-0.01302395947277546,
-0.06408355385065079,
-0.029759036377072334,
-0.0765538215637207,
-0.009191112592816353,
0.002996258670464158,
-0.0027423277497291565,
0.024861499667167664,
0.03536561504006386,
-0.043209418654441833,
-0.018123719841241837,
-0.037660203874111176,
0.038183726370334625,
-0.06126462295651436,
0.006202489137649536,
0.01589415967464447,
-0.036511003971099854,
0.09092020988464355,
0.037228286266326904,
-0.013374095782637596,
0.04232470691204071,
-0.023451782763004303,
0.038088880479335785,
-0.01993071474134922,
-0.00003098743036389351,
-0.023811614140868187,
-0.11062899976968765,
-0.004007827956229448,
0.0034629423171281815,
-0.023585457354784012,
0.008559306152164936,
0.05939366668462753,
-0.07254394888877869,
0.08511881530284882,
0.045505136251449585,
-0.03187699243426323,
-0.07219787687063217,
0.03968265652656555,
-0.014520544558763504,
0.03029990755021572,
0.06912827491760254,
-0.03464927524328232,
0.05465550720691681,
-0.09841178357601166,
-0.02816394343972206,
0.004088462330400944,
-0.004046004265546799,
-0.010480433702468872,
-0.05544361472129822,
-0.004144748672842979,
0.0076105911284685135,
0.1696525514125824,
-0.021535176783800125,
0.03600435331463814,
0.013373369351029396,
0.007349117659032345,
0.047447968274354935,
-0.013170301914215088,
0.07186247408390045,
-0.005834132432937622,
-0.02512027882039547,
-0.015352446585893631,
0.03779048100113869,
0.005435295403003693,
0.004958152770996094,
0.14009015262126923,
0.047079235315322876,
0.08904511481523514,
0.0755448192358017,
0.01310668420046568,
0.017015628516674042,
-0.13381904363632202,
-0.08884737640619278,
0.005403043702244759,
0.05831649526953697,
-0.017719337716698647,
0.016790278255939484,
0.08892923593521118,
-0.08626061677932739,
0.07103411108255386,
0.05069974064826965,
-0.048271775245666504,
-0.1274082511663437,
-0.1949625015258789,
-0.025130383670330048,
-0.02982589788734913,
-0.011861339211463928,
-0.08969730138778687,
0.016574539244174957,
0.08989270776510239,
0.023731283843517303,
-0.010145354084670544,
0.09272114187479019,
-0.10480831563472748,
-0.03224605321884155,
0.04485201835632324,
-0.02644108235836029,
0.015775062143802643,
0.04303722828626633,
0.023310689255595207,
-0.004369767382740974,
0.0440550334751606,
0.04067248851060867,
0.04629132151603699,
0.027676310390233994,
0.052205562591552734,
-0.026619523763656616,
-0.07358293980360031,
-0.03260252624750137,
-0.004586820490658283,
0.0532396025955677,
0.13614779710769653,
0.023453759029507637,
-0.07034309208393097,
0.006749757565557957,
0.11084757000207901,
-0.030447697266936302,
-0.0525955930352211,
-0.10774139314889908,
0.24436545372009277,
0.020713508129119873,
0.001844433369114995,
-0.004952401388436556,
-0.044906772673130035,
0.0047117676585912704,
0.2089969366788864,
0.22118526697158813,
0.004889534320682287,
-0.009990394115447998,
0.008969836868345737,
-0.012557921931147575,
0.036655500531196594,
0.14613434672355652,
0.0023059584200382233,
0.2547493577003479,
-0.04859618842601776,
0.04035911336541176,
-0.04337789863348007,
-0.03823235258460045,
-0.10315193235874176,
0.07128283381462097,
-0.011368006467819214,
0.00798759050667286,
-0.03365633636713028,
0.07105175405740738,
-0.04025658220052719,
-0.1729791909456253,
0.003605276346206665,
0.00008429004810750484,
-0.06023023650050163,
0.012049923650920391,
-0.0026556840166449547,
0.021109821274876595,
0.08350574225187302,
-0.018235042691230774,
-0.005433033686131239,
0.13237859308719635,
0.018484586849808693,
-0.09683520346879959,
-0.060844019055366516,
0.11403828859329224,
0.01660292223095894,
0.14179068803787231,
0.011422650888562202,
0.07645682245492935,
0.08606629073619843,
0.02256472408771515,
-0.09624774754047394,
0.041932933032512665,
-0.021372197195887566,
-0.025129001587629318,
0.007755685597658157,
0.10790694504976273,
-0.00813768059015274,
0.05989902839064598,
0.026333192363381386,
-0.09289627522230148,
0.06049013137817383,
0.00915474072098732,
-0.03403550386428833,
-0.0810730829834938,
0.08399548381567001,
-0.09063731133937836,
0.15683841705322266,
0.12154556065797806,
-0.01447767298668623,
-0.04644410312175751,
-0.028377512469887733,
0.018271848559379578,
0.0007533119060099125,
0.057496797293424606,
-0.025314904749393463,
-0.13454562425613403,
0.020417815074324608,
-0.08438382297754288,
0.026385243982076645,
-0.2478114664554596,
-0.09153243899345398,
0.03123403713107109,
-0.017857998609542847,
-0.01578720659017563,
0.04984498396515846,
0.04416806250810623,
0.027147497981786728,
-0.03704652190208435,
0.016993455588817596,
-0.03600597754120827,
0.05988682061433792,
-0.10767505317926407,
-0.09296727180480957
] |
null | null | transformers | # MultiBERTs Seed 1 Checkpoint 20k (uncased)
Seed 1 intermediate checkpoint 20k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multiberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# load the tokenizer and the 20k intermediate checkpoint
# (on the Hub this checkpoint is hosted as MultiBertGunjanPatrick/multiberts-seed-1-20k)
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-20k')
model = BertModel.from_pretrained("multiberts-seed-1-20k")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')  # PyTorch tensors: input_ids, token_type_ids, attention_mask
output = model(**encoded_input)                        # output.last_hidden_state holds the token-level features
```
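Beyond feature extraction, the intended-uses section above notes that the raw model can also be queried for masked language modeling. A minimal sketch follows; the example sentence is our own, and the MLM head is loaded from the checkpoint if present (otherwise `from_pretrained` warns that it is newly initialized):

```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-20k')
mlm_model = BertForMaskedLM.from_pretrained('multiberts-seed-1-20k')

inputs = tokenizer("The capital of France is [MASK].", return_tensors='pt')
with torch.no_grad():
    logits = mlm_model(**inputs).logits

# find the [MASK] position and list its five highest-scoring replacement tokens
mask_pos = (inputs['input_ids'] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top5 = logits[0, mask_pos].topk(5).indices[0]
print(tokenizer.convert_ids_to_tokens(top5.tolist()))
```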
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-1-20k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 1 Checkpoint 20k (uncased)
Seed 1 intermediate checkpoint 20k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 1 Checkpoint 20k (uncased)\nSeed 1 intermediate checkpoint 20k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 1 Checkpoint 20k (uncased)\nSeed 1 intermediate checkpoint 20k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 1 Checkpoint 20k (uncased)\nSeed 1 intermediate checkpoint 20k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08485420048236847,
0.0015358941163867712,
-0.002145105507224798,
0.06369725614786148,
0.08508405089378357,
-0.0005192456301301718,
0.1206873431801796,
0.05048117786645889,
-0.028236348181962967,
0.029103681445121765,
0.09638476371765137,
0.03134853392839432,
0.04053360968828201,
0.07113943994045258,
0.09691989421844482,
-0.2581392824649811,
0.049958087503910065,
-0.06380758434534073,
0.05917289853096008,
0.07552020251750946,
0.10027587413787842,
-0.07249505817890167,
0.061266180127859116,
0.038697898387908936,
-0.08842363953590393,
-0.017788276076316833,
-0.015291545540094376,
-0.034876786172389984,
0.10056394338607788,
0.0695899948477745,
0.06124899536371231,
0.0009195320308208466,
0.05702084302902222,
-0.08938178420066833,
0.015813151374459267,
0.04722725227475166,
-0.001097093801945448,
0.02580925077199936,
-0.006957709789276123,
0.012349076569080353,
0.11297531425952911,
0.032958272844552994,
0.0777662992477417,
0.03615807369351387,
-0.09453170001506805,
-0.11684916168451309,
-0.07799325883388519,
0.10362452268600464,
0.05192558094859123,
0.04250328987836838,
-0.007403523661196232,
0.07345697283744812,
-0.0278514102101326,
0.07622481882572174,
0.11080026626586914,
-0.2557511627674103,
-0.008932370692491531,
0.06607009470462799,
0.043191097676754,
0.04539671540260315,
0.01301037147641182,
0.026319194585084915,
0.006820101290941238,
0.0414750911295414,
0.02649841457605362,
-0.023762807250022888,
0.12471023947000504,
-0.04645831882953644,
-0.15182241797447205,
-0.04486306011676788,
0.12658900022506714,
-0.0066271331161260605,
-0.12370090186595917,
-0.10080916434526443,
-0.03340110182762146,
0.11797262728214264,
-0.0047959149815142155,
-0.017092591151595116,
-0.0027155796997249126,
0.011813807301223278,
0.024854421615600586,
-0.09018006920814514,
-0.08553201705217361,
-0.027483968064188957,
-0.03740992769598961,
0.13017114996910095,
0.04698596149682999,
0.049728814512491226,
-0.037903305143117905,
0.08538075536489487,
-0.1202230155467987,
-0.039851754903793335,
-0.050707414746284485,
-0.0768616795539856,
-0.01875157840549946,
0.00798178929835558,
-0.029794957488775253,
-0.09169132262468338,
-0.05895226448774338,
0.1161736398935318,
0.035208892077207565,
0.03117438405752182,
-0.006155208684504032,
0.0427798368036747,
0.0760292038321495,
0.09669655561447144,
-0.03813452273607254,
0.053216539323329926,
0.03509156033396721,
-0.027323324233293533,
0.06059582531452179,
-0.050878897309303284,
-0.10140996426343918,
0.08067324757575989,
0.002903464250266552,
0.03623851388692856,
0.026273835450410843,
0.032865773886442184,
-0.011564129963517189,
-0.07135459780693054,
0.15741918981075287,
-0.07757119089365005,
-0.0087065938860178,
-0.016875484958291054,
0.012527469545602798,
0.04456372559070587,
0.03378084674477577,
-0.009341174736618996,
-0.045653779059648514,
-0.008266839198768139,
-0.05972456559538841,
-0.02519763447344303,
-0.0536535419523716,
-0.11737328767776489,
0.0007689977064728737,
-0.0390610471367836,
-0.03180139511823654,
-0.13792511820793152,
-0.21542716026306152,
-0.02092280238866806,
0.06405326724052429,
-0.0009889178909361362,
-0.011185499839484692,
0.021725976839661598,
0.015978986397385597,
-0.02007652260363102,
0.012785148806869984,
-0.0451468788087368,
-0.0001854104921221733,
-0.005365220829844475,
-0.03525065630674362,
0.0553475022315979,
-0.03893868252635002,
0.023912591859698296,
-0.06822293996810913,
0.024211902171373367,
-0.21086762845516205,
0.091961570084095,
-0.03512899950146675,
0.0012450609356164932,
-0.03785189613699913,
-0.04030831158161163,
0.009314816445112228,
0.04676205664873123,
-0.009405892342329025,
0.11572682857513428,
-0.13806699216365814,
-0.054552480578422546,
0.19055835902690887,
-0.15942850708961487,
0.0021025165915489197,
0.09810661524534225,
-0.049292661249637604,
0.059555452316999435,
0.13282406330108643,
0.10285192728042603,
0.07792225480079651,
-0.06998491287231445,
0.009956875815987587,
0.06335368007421494,
-0.06872197240591049,
0.05588933080434799,
0.08933769166469574,
-0.024569185450673103,
-0.135564386844635,
0.02916807308793068,
-0.07223501801490784,
-0.009139837697148323,
-0.02764483168721199,
-0.019293932244181633,
0.0030328556895256042,
-0.03438965976238251,
0.0238342322409153,
0.006715898402035236,
0.01809540018439293,
-0.04087696596980095,
-0.0826580598950386,
0.03136404976248741,
0.07674463093280792,
-0.07049297541379929,
0.040966495871543884,
-0.069671131670475,
0.058764055371284485,
-0.07735012471675873,
-0.0026313820853829384,
-0.1695455014705658,
-0.023348936811089516,
0.04529157280921936,
-0.05056705325841904,
0.052904874086380005,
0.09529517590999603,
0.0010665630688890815,
0.12240148335695267,
-0.03553840517997742,
0.004911262076348066,
-0.0052613671869039536,
-0.010930664837360382,
-0.05335462838411331,
-0.12666043639183044,
-0.07902989536523819,
-0.07021044939756393,
0.10706420242786407,
-0.07683296501636505,
0.02884211391210556,
-0.06868769228458405,
-0.02154187113046646,
-0.009279362857341766,
-0.06091301888227463,
-0.0027470076456665993,
0.008116014301776886,
-0.029349924996495247,
-0.04907510057091713,
0.04559007287025452,
0.05153216794133186,
-0.06359654664993286,
0.0756339430809021,
-0.10719354450702667,
-0.0642886683344841,
0.05476084351539612,
0.013956967741250992,
-0.08257398754358292,
0.09166122227907181,
-0.021259384229779243,
-0.013423292897641659,
-0.058582853525877,
-0.040367063134908676,
0.19395828247070312,
-0.022786643356084824,
0.10219579935073853,
-0.09067091345787048,
0.0009913957910612226,
0.031221501529216766,
-0.048911191523075104,
-0.017553821206092834,
0.06324829906225204,
0.0467132106423378,
-0.1944005787372589,
0.017228011041879654,
0.050831638276576996,
0.07885304093360901,
0.10904395580291748,
0.024690933525562286,
-0.025718847289681435,
-0.04845757782459259,
-0.008303400129079819,
0.005158166401088238,
0.055170200765132904,
-0.03203520551323891,
-0.011987859383225441,
0.03211383521556854,
0.05644086003303528,
0.017483487725257874,
-0.08277282118797302,
0.03679196164011955,
0.06791931390762329,
-0.016086526215076447,
-0.04152435436844826,
-0.026185426861047745,
-0.060236554592847824,
0.06255176663398743,
0.05160762369632721,
0.03854138404130936,
0.026465479284524918,
-0.013619705103337765,
-0.13598066568374634,
0.18969246745109558,
-0.11484004557132721,
-0.2576482892036438,
-0.1068955659866333,
-0.0565558522939682,
-0.027403155341744423,
0.04177933931350708,
0.05910905450582504,
-0.03010866604745388,
-0.04286801069974899,
-0.1139557734131813,
0.06061871722340584,
-0.0680273100733757,
-0.03145277500152588,
-0.010597949847579002,
-0.053108133375644684,
-0.01988028734922409,
-0.12779322266578674,
-0.013184167444705963,
-0.029661014676094055,
-0.0738813579082489,
0.007260428741574287,
-0.03454981744289398,
0.025875553488731384,
0.1384115368127823,
0.03634440898895264,
-0.017479367554187775,
-0.015997441485524178,
0.19137638807296753,
0.009309748187661171,
0.05740484967827797,
0.11032923310995102,
-0.026595691218972206,
0.05353589728474617,
0.04188971221446991,
0.024840742349624634,
-0.0477801077067852,
0.010750338435173035,
-0.015493289567530155,
-0.12028761208057404,
-0.17345395684242249,
-0.07235662639141083,
-0.001627492718398571,
0.005527266301214695,
0.02166416496038437,
0.03805776685476303,
0.02648172155022621,
0.03909922018647194,
-0.03027588129043579,
0.03218640387058258,
-0.015003323554992676,
0.08242666721343994,
0.026732664555311203,
-0.07862130552530289,
0.09253197908401489,
-0.06045079976320267,
0.017287278547883034,
0.10941940546035767,
-0.06578934192657471,
0.1912287324666977,
0.029370583593845367,
0.066600002348423,
0.10261563211679459,
0.01801212877035141,
0.05359022319316864,
0.0853005200624466,
-0.04200756549835205,
0.0045196618884801865,
-0.06215286999940872,
-0.051919084042310715,
-0.036253634840250015,
0.052017487585544586,
0.026883630082011223,
0.0165741927921772,
-0.11831650137901306,
0.020130131393671036,
-0.001609285012818873,
0.13732866942882538,
0.04421240836381912,
-0.11871862411499023,
-0.12313520908355713,
0.03480523079633713,
-0.04907601326704025,
-0.06527810543775558,
0.029885105788707733,
0.05861007422208786,
-0.15286266803741455,
0.043738771229982376,
-0.006400478072464466,
0.06765427440404892,
-0.09277550876140594,
0.01535087637603283,
-0.05256272479891777,
0.001743321307003498,
0.004712914582341909,
0.07147330045700073,
-0.13693664968013763,
0.10805714130401611,
0.021551959216594696,
0.045506801456213,
-0.0834212675690651,
0.01593465730547905,
-0.012124906294047832,
0.10974784195423126,
0.11767782270908356,
0.044215068221092224,
-0.05941028147935867,
-0.015277127735316753,
-0.047047365456819534,
0.019609777256846428,
0.06095530092716217,
-0.07932423800230026,
0.0591338574886322,
0.008582035079598427,
0.007495400495827198,
-0.021123675629496574,
0.019121941179037094,
-0.12929008901119232,
-0.12133315205574036,
0.06291194260120392,
-0.07436735183000565,
-0.10088898241519928,
-0.059492744505405426,
-0.061595119535923004,
-0.047845445573329926,
0.21447134017944336,
-0.11598667502403259,
-0.08779038488864899,
-0.0993582084774971,
-0.00464017316699028,
0.044353045523166656,
-0.0671783834695816,
0.04550431668758392,
-0.039338115602731705,
0.09419669955968857,
-0.04938819259405136,
-0.10931402444839478,
0.03447195142507553,
-0.11299772560596466,
-0.11267299950122833,
-0.04525645822286606,
0.10553120076656342,
0.11295480281114578,
0.03819485753774643,
0.009965687058866024,
0.012663943693041801,
-0.0029375124722719193,
-0.11592289805412292,
0.0157826766371727,
0.13588643074035645,
-0.003287944942712784,
0.06828771531581879,
-0.05843977630138397,
0.032951828092336655,
-0.016751129180192947,
-0.00121411494910717,
0.13448095321655273,
0.18571236729621887,
-0.06535898149013519,
0.1748233139514923,
0.1966341733932495,
-0.10468458384275436,
-0.1884792447090149,
-0.052092716097831726,
0.00036290474236011505,
0.04439115896821022,
0.046575672924518585,
-0.18748676776885986,
0.09312625229358673,
0.035130515694618225,
-0.03287557139992714,
0.02345428615808487,
-0.2369847595691681,
-0.10982084274291992,
0.09302876889705658,
0.05425156652927399,
0.19287702441215515,
-0.0804731473326683,
-0.041821882128715515,
-0.015615260228514671,
-0.02779868245124817,
0.04558552801609039,
-0.039012521505355835,
0.09234357625246048,
0.009390931576490402,
-0.028642484918236732,
0.003130042925477028,
-0.030943691730499268,
0.09414270520210266,
0.042392656207084656,
0.024364305660128593,
-0.06905196607112885,
-0.0074325017631053925,
0.11195755004882812,
-0.03786101192235947,
0.09942139685153961,
0.04299585521221161,
0.07538627088069916,
-0.09199303388595581,
-0.058791495859622955,
-0.07405833899974823,
0.04604867100715637,
-0.04138583689928055,
-0.05337308719754219,
-0.06338806450366974,
0.0554271899163723,
0.03681570291519165,
0.010086129419505596,
0.0044263750314712524,
-0.03745325282216072,
0.043218642473220825,
0.08922756463289261,
0.07625461369752884,
-0.03633167967200279,
-0.07508483529090881,
-0.053488101810216904,
-0.046866659075021744,
0.06687778234481812,
-0.09711345285177231,
0.01699746400117874,
0.024551095440983772,
0.014700053259730339,
0.09311836957931519,
0.03266068175435066,
-0.13952796161174774,
0.009957296773791313,
0.030854785814881325,
-0.1240176409482956,
-0.11756614595651627,
-0.02067508175969124,
0.03696049004793167,
-0.034762658178806305,
0.05521845445036888,
0.14705321192741394,
-0.03621478006243706,
-0.03198281675577164,
-0.04945091903209686,
0.03606117516756058,
-0.020146362483501434,
0.04611703008413315,
0.06326316297054291,
0.031308937817811966,
-0.07442618906497955,
0.07343703508377075,
0.04200880974531174,
-0.0376189723610878,
0.0438767708837986,
0.040398627519607544,
-0.09282007813453674,
-0.07748930901288986,
-0.057462193071842194,
0.09620651602745056,
-0.02190864272415638,
-0.04556549713015556,
-0.001659763976931572,
-0.08135852217674255,
0.06826575100421906,
0.08029928803443909,
0.04740484058856964,
0.03963713347911835,
-0.08885005861520767,
0.015577648766338825,
-0.05443189665675163,
0.03492429479956627,
-0.030850889161229134,
-0.003997098654508591,
-0.0558934360742569,
0.07190854847431183,
0.0655132532119751,
0.0985390916466713,
-0.0339643657207489,
-0.07828624546527863,
-0.08199828863143921,
-0.013747534714639187,
-0.05904581397771835,
-0.031694844365119934,
-0.07812760770320892,
-0.009579924866557121,
0.0035873542074114084,
-0.001016927883028984,
0.02484283037483692,
0.03529276326298714,
-0.043259866535663605,
-0.01826123520731926,
-0.03668675199151039,
0.03977259621024132,
-0.06302295625209808,
0.007812606170773506,
0.014407053589820862,
-0.036495521664619446,
0.09103448688983917,
0.040526315569877625,
-0.011745830997824669,
0.043510302901268005,
-0.02526923269033432,
0.0380638986825943,
-0.019320692867040634,
-0.0013727578334510326,
-0.02182941697537899,
-0.11008119583129883,
-0.005494191311299801,
0.0031952541321516037,
-0.02582346275448799,
0.008225158788263798,
0.05365641415119171,
-0.07195653766393661,
0.08455317467451096,
0.04579668492078781,
-0.03350331261754036,
-0.0736529529094696,
0.040360335260629654,
-0.017760464921593666,
0.027884162962436676,
0.06659477949142456,
-0.03402624651789665,
0.05413728579878807,
-0.098210409283638,
-0.028484374284744263,
0.00365122826769948,
-0.005025941878557205,
-0.009659705683588982,
-0.05546051263809204,
-0.004511035978794098,
0.005702552385628223,
0.17384205758571625,
-0.021102063357830048,
0.03446521610021591,
0.01304143387824297,
0.008713580667972565,
0.04873410984873772,
-0.015170454978942871,
0.07190531492233276,
-0.004413718357682228,
-0.023716315627098083,
-0.01513338927179575,
0.03895769640803337,
0.006707785651087761,
0.0016336645931005478,
0.13728293776512146,
0.046537965536117554,
0.08611039817333221,
0.07480945438146591,
0.01367358211427927,
0.014597024768590927,
-0.1353517770767212,
-0.08199487626552582,
0.005856561474502087,
0.059675879776477814,
-0.017215780913829803,
0.017146948724985123,
0.09134212881326675,
-0.0886048823595047,
0.07112613320350647,
0.05282565951347351,
-0.047428637742996216,
-0.12586796283721924,
-0.19562259316444397,
-0.026173407211899757,
-0.03163008391857147,
-0.011980392970144749,
-0.09056220948696136,
0.018972869962453842,
0.08436055481433868,
0.02380567416548729,
-0.010243121534585953,
0.09412922710180283,
-0.1029733419418335,
-0.0331418476998806,
0.043445855379104614,
-0.026484623551368713,
0.013874753378331661,
0.03871811181306839,
0.02350890077650547,
-0.003350229933857918,
0.04231179505586624,
0.040732912719249725,
0.044219158589839935,
0.03144988417625427,
0.05259350687265396,
-0.02645127847790718,
-0.07351008802652359,
-0.03172820061445236,
-0.004704678431153297,
0.05260312184691429,
0.1307550072669983,
0.022176746279001236,
-0.07099092751741409,
0.006011898163706064,
0.10729054361581802,
-0.030580798164010048,
-0.04860920086503029,
-0.10668995976448059,
0.2481498420238495,
0.019686473533511162,
0.0007566781714558601,
-0.0052152834832668304,
-0.04524381831288338,
0.006865130737423897,
0.20792005956172943,
0.2184140533208847,
0.008552898652851582,
-0.009320070035755634,
0.009589082561433315,
-0.0128357894718647,
0.034264031797647476,
0.14056944847106934,
0.0018772203475236893,
0.25533464550971985,
-0.04760853201150894,
0.04007672518491745,
-0.042458947747945786,
-0.039368607103824615,
-0.101661816239357,
0.07171586155891418,
-0.00803731381893158,
0.0062766047194600105,
-0.03128711134195328,
0.06979801505804062,
-0.03830049932003021,
-0.17404013872146606,
0.0034675849601626396,
-0.00040437374264001846,
-0.059887174516916275,
0.012150846421718597,
-0.002238791435956955,
0.022377822548151016,
0.08292911946773529,
-0.01741601526737213,
-0.007367140147835016,
0.12946012616157532,
0.018710657954216003,
-0.09823278337717056,
-0.05962347984313965,
0.11464764922857285,
0.012403760105371475,
0.14097997546195984,
0.010106880217790604,
0.0771874412894249,
0.08732724189758301,
0.021903233602643013,
-0.09413617104291916,
0.04147423058748245,
-0.020612092688679695,
-0.026918575167655945,
0.005482617300003767,
0.11145851761102676,
-0.008001565933227539,
0.06052932143211365,
0.027203496545553207,
-0.09349016845226288,
0.06134212762117386,
0.010677896440029144,
-0.03546401113271713,
-0.08203406631946564,
0.08342747390270233,
-0.08998796343803406,
0.15696179866790771,
0.12268137186765671,
-0.014254777692258358,
-0.04601506143808365,
-0.030189229175448418,
0.019230954349040985,
0.0016101947985589504,
0.05880741775035858,
-0.024263441562652588,
-0.13429757952690125,
0.022190066054463387,
-0.08155769854784012,
0.027469869703054428,
-0.24748098850250244,
-0.09128762781620026,
0.03109888732433319,
-0.018646037206053734,
-0.015576314181089401,
0.04648982733488083,
0.045741382986307144,
0.026163529604673386,
-0.037269555032253265,
0.019623639062047005,
-0.03564839065074921,
0.059646934270858765,
-0.10751408338546753,
-0.09165120869874954
] |
null | null | transformers | # MultiBERTs Seed 1 Checkpoint 300k (uncased)
Seed 1 intermediate checkpoint 300k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they
were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of
publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, they
were pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
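To make that last point concrete, here is a minimal sketch (not part of the original card) of training a standard classifier on top of frozen MultiBERTs features. The two example sentences and labels are placeholders, and the repository id used is the full Hub id listed for this checkpoint.

```python
import torch
from sklearn.linear_model import LogisticRegression
from transformers import BertModel, BertTokenizer

repo = "MultiBertGunjanPatrick/multiberts-seed-1-300k"
tokenizer = BertTokenizer.from_pretrained(repo)
model = BertModel.from_pretrained(repo)
model.eval()

# Placeholder labeled sentences -- substitute your own dataset here.
sentences = ["the movie was great", "the movie was terrible"]
labels = [1, 0]

with torch.no_grad():
    enc = tokenizer(sentences, padding=True, return_tensors="pt")
    # Take the [CLS] token's final hidden state as a fixed-size sentence feature.
    features = model(**enc).last_hidden_state[:, 0, :].numpy()

# Any standard classifier can be trained on these frozen features.
clf = LogisticRegression().fit(features, labels)
```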
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
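For the fine-tuning use case described above, a minimal sketch of attaching a task head might look like the following. The two-label setup is an assumption for a hypothetical binary task, and the classification head is newly initialized, so it still needs to be trained.

```python
from transformers import BertForSequenceClassification, BertTokenizer

repo = "MultiBertGunjanPatrick/multiberts-seed-1-300k"
tokenizer = BertTokenizer.from_pretrained(repo)
# Loads the pretrained encoder and adds a randomly initialized classification head.
model = BertForSequenceClassification.from_pretrained(repo, num_labels=2)
# The model can now be fine-tuned with the Trainer API or a plain PyTorch training loop.
```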
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# Load the tokenizer and the encoder weights for this checkpoint
# (the full Hub id for this checkpoint is MultiBertGunjanPatrick/multiberts-seed-1-300k).
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-300k')
model = BertModel.from_pretrained("multiberts-seed-1-300k")

text = "Replace me by any text you'd like."
# Tokenize, return PyTorch tensors, and run a forward pass to get the hidden states.
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
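The snippet referred to above is essentially a fill-mask probe; a hedged adaptation for this checkpoint might look like the sketch below (the exact prompts in the bert-base-uncased card may differ, and the completions will reflect whatever biases the pretraining data carries):

```python
from transformers import pipeline

# Query the pretraining MLM head directly to inspect potentially biased completions.
unmasker = pipeline("fill-mask", model="MultiBertGunjanPatrick/multiberts-seed-1-300k")
print(unmasker("The man worked as a [MASK]."))
print(unmasker("The woman worked as a [MASK]."))
```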
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
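As an illustration (not from the original card), the tokenizer produces this format automatically when given a sentence pair; the example sentences here are made up:

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-300k")
ids = tokenizer("The cat sat on the mat.", "It fell asleep.")["input_ids"]
# Decoding shows the [CLS] ... [SEP] ... [SEP] layout described above.
print(tokenizer.decode(ids))
# e.g. "[CLS] the cat sat on the mat . [SEP] it fell asleep . [SEP]"
```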
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a simplified sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
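The sketch below is a simplification that ignores wordpiece boundaries, special tokens, and the requirement that the random replacement differ from the original, but it illustrates the 80/10/10 scheme:

```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15):
    """Simplified illustration of the masking scheme described above."""
    masked, labels = [], []
    for tok in tokens:
        if random.random() < mask_prob:
            labels.append(tok)                       # target the model must predict
            r = random.random()
            if r < 0.8:
                masked.append("[MASK]")              # 80%: replace with the mask token
            elif r < 0.9:
                masked.append(random.choice(vocab))  # 10%: replace with a random token
            else:
                masked.append(tok)                   # 10%: keep the original token
        else:
            masked.append(tok)
            labels.append(None)                      # not a prediction target
    return masked, labels
```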
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
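In current PyTorch/Transformers terms, that schedule corresponds roughly to the following sketch (AdamW is used here as the decoupled-weight-decay variant of Adam; this is illustrative, not the original TPU training code):

```python
import torch
from transformers import BertConfig, BertForPreTraining, get_linear_schedule_with_warmup

# A randomly initialized BERT-base model stands in for the real pretraining setup.
model = BertForPreTraining(BertConfig())

optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01
)
# Linear warmup for the first 10,000 steps, then linear decay over the 2M total steps.
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=2_000_000
)
```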
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-1-300k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 1 Checkpoint 300k (uncased)
Seed 1 intermediate checkpoint 300k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they
were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of
publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, they
were pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitations and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 1 Checkpoint 300k (uncased)\nSeed 1 intermediate checkpoint 300k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 1 Checkpoint 300k (uncased)\nSeed 1 intermediate checkpoint 300k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 1 Checkpoint 300k (uncased)\nSeed 1 intermediate checkpoint 300k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.0837249904870987,
-0.006205412559211254,
-0.002046861220151186,
0.06804647296667099,
0.08727937936782837,
0.0015540318563580513,
0.1168680265545845,
0.04920536279678345,
-0.02908049151301384,
0.02408207766711712,
0.09685809910297394,
0.030120953917503357,
0.041015829890966415,
0.0645119845867157,
0.09762739390134811,
-0.2567785382270813,
0.050919510424137115,
-0.06408102810382843,
0.05803780257701874,
0.0767049640417099,
0.10072344541549683,
-0.07082558423280716,
0.06249864399433136,
0.03682201728224754,
-0.08335383981466293,
-0.016851751133799553,
-0.016056977212429047,
-0.033806849271059036,
0.09937962889671326,
0.06885311007499695,
0.062112413346767426,
-0.0003677438944578171,
0.061165712773799896,
-0.09043663740158081,
0.015711871907114983,
0.04478275030851364,
0.0021068938076496124,
0.024104679003357887,
-0.010824933648109436,
0.014061138033866882,
0.10585057735443115,
0.037950754165649414,
0.07824628055095673,
0.0367935448884964,
-0.09581786394119263,
-0.1092756986618042,
-0.07968485355377197,
0.10227133333683014,
0.05333409458398819,
0.04326508194208145,
-0.006098031997680664,
0.07492657005786896,
-0.02895793318748474,
0.07424238324165344,
0.11132588982582092,
-0.25463342666625977,
-0.008090637624263763,
0.07044192403554916,
0.04339718818664551,
0.0438096821308136,
0.012779509648680687,
0.025956152006983757,
0.0062389858067035675,
0.04288255423307419,
0.0307496078312397,
-0.02378697693347931,
0.12189524620771408,
-0.04822015017271042,
-0.15186850726604462,
-0.04371734336018562,
0.12188999354839325,
-0.00515872985124588,
-0.12532967329025269,
-0.10171042382717133,
-0.03212545067071915,
0.11336658895015717,
-0.0024942532181739807,
-0.018117541447281837,
-0.004022875800728798,
0.011702751740813255,
0.024942195042967796,
-0.09352186322212219,
-0.08500993251800537,
-0.029507573693990707,
-0.0392332449555397,
0.1343412697315216,
0.04556790366768837,
0.05220125988125801,
-0.0340145081281662,
0.08785396814346313,
-0.11158332228660583,
-0.04107288271188736,
-0.0521276593208313,
-0.0795324295759201,
-0.017914054915308952,
0.008428622037172318,
-0.026888210326433182,
-0.08846935629844666,
-0.05880267173051834,
0.11507762968540192,
0.03653603419661522,
0.02980741113424301,
-0.0005088145844638348,
0.04197341948747635,
0.07681001722812653,
0.09897511452436447,
-0.03875785320997238,
0.04931453987956047,
0.03525921702384949,
-0.021440286189317703,
0.05954764783382416,
-0.05003969371318817,
-0.10062583535909653,
0.0788896307349205,
0.0018094172701239586,
0.038663897663354874,
0.02591194584965706,
0.03238033875823021,
-0.014344410970807076,
-0.07179722189903259,
0.16241806745529175,
-0.07809501886367798,
-0.009813623502850533,
-0.019276157021522522,
0.013360558077692986,
0.048548050224781036,
0.03304466977715492,
-0.008853847160935402,
-0.04780126363039017,
-0.006917967461049557,
-0.05688466504216194,
-0.02553550899028778,
-0.055161915719509125,
-0.11699901521205902,
-0.0015783170238137245,
-0.037468455731868744,
-0.03281765431165695,
-0.14112898707389832,
-0.21465429663658142,
-0.019634028896689415,
0.06519372761249542,
-0.001716656144708395,
-0.009780939668416977,
0.022929765284061432,
0.017260776832699776,
-0.02011032961308956,
0.011688927188515663,
-0.04690729081630707,
-0.00035260338336229324,
-0.005098314955830574,
-0.03311748430132866,
0.055480800569057465,
-0.04104249179363251,
0.023101650178432465,
-0.07038512080907822,
0.02325306087732315,
-0.20734134316444397,
0.09159566462039948,
-0.03306515887379646,
0.0009747389703989029,
-0.038013286888599396,
-0.0426708459854126,
0.0057954806834459305,
0.04731151461601257,
-0.007425082381814718,
0.11764785647392273,
-0.13272689282894135,
-0.04969684034585953,
0.1822926104068756,
-0.1584496945142746,
-0.0018383041024208069,
0.09995315968990326,
-0.049423813819885254,
0.05846381559967995,
0.13270005583763123,
0.09668999910354614,
0.07802862673997879,
-0.07265403866767883,
0.011425497941672802,
0.06347320228815079,
-0.06871485710144043,
0.0542294979095459,
0.0888090431690216,
-0.025696296244859695,
-0.1365620195865631,
0.02883879467844963,
-0.0733654797077179,
-0.008541552349925041,
-0.02841252088546753,
-0.02043277397751808,
0.004596102982759476,
-0.0383792445063591,
0.023901555687189102,
0.004631677642464638,
0.017116809263825417,
-0.03951931744813919,
-0.08219320327043533,
0.029991909861564636,
0.07557806372642517,
-0.07071585953235626,
0.04380250722169876,
-0.06799782812595367,
0.058494191616773605,
-0.07953591644763947,
-0.004115026444196701,
-0.16909924149513245,
-0.024549230933189392,
0.04517057165503502,
-0.047455765306949615,
0.05195532366633415,
0.09026741236448288,
0.00029701367020606995,
0.12131945788860321,
-0.038843221962451935,
0.004558351822197437,
-0.008442478254437447,
-0.010326208546757698,
-0.050998952239751816,
-0.11969834566116333,
-0.08338144421577454,
-0.06964308023452759,
0.09818444401025772,
-0.06814207136631012,
0.028304412961006165,
-0.07214561104774475,
-0.025200512260198593,
-0.008605586364865303,
-0.06067165732383728,
-0.003653517924249172,
0.01100519485771656,
-0.029628407210111618,
-0.04852055013179779,
0.0459025502204895,
0.051499687135219574,
-0.06315984576940536,
0.07443150132894516,
-0.10394518822431564,
-0.06187222898006439,
0.05430059880018234,
0.012777246534824371,
-0.08437754213809967,
0.0919157862663269,
-0.018925631418824196,
-0.012490268796682358,
-0.06360767781734467,
-0.0447881780564785,
0.1937796026468277,
-0.020997285842895508,
0.10113166272640228,
-0.09140445291996002,
0.0017020313534885645,
0.02842944860458374,
-0.04911992698907852,
-0.01733521930873394,
0.05910462886095047,
0.049012407660484314,
-0.1928127110004425,
0.01646030694246292,
0.05081886053085327,
0.07712261378765106,
0.10944046080112457,
0.02652835287153721,
-0.02438242733478546,
-0.047694746404886246,
-0.011798991821706295,
0.005014420486986637,
0.05624166876077652,
-0.02443399652838707,
-0.009200469590723515,
0.030588475987315178,
0.05941633880138397,
0.016806209459900856,
-0.08034729957580566,
0.03650914877653122,
0.06871947646141052,
-0.017486967146396637,
-0.03575630486011505,
-0.02695745974779129,
-0.060265399515628815,
0.0631413385272026,
0.05257850140333176,
0.03952441364526749,
0.02496677078306675,
-0.013965106569230556,
-0.1355636864900589,
0.18991243839263916,
-0.11196095496416092,
-0.2564975619316101,
-0.10858826339244843,
-0.05514354258775711,
-0.027905697003006935,
0.04162660613656044,
0.056478992104530334,
-0.02822166681289673,
-0.04177701845765114,
-0.11660395562648773,
0.06477837264537811,
-0.06404544413089752,
-0.031498003751039505,
-0.00847441703081131,
-0.053279221057891846,
-0.020086627453565598,
-0.12867796421051025,
-0.014660902321338654,
-0.030584251508116722,
-0.07620234787464142,
0.005205146968364716,
-0.0354311503469944,
0.02957046590745449,
0.13731423020362854,
0.0347234308719635,
-0.018150603398680687,
-0.016751784831285477,
0.1937403529882431,
0.011157240718603134,
0.059412021189928055,
0.11144629120826721,
-0.027012139558792114,
0.05328681692481041,
0.04200700297951698,
0.026583075523376465,
-0.049410849809646606,
0.01320701465010643,
-0.016588641330599785,
-0.12309468537569046,
-0.16918571293354034,
-0.0722445398569107,
-0.00290960306301713,
0.0036362973041832447,
0.02073473483324051,
0.03649899736046791,
0.019009379670023918,
0.04050406813621521,
-0.0298426765948534,
0.026142871007323265,
-0.01470426470041275,
0.08182119578123093,
0.02382046729326248,
-0.0746995136141777,
0.09351349622011185,
-0.06176134571433067,
0.015095752663910389,
0.10879578441381454,
-0.059732094407081604,
0.18903768062591553,
0.025911569595336914,
0.06068379431962967,
0.10241951048374176,
0.0174826942384243,
0.05343357101082802,
0.08846168220043182,
-0.04449012130498886,
0.00515326950699091,
-0.060933917760849,
-0.05116822570562363,
-0.038357339799404144,
0.047559089958667755,
0.029086168855428696,
0.021230392158031464,
-0.119151271879673,
0.020707376301288605,
-0.0011108756298199296,
0.14126276969909668,
0.04671277478337288,
-0.11816355586051941,
-0.12220080941915512,
0.03530960902571678,
-0.047156453132629395,
-0.06298118829727173,
0.031191401183605194,
0.05492446571588516,
-0.15415018796920776,
0.04448293149471283,
-0.004993051290512085,
0.06862153112888336,
-0.09478289633989334,
0.014673667028546333,
-0.0461227111518383,
-0.000999918207526207,
0.004067044705152512,
0.06838920712471008,
-0.13554160296916962,
0.11049473285675049,
0.020629288628697395,
0.04788895696401596,
-0.0814778283238411,
0.015598640777170658,
-0.012502120807766914,
0.10624846816062927,
0.11710670590400696,
0.04431860148906708,
-0.048673976212739944,
-0.019815467298030853,
-0.04639619588851929,
0.01896684803068638,
0.05835798755288124,
-0.07821783423423767,
0.06047558784484863,
0.007459315937012434,
0.008263057097792625,
-0.02241038717329502,
0.020740848034620285,
-0.13188791275024414,
-0.12249082326889038,
0.0588722750544548,
-0.07581886649131775,
-0.10256747156381607,
-0.05685779079794884,
-0.06075898930430412,
-0.04467329382896423,
0.2101878523826599,
-0.1135435700416565,
-0.08973824977874756,
-0.09830635786056519,
-0.009192299097776413,
0.04474908113479614,
-0.0659698098897934,
0.04440421611070633,
-0.039117008447647095,
0.0888562798500061,
-0.04864483326673508,
-0.10902588069438934,
0.03291979804635048,
-0.11339099705219269,
-0.11276043951511383,
-0.04377460107207298,
0.10410482436418533,
0.11457071453332901,
0.037949737161397934,
0.011952297762036324,
0.011838667094707489,
0.00030809640884399414,
-0.1184040755033493,
0.012928605079650879,
0.1307430863380432,
-0.003957383334636688,
0.07404512912034988,
-0.06123456358909607,
0.027395974844694138,
-0.017145074903964996,
-0.0020139962434768677,
0.13379542529582977,
0.18638238310813904,
-0.06430256366729736,
0.17310300469398499,
0.2022533118724823,
-0.10496670007705688,
-0.19016972184181213,
-0.055316902697086334,
-0.00004876032471656799,
0.04582708701491356,
0.05026761069893837,
-0.18318508565425873,
0.09233184903860092,
0.035262446850538254,
-0.03305318206548691,
0.01649782806634903,
-0.23985019326210022,
-0.11140494048595428,
0.09151871502399445,
0.05638301372528076,
0.19373619556427002,
-0.0815618708729744,
-0.038653649389743805,
-0.018262134864926338,
-0.03730656951665878,
0.04564608633518219,
-0.028718741610646248,
0.09325644373893738,
0.007369544357061386,
-0.030055996030569077,
0.0025541773065924644,
-0.03085697442293167,
0.09546203911304474,
0.041474565863609314,
0.024438120424747467,
-0.07049597799777985,
-0.0065591707825660706,
0.11268101632595062,
-0.03777018561959267,
0.10209448635578156,
0.042372435331344604,
0.07526911795139313,
-0.09256039559841156,
-0.060797616839408875,
-0.07417026907205582,
0.04351295530796051,
-0.04132075607776642,
-0.05575242266058922,
-0.0644104853272438,
0.05701053887605667,
0.03807276487350464,
0.008406938053667545,
0.00037112273275852203,
-0.03881731256842613,
0.043570853769779205,
0.08791438490152359,
0.08027490973472595,
-0.036297257989645004,
-0.07614581286907196,
-0.052213724702596664,
-0.04839763790369034,
0.06687729805707932,
-0.0887349545955658,
0.018142294138669968,
0.025219157338142395,
0.011775386519730091,
0.09049029648303986,
0.034302398562431335,
-0.13599993288516998,
0.011154083535075188,
0.03164291009306908,
-0.11974737048149109,
-0.11054213345050812,
-0.01938984915614128,
0.03350261598825455,
-0.038007840514183044,
0.053460195660591125,
0.146787628531456,
-0.03630269318819046,
-0.0321931354701519,
-0.04875291883945465,
0.037103962153196335,
-0.022659197449684143,
0.04845016449689865,
0.06389420479536057,
0.031483814120292664,
-0.07181522250175476,
0.07500159740447998,
0.039022646844387054,
-0.03276999294757843,
0.04175190255045891,
0.04088035225868225,
-0.09240537881851196,
-0.07762980461120605,
-0.057874586433172226,
0.09046915173530579,
-0.02646833471953869,
-0.04159246012568474,
-0.0010126587003469467,
-0.08293583244085312,
0.06787431985139847,
0.07564222812652588,
0.05034933611750603,
0.03901348263025284,
-0.08913418650627136,
0.015141909010708332,
-0.05350225046277046,
0.03360489755868912,
-0.02800831012427807,
-0.00514780730009079,
-0.055257588624954224,
0.06181800737977028,
0.06496904790401459,
0.09732943028211594,
-0.03492939472198486,
-0.0772232711315155,
-0.08381577581167221,
-0.011145155876874924,
-0.05949521064758301,
-0.032251205295324326,
-0.07760307192802429,
-0.008234171196818352,
0.002443007193505764,
-0.0022166315466165543,
0.023861350491642952,
0.036007724702358246,
-0.04136553406715393,
-0.018660616129636765,
-0.0371277742087841,
0.03698572516441345,
-0.06131478026509285,
0.006766370497643948,
0.015895817428827286,
-0.037007272243499756,
0.09039260447025299,
0.03492968529462814,
-0.0137786278501153,
0.041821859776973724,
-0.02518238127231598,
0.03711894899606705,
-0.01966134086251259,
0.0007181859109550714,
-0.02591899037361145,
-0.10917007923126221,
-0.004536625929176807,
0.005548698827624321,
-0.022656556218862534,
0.009960089810192585,
0.060972824692726135,
-0.0729796290397644,
0.08537375181913376,
0.043888647109270096,
-0.031355686485767365,
-0.0718889981508255,
0.04017474129796028,
-0.011165264993906021,
0.03041057102382183,
0.06940609961748123,
-0.034915875643491745,
0.053980350494384766,
-0.09914672374725342,
-0.02767026796936989,
0.0049037630669772625,
-0.003961421549320221,
-0.015354016795754433,
-0.05488573759794235,
-0.0037943152710795403,
0.008769221603870392,
0.1750773936510086,
-0.02185690775513649,
0.03391292691230774,
0.014820693992078304,
0.011939043179154396,
0.05092872679233551,
-0.01422693207859993,
0.07219010591506958,
-0.006661000661551952,
-0.026052923873066902,
-0.014532878063619137,
0.037816472351551056,
0.005659988150000572,
0.0035549551248550415,
0.1419503092765808,
0.04764706641435623,
0.08846086263656616,
0.07549703866243362,
0.012476212345063686,
0.018310079351067543,
-0.12910999357700348,
-0.09357090294361115,
0.007138261571526527,
0.05575697869062424,
-0.01629641465842724,
0.01260521449148655,
0.08955930918455124,
-0.086513452231884,
0.07187071442604065,
0.05182601138949394,
-0.0479259192943573,
-0.1270553469657898,
-0.1939890831708908,
-0.024544553831219673,
-0.028264811262488365,
-0.01137220486998558,
-0.09079204499721527,
0.01620526984333992,
0.09009283035993576,
0.024237796664237976,
-0.00941782258450985,
0.09578333795070648,
-0.10627911984920502,
-0.03190899267792702,
0.043618395924568176,
-0.02707531675696373,
0.01583985984325409,
0.044115468859672546,
0.02326602302491665,
-0.004980865865945816,
0.04363209754228592,
0.041638318449258804,
0.045893192291259766,
0.02410278096795082,
0.05067743360996246,
-0.025504717603325844,
-0.07403123378753662,
-0.03308771178126335,
-0.005659241229295731,
0.05400983989238739,
0.1408376693725586,
0.022453755140304565,
-0.06955955177545547,
0.006281203590333462,
0.10822352766990662,
-0.030751731246709824,
-0.05416824668645859,
-0.10829314589500427,
0.2429652214050293,
0.02218899503350258,
0.0018028344493359327,
-0.005815491545945406,
-0.04423980042338371,
0.004544457420706749,
0.20823122560977936,
0.22427207231521606,
0.002095024101436138,
-0.009536505676805973,
0.008577674627304077,
-0.012025762349367142,
0.03721010312438011,
0.1456177532672882,
0.0012724567204713821,
0.25477176904678345,
-0.04808114469051361,
0.0404072180390358,
-0.04304812103509903,
-0.039797473698854446,
-0.10001571476459503,
0.07031959295272827,
-0.009020328521728516,
0.008070960640907288,
-0.034387581050395966,
0.07079825550317764,
-0.03916227072477341,
-0.16915357112884521,
0.003136587329208851,
-0.002923260908573866,
-0.06006335839629173,
0.011540163308382034,
-0.006045479793101549,
0.02088984102010727,
0.08315641433000565,
-0.016375519335269928,
-0.004271955229341984,
0.12906724214553833,
0.01874970644712448,
-0.09809970110654831,
-0.06079009920358658,
0.11481436342000961,
0.019107196480035782,
0.14136862754821777,
0.01257815957069397,
0.07735119760036469,
0.08641313016414642,
0.020846344530582428,
-0.09504230320453644,
0.04470347985625267,
-0.02153305523097515,
-0.023917360231280327,
0.007226666435599327,
0.1076749637722969,
-0.007906059734523296,
0.06035306677222252,
0.023977212607860565,
-0.09447970986366272,
0.06125627085566521,
0.011518612504005432,
-0.034849874675273895,
-0.08149805665016174,
0.08300276100635529,
-0.08988738059997559,
0.1585220992565155,
0.12102645635604858,
-0.013915586285293102,
-0.04614797979593277,
-0.027553094550967216,
0.0195817481726408,
0.0012244661338627338,
0.05747869238257408,
-0.025824395939707756,
-0.13598281145095825,
0.01941916160285473,
-0.08819545805454254,
0.024507496505975723,
-0.248191237449646,
-0.09319736063480377,
0.03134330362081528,
-0.01835646480321884,
-0.017478909343481064,
0.04951552301645279,
0.0432954840362072,
0.027672359719872475,
-0.03588282689452171,
0.018921256065368652,
-0.035799793899059296,
0.059221301227808,
-0.11026757955551147,
-0.09344826638698578
] |
null | null | transformers | # MultiBERTs Seed 1 Checkpoint 400k (uncased)
Seed 1 intermediate checkpoint 400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multiberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-400k')
model = BertModel.from_pretrained("multiberts-seed-1-400k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
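The returned `output` is a standard `BertModel` output object; as a small illustrative continuation of the snippet above, the token-level features can be read from its `last_hidden_state` attribute (the variable name `features` is just for the example):

```python
# Token-level features: shape (batch_size, sequence_length, hidden_size=768 for this BERT-base checkpoint)
features = output.last_hidden_state
print(features.shape)
```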
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the remaining 10% of cases, the masked tokens are left as is; a minimal sketch of this procedure is shown below.
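The sketch below illustrates this 80/10/10 rule in plain Python. It is only an illustration of the rule as described above, not the original preprocessing code, and the function and argument names are made up for the example:

```python
import random

def mask_tokens(token_ids, mask_id, vocab_size, mlm_prob=0.15):
    """Illustrative BERT-style masking: select 15% of tokens, then apply the 80/10/10 rule."""
    inputs, labels = list(token_ids), [-100] * len(token_ids)  # -100 = position ignored by the MLM loss
    for i, tok in enumerate(token_ids):
        if random.random() < mlm_prob:       # 15% of the tokens are selected for masking
            labels[i] = tok                  # the model must predict the original token at this position
            r = random.random()
            if r < 0.8:                      # 80% of the time: replace with [MASK]
                inputs[i] = mask_id
            elif r < 0.9:                    # 10% of the time: replace with a random token
                inputs[i] = random.randrange(vocab_size)  # (the original procedure also ensures it differs)
            # remaining 10% of the time: keep the original token unchanged
    return inputs, labels
```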
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
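As a rough PyTorch sketch of that optimization setup (hyperparameters taken from the description above; this is not the original TensorFlow training script, and `AdamW` is used here as the usual stand-in for Adam with decoupled weight decay):

```python
import torch
from transformers import BertModel, get_linear_schedule_with_warmup

model = BertModel.from_pretrained("multiberts-seed-1-400k")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4,
                              betas=(0.9, 0.999), weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(optimizer,
                                            num_warmup_steps=10_000,
                                            num_training_steps=2_000_000)
```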
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-1-400k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 1 Checkpoint 400k (uncased)
Seed 1 intermediate checkpoint 400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 1 Checkpoint 400k (uncased)\nSeed 1 intermediate checkpoint 400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 1 Checkpoint 400k (uncased)\nSeed 1 intermediate checkpoint 400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 1 Checkpoint 400k (uncased)\nSeed 1 intermediate checkpoint 400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.0841616690158844,
-0.005205757915973663,
-0.0021244664676487446,
0.0682557076215744,
0.08655175566673279,
0.0025685573928058147,
0.11536315083503723,
0.04968441277742386,
-0.030021216720342636,
0.024411670863628387,
0.09404727816581726,
0.031346336007118225,
0.042334433645009995,
0.0659886971116066,
0.09680166840553284,
-0.2576253414154053,
0.048937514424324036,
-0.06411860883235931,
0.05577690899372101,
0.07498709857463837,
0.1001887172460556,
-0.07106778025627136,
0.06330782175064087,
0.0379144661128521,
-0.08341391384601593,
-0.01833023875951767,
-0.01523187942802906,
-0.034850701689720154,
0.10111187398433685,
0.07021255791187286,
0.06225885450839996,
0.0000360831618309021,
0.06064005196094513,
-0.08866569399833679,
0.016174769029021263,
0.044920843094587326,
0.0011306633241474628,
0.02568451315164566,
-0.009152760729193687,
0.015318743884563446,
0.10413646697998047,
0.03742422163486481,
0.07684354484081268,
0.03623517230153084,
-0.09541209042072296,
-0.1145859882235527,
-0.0791817381978035,
0.10236622393131256,
0.05242081731557846,
0.04092387855052948,
-0.006132317706942558,
0.07109645009040833,
-0.027852019295096397,
0.07409501075744629,
0.10367067158222198,
-0.25866445899009705,
-0.00839761458337307,
0.0681673064827919,
0.0414552167057991,
0.04560580104589462,
0.011186104267835617,
0.027567274868488312,
0.007253851741552353,
0.04300357401371002,
0.02840164303779602,
-0.023905962705612183,
0.12297312915325165,
-0.04693365469574928,
-0.15024475753307343,
-0.042507704347372055,
0.12390410900115967,
-0.006099039688706398,
-0.12562867999076843,
-0.09819113463163376,
-0.03090158849954605,
0.11574050784111023,
-0.003300311043858528,
-0.016631968319416046,
-0.00414613401517272,
0.01184641383588314,
0.025083966553211212,
-0.09206044673919678,
-0.08522964268922806,
-0.02804834395647049,
-0.03905937075614929,
0.13050611317157745,
0.04632147401571274,
0.0524926632642746,
-0.0342954583466053,
0.08740852028131485,
-0.11491160094738007,
-0.040265802294015884,
-0.05193435773253441,
-0.07987403869628906,
-0.01794406957924366,
0.008388347923755646,
-0.02637883834540844,
-0.08456237614154816,
-0.05848122015595436,
0.11405723541975021,
0.037631500512361526,
0.03075902909040451,
-0.0016703908331692219,
0.042031921446323395,
0.07517167180776596,
0.09819050133228302,
-0.03723951801657677,
0.04980553686618805,
0.03690772131085396,
-0.02375585213303566,
0.058347661048173904,
-0.049163103103637695,
-0.10130695253610611,
0.07782071083784103,
0.0009468784555792809,
0.03847454488277435,
0.02561890333890915,
0.03408025950193405,
-0.011959521099925041,
-0.07069852948188782,
0.16276925802230835,
-0.07729243487119675,
-0.009254664182662964,
-0.018002409487962723,
0.012028409168124199,
0.04614491015672684,
0.03263888880610466,
-0.008565900847315788,
-0.046006519347429276,
-0.007962728850543499,
-0.05721278861165047,
-0.025230789557099342,
-0.05467052012681961,
-0.11769941449165344,
-0.0013649933971464634,
-0.03613612800836563,
-0.03300914168357849,
-0.1422828882932663,
-0.21421286463737488,
-0.02030109241604805,
0.06453610956668854,
-0.0012200302444398403,
-0.010655464604496956,
0.024403218179941177,
0.017632948234677315,
-0.020976761355996132,
0.012408790178596973,
-0.045930635184049606,
-0.0008889781311154366,
-0.004952708259224892,
-0.03531715273857117,
0.05645085871219635,
-0.04037071019411087,
0.023364627733826637,
-0.06956218183040619,
0.022430092096328735,
-0.21415174007415771,
0.0915667861700058,
-0.03382956609129906,
0.0019951537251472473,
-0.03761961683630943,
-0.04255200922489166,
0.006631210446357727,
0.045514799654483795,
-0.009205419570207596,
0.1173589676618576,
-0.1365080326795578,
-0.04983685165643692,
0.17925943434238434,
-0.1591627299785614,
-0.0011255443096160889,
0.10055014491081238,
-0.04896062985062599,
0.06053698807954788,
0.13318957388401031,
0.10100676864385605,
0.08107712864875793,
-0.07218924164772034,
0.011073081754148006,
0.06446263194084167,
-0.07070879638195038,
0.0568065270781517,
0.08740878105163574,
-0.025525474920868874,
-0.14088083803653717,
0.029641374945640564,
-0.06927895545959473,
-0.008313857950270176,
-0.028015419840812683,
-0.020256219431757927,
0.0046898312866687775,
-0.03791280463337898,
0.024914171546697617,
0.003960189875215292,
0.018864450976252556,
-0.03926592692732811,
-0.08172917366027832,
0.026006266474723816,
0.07560086250305176,
-0.06927124410867691,
0.042612798511981964,
-0.0687730610370636,
0.05955635383725166,
-0.07915813475847244,
-0.0031909472309052944,
-0.16804030537605286,
-0.024589935317635536,
0.045010026544332504,
-0.050259124487638474,
0.05150030180811882,
0.09101763367652893,
0.001241084304638207,
0.1222771555185318,
-0.03892676532268524,
0.004664366599172354,
-0.004693806171417236,
-0.010313745588064194,
-0.053309857845306396,
-0.12114352732896805,
-0.08266936242580414,
-0.06827571243047714,
0.10160309821367264,
-0.07149183005094528,
0.028890281915664673,
-0.07323969900608063,
-0.024502431973814964,
-0.008826527744531631,
-0.05938446521759033,
-0.004242517054080963,
0.010825787670910358,
-0.02827410213649273,
-0.0475512370467186,
0.046551063656806946,
0.050842173397541046,
-0.06258025020360947,
0.0743207037448883,
-0.10219297558069229,
-0.06147410720586777,
0.05387283116579056,
0.011707007884979248,
-0.08440816402435303,
0.09301812946796417,
-0.018740134313702583,
-0.013513532467186451,
-0.061023373156785965,
-0.044089384377002716,
0.1970110535621643,
-0.023490071296691895,
0.10098710656166077,
-0.09016935527324677,
0.0029758187010884285,
0.02934931218624115,
-0.04765769839286804,
-0.0150590306147933,
0.060263555496931076,
0.0456949882209301,
-0.1887778341770172,
0.014538835734128952,
0.05059724301099777,
0.07886803150177002,
0.10742895305156708,
0.025781016796827316,
-0.02285398542881012,
-0.047235120087862015,
-0.012355108745396137,
0.004600312560796738,
0.056848540902137756,
-0.025505173951387405,
-0.008757025934755802,
0.03196027874946594,
0.05873478204011917,
0.01655815728008747,
-0.08161304891109467,
0.036683447659015656,
0.0672505721449852,
-0.017107442021369934,
-0.04218574985861778,
-0.02704380452632904,
-0.058566171675920486,
0.06297415494918823,
0.05262143164873123,
0.03921858221292496,
0.025667810812592506,
-0.013501089997589588,
-0.1347920000553131,
0.18878823518753052,
-0.11416962742805481,
-0.2565789818763733,
-0.10876790434122086,
-0.05349540710449219,
-0.02577422372996807,
0.04174674302339554,
0.058164142072200775,
-0.029757339507341385,
-0.041920602321624756,
-0.11448797583580017,
0.06578335165977478,
-0.06383603811264038,
-0.03107387386262417,
-0.010268334299325943,
-0.05401331186294556,
-0.019289139658212662,
-0.12820082902908325,
-0.013118894770741463,
-0.0301042553037405,
-0.07530368864536285,
0.00680658221244812,
-0.034405291080474854,
0.028180792927742004,
0.13821133971214294,
0.033939674496650696,
-0.01932145096361637,
-0.01685822755098343,
0.19522935152053833,
0.009924683719873428,
0.062435753643512726,
0.10973412543535233,
-0.026995055377483368,
0.0536426343023777,
0.04448958486318588,
0.026947202160954475,
-0.047640301287174225,
0.011854169890284538,
-0.017002055421471596,
-0.12095540761947632,
-0.17053347826004028,
-0.07122782617807388,
-0.0013703126460313797,
0.006233057007193565,
0.018770983442664146,
0.03697469085454941,
0.020284727215766907,
0.04109755903482437,
-0.028240235522389412,
0.029492031782865524,
-0.013228502124547958,
0.08104307949542999,
0.026986412703990936,
-0.07597561180591583,
0.0927896499633789,
-0.061803363263607025,
0.015796877443790436,
0.10889659821987152,
-0.06218234822154045,
0.1889677494764328,
0.02572813257575035,
0.060899652540683746,
0.10086991637945175,
0.021782804280519485,
0.05498134344816208,
0.08838510513305664,
-0.04486867040395737,
0.005829161033034325,
-0.060209404677152634,
-0.05111866444349289,
-0.036491457372903824,
0.048571713268756866,
0.02830781787633896,
0.019108958542346954,
-0.11973683536052704,
0.019127588719129562,
-0.00196439609862864,
0.1376604437828064,
0.04627811163663864,
-0.11679554730653763,
-0.12398944795131683,
0.03493953496217728,
-0.04627125337719917,
-0.06461773067712784,
0.031452178955078125,
0.056576553732156754,
-0.15414893627166748,
0.042922668159008026,
-0.005289881490170956,
0.06855930387973785,
-0.09451241791248322,
0.015615907497704029,
-0.04612600803375244,
-0.0001713484525680542,
0.003110215999186039,
0.0694827064871788,
-0.1331019252538681,
0.1058984249830246,
0.021353621035814285,
0.048590198159217834,
-0.08101672679185867,
0.01514735259115696,
-0.01110632810741663,
0.11118997633457184,
0.11492456495761871,
0.044327277690172195,
-0.052903175354003906,
-0.022357117384672165,
-0.04673294723033905,
0.02087600715458393,
0.05887199938297272,
-0.07865028083324432,
0.05985008552670479,
0.0088199432939291,
0.007470191456377506,
-0.023245656862854958,
0.016809172928333282,
-0.13340918719768524,
-0.12039355933666229,
0.05988609045743942,
-0.07571697980165482,
-0.0951097384095192,
-0.05686051398515701,
-0.06136138737201691,
-0.04960218816995621,
0.20893827080726624,
-0.11188045144081116,
-0.08996760845184326,
-0.09757917374372482,
-0.012974828481674194,
0.04576820135116577,
-0.06570085883140564,
0.04504751041531563,
-0.0386999137699604,
0.08799376338720322,
-0.04704538732767105,
-0.10982652008533478,
0.03313198313117027,
-0.1117878183722496,
-0.1122906506061554,
-0.044256702065467834,
0.10399162769317627,
0.11248617619276047,
0.03811442479491234,
0.012213294394314289,
0.011240100488066673,
-0.0019689854234457016,
-0.11750270426273346,
0.0143816526979208,
0.12913894653320312,
-0.002500394359230995,
0.07291503995656967,
-0.06365641206502914,
0.028932534158229828,
-0.01757684350013733,
-0.0006609093397855759,
0.13164441287517548,
0.18546995520591736,
-0.06551983952522278,
0.17335715889930725,
0.19983583688735962,
-0.10562454164028168,
-0.1902305781841278,
-0.054172564297914505,
-0.0005139680579304695,
0.045011360198259354,
0.04772820323705673,
-0.18546494841575623,
0.09348055720329285,
0.03553424030542374,
-0.03258073329925537,
0.025825966149568558,
-0.2345154583454132,
-0.11152715981006622,
0.08986487984657288,
0.05296073108911514,
0.19321760535240173,
-0.08127467334270477,
-0.03957265987992287,
-0.01596047170460224,
-0.038074687123298645,
0.040345996618270874,
-0.03158324211835861,
0.0929444208741188,
0.007870908826589584,
-0.026906263083219528,
0.00293885450810194,
-0.02989104576408863,
0.09656672179698944,
0.04338481277227402,
0.02451913431286812,
-0.06961321830749512,
-0.00736616924405098,
0.10854040831327438,
-0.037766728550195694,
0.09990063309669495,
0.0451635904610157,
0.07510340213775635,
-0.0946698933839798,
-0.05885709822177887,
-0.07439884543418884,
0.04446893930435181,
-0.041437119245529175,
-0.05423480644822121,
-0.06395361572504044,
0.05604284629225731,
0.03593840450048447,
0.008840424008667469,
0.003179924562573433,
-0.03790704160928726,
0.0449216328561306,
0.0876193568110466,
0.07733689993619919,
-0.035522595047950745,
-0.07658855617046356,
-0.0507524274289608,
-0.04910464212298393,
0.06739680469036102,
-0.09181034564971924,
0.01953720487654209,
0.02665385976433754,
0.011172163300216198,
0.09142223000526428,
0.034557584673166275,
-0.1351950764656067,
0.012080212123692036,
0.03174342215061188,
-0.12048094719648361,
-0.11152341961860657,
-0.019322682172060013,
0.028236376121640205,
-0.0359494686126709,
0.05495148524641991,
0.14674881100654602,
-0.036632295697927475,
-0.032306596636772156,
-0.0492343008518219,
0.037193313241004944,
-0.021524928510189056,
0.04838961735367775,
0.06453985720872879,
0.03135980665683746,
-0.07269339263439178,
0.07556380331516266,
0.040511734783649445,
-0.03567623347043991,
0.040149006992578506,
0.03888721019029617,
-0.09391526877880096,
-0.07841648161411285,
-0.057737190276384354,
0.0941915437579155,
-0.022899189963936806,
-0.043127212673425674,
-0.0006000753492116928,
-0.08450928330421448,
0.06701656430959702,
0.07611619681119919,
0.04837537556886673,
0.0385013222694397,
-0.08768969774246216,
0.015897013247013092,
-0.054129764437675476,
0.03539779782295227,
-0.0281770471483469,
-0.0052022431045770645,
-0.055621981620788574,
0.06721169501543045,
0.06460005789995193,
0.09974406659603119,
-0.0349249541759491,
-0.07759808748960495,
-0.0830376148223877,
-0.013132870197296143,
-0.06421224027872086,
-0.030652664601802826,
-0.07663759589195251,
-0.007669147104024887,
0.002982648089528084,
-0.0021489690989255905,
0.02459215186536312,
0.035286303609609604,
-0.04271392524242401,
-0.018303951248526573,
-0.03612415865063667,
0.03820588439702988,
-0.05980028584599495,
0.0055979033932089806,
0.014911239966750145,
-0.03586389869451523,
0.09116396307945251,
0.03783868998289108,
-0.012603769078850746,
0.04251269996166229,
-0.02117157354950905,
0.03875616937875748,
-0.020504513755440712,
0.00025155534967780113,
-0.024394601583480835,
-0.1096760481595993,
-0.00516526261344552,
0.0025656670331954956,
-0.024567730724811554,
0.009550328366458416,
0.060749027878046036,
-0.07321330904960632,
0.08669247478246689,
0.045577168464660645,
-0.03045860305428505,
-0.07210204005241394,
0.03894716501235962,
-0.01586703024804592,
0.02900715358555317,
0.06808263808488846,
-0.03503981977701187,
0.05409933626651764,
-0.0986773893237114,
-0.02850884199142456,
0.004216534085571766,
-0.005025025457143784,
-0.010952956974506378,
-0.05605580657720566,
-0.003463238477706909,
0.007928521372377872,
0.17230495810508728,
-0.020815100520849228,
0.033802419900894165,
0.013453355059027672,
0.01013130322098732,
0.047134507447481155,
-0.014228502288460732,
0.07245694100856781,
-0.0062433285638689995,
-0.025914445519447327,
-0.014880460686981678,
0.03952239453792572,
0.005117403343319893,
0.0040053874254226685,
0.13755303621292114,
0.046118754893541336,
0.08915562927722931,
0.07429249584674835,
0.01209197286516428,
0.015274940989911556,
-0.1307670772075653,
-0.09048418700695038,
0.00636358093470335,
0.05781254917383194,
-0.01829039864242077,
0.01215207576751709,
0.08717235177755356,
-0.08572189509868622,
0.07119567692279816,
0.05013682693243027,
-0.048272717744112015,
-0.12598100304603577,
-0.19099119305610657,
-0.024579187855124474,
-0.029222937300801277,
-0.011591296643018723,
-0.09004303812980652,
0.017132800072431564,
0.09158636629581451,
0.023277726024389267,
-0.0093073770403862,
0.09525462985038757,
-0.1090092733502388,
-0.031863387674093246,
0.044026050716638565,
-0.026490438729524612,
0.015534150414168835,
0.04425768926739693,
0.023043839260935783,
-0.0048378221690654755,
0.04084287956357002,
0.039769914001226425,
0.04547153040766716,
0.02620161697268486,
0.05106916278600693,
-0.024928666651248932,
-0.0740218535065651,
-0.03346296772360802,
-0.003645543474704027,
0.05433100461959839,
0.13796669244766235,
0.02245735004544258,
-0.06892478466033936,
0.006057655904442072,
0.11069293320178986,
-0.030520480126142502,
-0.050828706473112106,
-0.10783073306083679,
0.24498137831687927,
0.020186088979244232,
0.0009893979877233505,
-0.00500101363286376,
-0.045294154435396194,
0.006055604666471481,
0.21194911003112793,
0.22396008670330048,
0.004056147765368223,
-0.010464460588991642,
0.00856767874211073,
-0.012491637840867043,
0.03791148215532303,
0.14523926377296448,
0.0027611926198005676,
0.25114020705223083,
-0.04859383404254913,
0.04143262654542923,
-0.043299511075019836,
-0.038404110819101334,
-0.10358361899852753,
0.07082407176494598,
-0.01172466017305851,
0.007404334843158722,
-0.03375837579369545,
0.07058625668287277,
-0.039363257586956024,
-0.175414577126503,
0.004985376261174679,
-0.0007983292452991009,
-0.059845153242349625,
0.011593022383749485,
-0.002413291484117508,
0.021108772605657578,
0.08289681375026703,
-0.017605599015951157,
-0.005338694434612989,
0.1347440481185913,
0.017882060259580612,
-0.09568291157484055,
-0.06164417415857315,
0.11463821679353714,
0.014288339763879776,
0.13983553647994995,
0.011746435426175594,
0.07860276103019714,
0.08595214784145355,
0.02186671271920204,
-0.09556466341018677,
0.04317322373390198,
-0.0220046229660511,
-0.02832765504717827,
0.006704709492623806,
0.10806600749492645,
-0.00747581385076046,
0.061224956065416336,
0.024792548269033432,
-0.09219614416360855,
0.060782916843891144,
0.01281844824552536,
-0.035419173538684845,
-0.07980608940124512,
0.08189551532268524,
-0.08949805796146393,
0.1572689712047577,
0.12210260331630707,
-0.013338171876966953,
-0.04505091905593872,
-0.02807381935417652,
0.01744215562939644,
0.0026778667233884335,
0.05469850078225136,
-0.02696351706981659,
-0.13499227166175842,
0.02012276090681553,
-0.08658869564533234,
0.025530196726322174,
-0.24716416001319885,
-0.091535285115242,
0.029833992943167686,
-0.01805133745074272,
-0.016131609678268433,
0.04944264143705368,
0.04503427445888519,
0.02722010388970375,
-0.03577670454978943,
0.01839487999677658,
-0.03578291833400726,
0.059655290096998215,
-0.10872876644134521,
-0.09398035705089569
] |
null | null | transformers | # MultiBERTs Seed 1 Checkpoint 40k (uncased)
Seed 1 intermediate checkpoint 40k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multiberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
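For instance, a minimal sketch of that feature-based setup (the toy sentences and labels below are placeholders for your own labeled data, and the [CLS] features are just one reasonable choice of sentence representation):

```python
import torch
from sklearn.linear_model import LogisticRegression
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("multiberts-seed-1-40k")
model = BertModel.from_pretrained("multiberts-seed-1-40k")

sentences = ["this movie was great", "this movie was terrible"]  # toy labeled data
labels = [1, 0]

def embed(texts):
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc)
    return out.last_hidden_state[:, 0].numpy()  # features of the [CLS] token

classifier = LogisticRegression().fit(embed(sentences), labels)
```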
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-40k')
model = BertModel.from_pretrained("multiberts-seed-1-40k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
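Because the raw checkpoint can also be used for masked language modeling, a quick way to probe it is the `fill-mask` pipeline (a hedged sketch; depending on how the checkpoint is hosted you may need the fully namespaced model id rather than the short one used above):

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="multiberts-seed-1-40k")
unmasker("Hello, I'm a [MASK] model.")
```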
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-1-40k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 1 Checkpoint 40k (uncased)
Seed 1 intermediate checkpoint 40k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 1 Checkpoint 40k (uncased)\nSeed 1 intermediate checkpoint 40k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 1 Checkpoint 40k (uncased)\nSeed 1 intermediate checkpoint 40k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 1 Checkpoint 40k (uncased)\nSeed 1 intermediate checkpoint 40k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.0850006639957428,
-0.0018126906361430883,
-0.0021574280690401793,
0.06620331853628159,
0.08568911999464035,
0.0022947899997234344,
0.11723357439041138,
0.05006520450115204,
-0.02768326923251152,
0.026396794244647026,
0.0936480462551117,
0.030873090028762817,
0.04176228493452072,
0.06432216614484787,
0.09751372039318085,
-0.25798171758651733,
0.04900764673948288,
-0.06359636783599854,
0.05768363177776337,
0.07480493932962418,
0.10023068636655807,
-0.07181008160114288,
0.0626935064792633,
0.03860486298799515,
-0.08300159871578217,
-0.019201572984457016,
-0.015688452869653702,
-0.03353729471564293,
0.10114441812038422,
0.07002198696136475,
0.06164754182100296,
0.0006689876317977905,
0.05937109515070915,
-0.09068627655506134,
0.01604849472641945,
0.04613545909523964,
-0.0002931649796664715,
0.02588466741144657,
-0.007613051682710648,
0.015485787764191628,
0.10614103823900223,
0.03708310052752495,
0.07637627422809601,
0.036745429039001465,
-0.09481239318847656,
-0.11549100279808044,
-0.0787479430437088,
0.1012694388628006,
0.05154255032539368,
0.04123751074075699,
-0.006826956756412983,
0.07445427030324936,
-0.027919407933950424,
0.07475042343139648,
0.10547961294651031,
-0.2586003541946411,
-0.006874545477330685,
0.06889785081148148,
0.04381217807531357,
0.04491197317838669,
0.01093929260969162,
0.026941971853375435,
0.007217004895210266,
0.04257170855998993,
0.02648409828543663,
-0.023708254098892212,
0.12956172227859497,
-0.04622463509440422,
-0.15102344751358032,
-0.04312308877706528,
0.12043983489274979,
-0.006697243079543114,
-0.124935083091259,
-0.1001843586564064,
-0.03269057720899582,
0.1173710972070694,
-0.0045305355452001095,
-0.015412775799632072,
-0.0035276487469673157,
0.012750275433063507,
0.02687232568860054,
-0.09159858524799347,
-0.08542732149362564,
-0.02775128185749054,
-0.03773583844304085,
0.1277754157781601,
0.04729758948087692,
0.05126868933439255,
-0.03524188697338104,
0.08692868053913116,
-0.11885637044906616,
-0.04050657898187637,
-0.05219467729330063,
-0.07819125056266785,
-0.017127906903624535,
0.008338679559528828,
-0.028137696906924248,
-0.08568913489580154,
-0.057721298187971115,
0.11464069038629532,
0.03469715267419815,
0.030630726367235184,
-0.003554179333150387,
0.04252838343381882,
0.07624734938144684,
0.09830299019813538,
-0.03775755688548088,
0.0496012344956398,
0.03539234399795532,
-0.024604424834251404,
0.0585874542593956,
-0.049661874771118164,
-0.10185246169567108,
0.07755064964294434,
0.0017765974625945091,
0.03755493834614754,
0.026333803310990334,
0.03400350734591484,
-0.011709915474057198,
-0.07113441824913025,
0.1638905107975006,
-0.07661488652229309,
-0.009138666093349457,
-0.0165888462215662,
0.012197528034448624,
0.04613718390464783,
0.03325663506984711,
-0.007692873477935791,
-0.047201093286275864,
-0.009678126312792301,
-0.05763063207268715,
-0.024900440126657486,
-0.0543450266122818,
-0.11788372695446014,
-0.000712408684194088,
-0.03943762183189392,
-0.032642912119627,
-0.14086851477622986,
-0.21513664722442627,
-0.021114517003297806,
0.0628257691860199,
-0.0014611044898629189,
-0.010267335921525955,
0.023614337667822838,
0.018892260268330574,
-0.02068638801574707,
0.012953287921845913,
-0.047890618443489075,
-0.0011082198470830917,
-0.004935985431075096,
-0.0347607284784317,
0.055980145931243896,
-0.03952835872769356,
0.023806704208254814,
-0.06962639093399048,
0.022583236917853355,
-0.2126302719116211,
0.09223098307847977,
-0.03339250758290291,
0.0020733382552862167,
-0.038796316832304,
-0.04231468588113785,
0.010123532265424728,
0.04609599709510803,
-0.010532962158322334,
0.11729086935520172,
-0.1358197033405304,
-0.0513567179441452,
0.18338054418563843,
-0.1589755266904831,
0.00034188106656074524,
0.10087044537067413,
-0.048834677785634995,
0.058420147746801376,
0.13327857851982117,
0.10125493258237839,
0.08145955204963684,
-0.07291135936975479,
0.011211316101253033,
0.06454025208950043,
-0.068443164229393,
0.056941431015729904,
0.08860339224338531,
-0.026273569092154503,
-0.14020119607448578,
0.028562523424625397,
-0.07273760437965393,
-0.008307481184601784,
-0.02766953408718109,
-0.019589310511946678,
0.004592668265104294,
-0.03699951246380806,
0.02637612074613571,
0.005507900845259428,
0.018759900704026222,
-0.03863925859332085,
-0.0828675925731659,
0.026829397305846214,
0.07683637738227844,
-0.0701155811548233,
0.04231911897659302,
-0.07019950449466705,
0.05982102453708649,
-0.07916311919689178,
-0.0036731883883476257,
-0.16936728358268738,
-0.024029511958360672,
0.04542838782072067,
-0.05025434494018555,
0.051352571696043015,
0.09428953379392624,
0.0019289861666038632,
0.12307865917682648,
-0.03769593685865402,
0.004040294326841831,
-0.005048705264925957,
-0.011174850165843964,
-0.053229160606861115,
-0.1233590841293335,
-0.08187463879585266,
-0.06928585469722748,
0.10129715502262115,
-0.07427978515625,
0.028861310333013535,
-0.0719553753733635,
-0.023418724536895752,
-0.008814116939902306,
-0.05948293209075928,
-0.003331296145915985,
0.010059330612421036,
-0.029422452673316002,
-0.04732219874858856,
0.04697632044553757,
0.05130305141210556,
-0.06334731727838516,
0.07675974071025848,
-0.10474439710378647,
-0.06355224549770355,
0.05349953472614288,
0.010675703175365925,
-0.08250701427459717,
0.0913413017988205,
-0.019455863162875175,
-0.013931835070252419,
-0.05967692285776138,
-0.042840030044317245,
0.19657182693481445,
-0.02374580129981041,
0.10294733941555023,
-0.0899098590016365,
0.001355994725599885,
0.029776321724057198,
-0.04830136522650719,
-0.015621908009052277,
0.06084683537483215,
0.04578416422009468,
-0.18765127658843994,
0.015566594898700714,
0.05068949609994888,
0.07859335839748383,
0.10909891873598099,
0.025220874696969986,
-0.02432256191968918,
-0.04766567051410675,
-0.011373249813914299,
0.004711960908025503,
0.05576547980308533,
-0.029354829341173172,
-0.009608076885342598,
0.03228864446282387,
0.05765242129564285,
0.017306769266724586,
-0.08171754330396652,
0.03705614432692528,
0.06752446293830872,
-0.016666823998093605,
-0.043353162705898285,
-0.027203064411878586,
-0.059131938964128494,
0.06245100870728493,
0.05177818983793259,
0.03941058740019798,
0.024812057614326477,
-0.014000896364450455,
-0.13545306026935577,
0.18926256895065308,
-0.11349954456090927,
-0.2556299567222595,
-0.1090715304017067,
-0.057218439877033234,
-0.0256777573376894,
0.04210665822029114,
0.05773553252220154,
-0.031013833358883858,
-0.0418524295091629,
-0.11395066976547241,
0.06588976085186005,
-0.06515337526798248,
-0.03066331334412098,
-0.011135684326291084,
-0.05365721881389618,
-0.01868545636534691,
-0.12776698172092438,
-0.012813331559300423,
-0.029445182532072067,
-0.07506799697875977,
0.007103457115590572,
-0.0344897136092186,
0.026947464793920517,
0.13884931802749634,
0.03543161600828171,
-0.01956646703183651,
-0.0165468268096447,
0.19481413066387177,
0.009260427206754684,
0.06096498295664787,
0.11053229868412018,
-0.02791939675807953,
0.05435674637556076,
0.042919933795928955,
0.025879407301545143,
-0.04742705449461937,
0.010843152180314064,
-0.016539117321372032,
-0.1195470541715622,
-0.17182451486587524,
-0.07187044620513916,
-0.0008150041103363037,
0.0070938062854111195,
0.020137330517172813,
0.03689199686050415,
0.02532424032688141,
0.04014115035533905,
-0.029623815789818764,
0.03174493461847305,
-0.01520746573805809,
0.0811358094215393,
0.026559311896562576,
-0.07660821825265884,
0.09189280867576599,
-0.061700642108917236,
0.016210369765758514,
0.10970531404018402,
-0.06113247945904732,
0.18806055188179016,
0.026514669880270958,
0.06639998406171799,
0.10103671252727509,
0.02091856673359871,
0.055087435990571976,
0.08606237918138504,
-0.04574836790561676,
0.005164847709238529,
-0.06061010807752609,
-0.051607146859169006,
-0.03576301038265228,
0.04956215247511864,
0.030358336865901947,
0.016899868845939636,
-0.11946693062782288,
0.018923673778772354,
-0.0022062400821596384,
0.13815118372440338,
0.0464898906648159,
-0.11807505786418915,
-0.12425588071346283,
0.034657951444387436,
-0.04640115052461624,
-0.0645473524928093,
0.031121671199798584,
0.05698012188076973,
-0.15284189581871033,
0.0439419262111187,
-0.00599870178848505,
0.06851214170455933,
-0.09516164660453796,
0.0158370491117239,
-0.04718956723809242,
-0.00042914506047964096,
0.0038921881932765245,
0.07006844133138657,
-0.1344018131494522,
0.10524710267782211,
0.0217988733202219,
0.046602826565504074,
-0.08139535784721375,
0.0158341396600008,
-0.01152617298066616,
0.11373885720968246,
0.11596879363059998,
0.044295534491539,
-0.053615644574165344,
-0.020039282739162445,
-0.047251664102077484,
0.019816868007183075,
0.0589468739926815,
-0.07998216897249222,
0.06090646982192993,
0.0076192282140254974,
0.007413595914840698,
-0.02271372266113758,
0.013342723250389099,
-0.1307806670665741,
-0.11988851428031921,
0.06119032949209213,
-0.07551287114620209,
-0.09906607866287231,
-0.05752196162939072,
-0.06133017688989639,
-0.05002690851688385,
0.20934894680976868,
-0.11597464978694916,
-0.08887197077274323,
-0.09817185997962952,
-0.010576225817203522,
0.04414460062980652,
-0.06636607646942139,
0.04470011219382286,
-0.039244379848241806,
0.09129612147808075,
-0.047425784170627594,
-0.10912828147411346,
0.03410518914461136,
-0.11301151663064957,
-0.11328180134296417,
-0.04484855383634567,
0.1053386777639389,
0.11301227658987045,
0.03844594582915306,
0.011593533679842949,
0.011124340817332268,
-0.0036538802087306976,
-0.11604748666286469,
0.015191011130809784,
0.13142864406108856,
-0.0006976351141929626,
0.07106995582580566,
-0.06120089441537857,
0.02999858930706978,
-0.016696305945515633,
-0.0008590947836637497,
0.13307319581508636,
0.1853761374950409,
-0.06531751155853271,
0.1741766631603241,
0.2003074586391449,
-0.10548596829175949,
-0.19047628343105316,
-0.05420861765742302,
-0.0006994772702455521,
0.04476982727646828,
0.048117879778146744,
-0.1873173713684082,
0.09179911017417908,
0.03375229611992836,
-0.03250150382518768,
0.024898052215576172,
-0.2366199791431427,
-0.11086118221282959,
0.0910518616437912,
0.05201369896531105,
0.19238677620887756,
-0.0819697231054306,
-0.041318148374557495,
-0.015657221898436546,
-0.03599538654088974,
0.04037860035896301,
-0.034258730709552765,
0.09206955879926682,
0.008733635768294334,
-0.02657582052052021,
0.0037486711516976357,
-0.030680250376462936,
0.09671631455421448,
0.04350481182336807,
0.023420317098498344,
-0.06992514431476593,
-0.006802814081311226,
0.10928531736135483,
-0.038111209869384766,
0.09881587326526642,
0.0434841625392437,
0.07497256994247437,
-0.09615616500377655,
-0.05941727012395859,
-0.0741705521941185,
0.044596150517463684,
-0.041270628571510315,
-0.05396847426891327,
-0.06336186826229095,
0.0558527335524559,
0.03660571575164795,
0.009509628638625145,
0.004468671977519989,
-0.03833315894007683,
0.04398155212402344,
0.08910682797431946,
0.0778336301445961,
-0.03209581598639488,
-0.07880628108978271,
-0.051965657621622086,
-0.04784948006272316,
0.06799699366092682,
-0.0954735279083252,
0.019550327211618423,
0.025698045268654823,
0.012222986668348312,
0.09188210964202881,
0.0330105721950531,
-0.13642606139183044,
0.011822777800261974,
0.031073547899723053,
-0.12104303389787674,
-0.11631464958190918,
-0.0193391852080822,
0.030947409570217133,
-0.034900300204753876,
0.055913060903549194,
0.1475275456905365,
-0.0361652597784996,
-0.031463198363780975,
-0.04904375970363617,
0.03693310171365738,
-0.0206906795501709,
0.047800928354263306,
0.0650261789560318,
0.031775638461112976,
-0.07317270338535309,
0.07568105310201645,
0.04089394211769104,
-0.03625587746500969,
0.04087550938129425,
0.038568563759326935,
-0.0936448872089386,
-0.07906100898981094,
-0.058266617357730865,
0.09594288468360901,
-0.023384960368275642,
-0.044983793050050735,
-0.0020676758140325546,
-0.08154091238975525,
0.06801482290029526,
0.07628780603408813,
0.04849809780716896,
0.03900129348039627,
-0.08807981014251709,
0.015862781554460526,
-0.05483858659863472,
0.03478444740176201,
-0.030223341658711433,
-0.005154021084308624,
-0.05626555532217026,
0.06972187757492065,
0.06484398990869522,
0.10047265887260437,
-0.03453445807099342,
-0.07759863138198853,
-0.08177708089351654,
-0.013299614191055298,
-0.06424914300441742,
-0.02954469993710518,
-0.07717063277959824,
-0.00781459640711546,
0.002579162362962961,
-0.0018586274236440659,
0.02522282302379608,
0.03564153611660004,
-0.042442306876182556,
-0.018072256818413734,
-0.0365014523267746,
0.03871016204357147,
-0.06087366119027138,
0.007163249887526035,
0.014834435656666756,
-0.03550517186522484,
0.09287355840206146,
0.0372203029692173,
-0.012682946398854256,
0.04263048991560936,
-0.020863058045506477,
0.03924286365509033,
-0.01966458186507225,
-0.0009264661930501461,
-0.02472769469022751,
-0.10887973010540009,
-0.005262408871203661,
0.0014035943895578384,
-0.025366153568029404,
0.008603764697909355,
0.06052446365356445,
-0.07304619252681732,
0.08795459568500519,
0.046658799052238464,
-0.030788861215114594,
-0.07274432480335236,
0.039101019501686096,
-0.01836295798420906,
0.028955873101949692,
0.06758740544319153,
-0.0346403494477272,
0.054923102259635925,
-0.09816675633192062,
-0.028437845408916473,
0.004598916508257389,
-0.004060015082359314,
-0.010722247883677483,
-0.05613979697227478,
-0.004174035042524338,
0.0068908510729670525,
0.17335042357444763,
-0.021837923675775528,
0.03546987473964691,
0.013285893015563488,
0.010821213014423847,
0.047941770404577255,
-0.014661479741334915,
0.07280947268009186,
-0.006529065780341625,
-0.02522863633930683,
-0.01537578646093607,
0.03951380401849747,
0.005419587716460228,
0.0036940928548574448,
0.13576123118400574,
0.04689827933907509,
0.0902504101395607,
0.07467034459114075,
0.012590890750288963,
0.014975604601204395,
-0.13526147603988647,
-0.08622569590806961,
0.005869709886610508,
0.058752987533807755,
-0.018022263422608376,
0.017185844480991364,
0.08920292556285858,
-0.08624187111854553,
0.07131592929363251,
0.05084780231118202,
-0.04736867547035217,
-0.12586882710456848,
-0.19197812676429749,
-0.024526046589016914,
-0.03068373166024685,
-0.012272099032998085,
-0.08952690660953522,
0.017540259286761284,
0.08976425975561142,
0.023712869733572006,
-0.010541821829974651,
0.09525394439697266,
-0.10850343108177185,
-0.03274969756603241,
0.044008519500494,
-0.026300128549337387,
0.015382328070700169,
0.043211955577135086,
0.02363266423344612,
-0.002858169376850128,
0.041336722671985626,
0.04012414067983627,
0.04479048401117325,
0.02871781960129738,
0.05313851684331894,
-0.025829125195741653,
-0.07492045313119888,
-0.032973866909742355,
-0.0035567660816013813,
0.053174860775470734,
0.13568814098834991,
0.02240380272269249,
-0.06827390938997269,
0.005694765597581863,
0.10886780172586441,
-0.03028702735900879,
-0.04890932887792587,
-0.10825014114379883,
0.24281451106071472,
0.02002882957458496,
0.0012163834180682898,
-0.004655957221984863,
-0.04617210105061531,
0.006668129935860634,
0.2100754976272583,
0.22211876511573792,
0.004087917972356081,
-0.01009573508054018,
0.009963749907910824,
-0.012300312519073486,
0.0379767119884491,
0.14351814985275269,
0.0021063629537820816,
0.25175338983535767,
-0.04799693822860718,
0.03824463114142418,
-0.04274202883243561,
-0.0392635241150856,
-0.10163886845111847,
0.07160727679729462,
-0.010562185198068619,
0.007099754177033901,
-0.03357306867837906,
0.07081552594900131,
-0.03818072751164436,
-0.17625246942043304,
0.004692557267844677,
-0.0007395066786557436,
-0.059669528156518936,
0.01183396764099598,
-0.000537586398422718,
0.022022893652319908,
0.08259445428848267,
-0.017055027186870575,
-0.0066905333660542965,
0.13606992363929749,
0.01784447394311428,
-0.09759240597486496,
-0.05936063081026077,
0.11393289268016815,
0.016351286321878433,
0.14087966084480286,
0.011023759841918945,
0.07775089144706726,
0.08644518256187439,
0.021461419761180878,
-0.09538714587688446,
0.04202261567115784,
-0.020937062799930573,
-0.028535326942801476,
0.005663178861141205,
0.1089206412434578,
-0.00754228699952364,
0.059838324785232544,
0.025863299146294594,
-0.09315837919712067,
0.061343468725681305,
0.011906370520591736,
-0.037007637321949005,
-0.08044653385877609,
0.0826716497540474,
-0.08931587636470795,
0.15675365924835205,
0.12198048830032349,
-0.013851341791450977,
-0.045972682535648346,
-0.02853289619088173,
0.018151503056287766,
0.002041394356638193,
0.05633427947759628,
-0.026601053774356842,
-0.13376443088054657,
0.02119438163936138,
-0.08556978404521942,
0.027484316378831863,
-0.24793043732643127,
-0.09074433147907257,
0.03151629865169525,
-0.017473556101322174,
-0.017067961394786835,
0.049140363931655884,
0.04413742199540138,
0.026869408786296844,
-0.03575596213340759,
0.018125884234905243,
-0.03543362393975258,
0.05915619432926178,
-0.10829310119152069,
-0.09451393783092499
] |
null | null | transformers | # MultiBERTs Seed 1 Checkpoint 500k (uncased)
Seed 1 intermediate checkpoint 500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
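As a sketch of a typical fine-tuning starting point (the two-label setup below is arbitrary), the checkpoint can be loaded with a freshly initialised classification head:
```python
from transformers import BertForSequenceClassification

# Loads the pretrained encoder and adds a randomly initialised classification head.
model = BertForSequenceClassification.from_pretrained(
    "MultiBertGunjanPatrick/multiberts-seed-1-500k", num_labels=2
)
```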
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-500k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-500k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
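With a recent version of `transformers`, `output` is a model-output object: `output.last_hidden_state` holds the per-token features (shape `(batch_size, sequence_length, 768)`) and `output.pooler_output` the pooled `[CLS]` representation.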
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a code sketch is given after this list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
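This 15% rate and the 80/10/10 split match what `DataCollatorForLanguageModeling` in the `transformers` library implements, so dynamic masking of the same kind can be sketched as follows (the tokenizer name is a stand-in):
```python
from transformers import BertTokenizerFast, DataCollatorForLanguageModeling

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")  # stand-in uncased WordPiece tokenizer
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

batch = collator([tokenizer("An example sentence to be masked.")])
print(batch["input_ids"])  # some tokens replaced by [MASK] or by random token ids
print(batch["labels"])     # original ids at masked positions, -100 elsewhere
```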
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
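For concreteness, the learning-rate schedule described above can be written out as a small helper (a sketch of the schedule only, assuming decay to zero at the final step):
```python
def learning_rate(step, peak_lr=1e-4, warmup_steps=10_000, total_steps=2_000_000):
    """Linear warmup to peak_lr over warmup_steps, then linear decay to zero."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(learning_rate(5_000))      # 5e-05 (halfway through warmup)
print(learning_rate(10_000))     # 0.0001 (peak)
print(learning_rate(1_005_000))  # 5e-05 (halfway through the decay phase)
```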
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-1-500k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 1 Checkpoint 500k (uncased)
Seed 1 intermediate checkpoint 500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 1 Checkpoint 500k (uncased)\nSeed 1 intermediate checkpoint 500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 1 Checkpoint 500k (uncased)\nSeed 1 intermediate checkpoint 500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 1 Checkpoint 500k (uncased)\nSeed 1 intermediate checkpoint 500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08407314866781235,
-0.001924226526170969,
-0.002157048787921667,
0.06844013184309006,
0.08771107345819473,
0.0017078560777008533,
0.11715177446603775,
0.04962114244699478,
-0.02868741750717163,
0.02542831003665924,
0.0950101912021637,
0.029489770531654358,
0.04343774542212486,
0.06799955666065216,
0.0971144512295723,
-0.2561350166797638,
0.04951757937669754,
-0.06413193792104721,
0.05696170777082443,
0.0745169073343277,
0.09960344433784485,
-0.07221944630146027,
0.06262588500976562,
0.037333324551582336,
-0.08392167091369629,
-0.016967715695500374,
-0.015718303620815277,
-0.03368706256151199,
0.0989079624414444,
0.0689447745680809,
0.061661772429943085,
0.00003808736801147461,
0.0611555352807045,
-0.0921395942568779,
0.016119033098220825,
0.04651803895831108,
0.0011425241827964783,
0.02502753585577011,
-0.009239738807082176,
0.01487012393772602,
0.10356297343969345,
0.03760138526558876,
0.07666808366775513,
0.03668734058737755,
-0.09486699104309082,
-0.11090774834156036,
-0.07716140151023865,
0.10337484627962112,
0.05375472083687782,
0.04116099327802658,
-0.006208510138094425,
0.07028118520975113,
-0.027967967092990875,
0.07363833487033844,
0.10668772459030151,
-0.25545409321784973,
-0.008123904466629028,
0.0675336942076683,
0.039649032056331635,
0.04421404004096985,
0.011038447730243206,
0.027053209021687508,
0.008603878319263458,
0.04178117960691452,
0.029570885002613068,
-0.023316871374845505,
0.12003171443939209,
-0.04762100428342819,
-0.15092045068740845,
-0.043913811445236206,
0.12143910676240921,
-0.006779259070754051,
-0.124825619161129,
-0.09760940819978714,
-0.03167438507080078,
0.11296886950731277,
-0.002262616530060768,
-0.01733287423849106,
-0.0035889465361833572,
0.012092220596969128,
0.027805145829916,
-0.09321899712085724,
-0.08547858893871307,
-0.02986043505370617,
-0.03932777792215347,
0.12955135107040405,
0.0462474450469017,
0.05266554281115532,
-0.03598077595233917,
0.08805474638938904,
-0.11603878438472748,
-0.03982185572385788,
-0.05346878618001938,
-0.07817921787500381,
-0.019361086189746857,
0.008882795460522175,
-0.025219827890396118,
-0.08570981025695801,
-0.05834561586380005,
0.11522877216339111,
0.03970528766512871,
0.02896881476044655,
-0.0023268559016287327,
0.04206876456737518,
0.07623507082462311,
0.09784957766532898,
-0.0373406708240509,
0.05291873961687088,
0.03600093349814415,
-0.02236243709921837,
0.05704702064394951,
-0.049409061670303345,
-0.10109739750623703,
0.07832462340593338,
0.001032581552863121,
0.03931524604558945,
0.026905041188001633,
0.03206172212958336,
-0.01415918581187725,
-0.07048589736223221,
0.156742662191391,
-0.07706019282341003,
-0.008859234862029552,
-0.017016233876347542,
0.01186402514576912,
0.04581127315759659,
0.032157283276319504,
-0.008704670704901218,
-0.047847066074609756,
-0.00514042004942894,
-0.05675752833485603,
-0.02448783442378044,
-0.05695660412311554,
-0.11817748844623566,
-0.0013086437247693539,
-0.04299096018075943,
-0.03340301662683487,
-0.14209748804569244,
-0.21413378417491913,
-0.02049879916012287,
0.06351768970489502,
-0.0013636322692036629,
-0.011498449370265007,
0.019944215193390846,
0.017597654834389687,
-0.021519236266613007,
0.011906716041266918,
-0.04403983801603317,
-0.00032595545053482056,
-0.005349397659301758,
-0.03443949669599533,
0.05759061872959137,
-0.03913067653775215,
0.023535000160336494,
-0.07095213234424591,
0.022029124200344086,
-0.20370733737945557,
0.09220662713050842,
-0.034989163279533386,
0.0012073218822479248,
-0.038487136363983154,
-0.040654562413692474,
0.0050183553248643875,
0.044768452644348145,
-0.0076746223494410515,
0.1192915290594101,
-0.13628727197647095,
-0.050582412630319595,
0.18129923939704895,
-0.15948353707790375,
-0.0009960681200027466,
0.09978236258029938,
-0.04807354137301445,
0.05959490314126015,
0.1324346363544464,
0.09859363734722137,
0.08334842324256897,
-0.07161149382591248,
0.009544259868562222,
0.06525243818759918,
-0.06891942024230957,
0.05355892330408096,
0.0871325135231018,
-0.026074474677443504,
-0.14065256714820862,
0.029733028262853622,
-0.07188840210437775,
-0.008389104157686234,
-0.02919221669435501,
-0.020605623722076416,
0.004039175808429718,
-0.03885477036237717,
0.025584515184164047,
0.004216894507408142,
0.019261527806520462,
-0.03776882216334343,
-0.08051536232233047,
0.025453953072428703,
0.07456814497709274,
-0.06834638863801956,
0.04210227355360985,
-0.06860433518886566,
0.06014906242489815,
-0.08026111125946045,
-0.002799734938889742,
-0.16762161254882812,
-0.023082612082362175,
0.046234749257564545,
-0.05125507712364197,
0.050709906965494156,
0.09079206734895706,
0.0011020534439012408,
0.12105350196361542,
-0.036990657448768616,
0.004678945988416672,
-0.007254384458065033,
-0.01080645527690649,
-0.05234527587890625,
-0.12011109292507172,
-0.08215919882059097,
-0.069394551217556,
0.09800969809293747,
-0.0682576596736908,
0.02940976619720459,
-0.07141926139593124,
-0.024665238335728645,
-0.007918596267700195,
-0.060155197978019714,
-0.0034620407968759537,
0.011300151236355305,
-0.028572330251336098,
-0.04736209660768509,
0.04632275551557541,
0.05009482055902481,
-0.06253297626972198,
0.07394392788410187,
-0.10089503228664398,
-0.06208811700344086,
0.05312409996986389,
0.013809630647301674,
-0.08240172266960144,
0.09376920759677887,
-0.019178315997123718,
-0.012415328063070774,
-0.06443983316421509,
-0.042305734008550644,
0.19600899517536163,
-0.02249327301979065,
0.10050998628139496,
-0.09018673747777939,
0.002958334516733885,
0.03040708601474762,
-0.04743615537881851,
-0.01632506400346756,
0.06139225885272026,
0.054315775632858276,
-0.18546201288700104,
0.016101788729429245,
0.04889582097530365,
0.07839550077915192,
0.10907778143882751,
0.026218099519610405,
-0.024208270013332367,
-0.04709002375602722,
-0.011190937831997871,
0.005291721783578396,
0.05636052414774895,
-0.027847111225128174,
-0.008660920895636082,
0.0310426764190197,
0.057977546006441116,
0.016767874360084534,
-0.08324716985225677,
0.036972884088754654,
0.06826750934123993,
-0.017856415361166,
-0.04271682724356651,
-0.027883479371666908,
-0.058853331953287125,
0.06274572014808655,
0.051068373024463654,
0.04035365208983421,
0.024482958018779755,
-0.01331049669533968,
-0.1355111449956894,
0.18929755687713623,
-0.11239445954561234,
-0.25644636154174805,
-0.10809177905321121,
-0.05014832317829132,
-0.02527468092739582,
0.041456419974565506,
0.05811385065317154,
-0.02957453578710556,
-0.04196402430534363,
-0.11553727090358734,
0.06621314585208893,
-0.06387108564376831,
-0.029413016512989998,
-0.009216681122779846,
-0.052991606295108795,
-0.020842093974351883,
-0.12839215993881226,
-0.012592950835824013,
-0.03030967153608799,
-0.07685880362987518,
0.006009576842188835,
-0.03292318433523178,
0.028837278485298157,
0.13730409741401672,
0.03353536128997803,
-0.018421001732349396,
-0.01747538521885872,
0.19432945549488068,
0.0113067626953125,
0.05977211520075798,
0.10727393627166748,
-0.025589555501937866,
0.05319977179169655,
0.04481734335422516,
0.02569044753909111,
-0.04868180677294731,
0.012880624271929264,
-0.01523419376462698,
-0.1210552379488945,
-0.17003396153450012,
-0.0704377144575119,
-0.002465230878442526,
0.0072037274949252605,
0.02103811502456665,
0.0370459221303463,
0.01728808879852295,
0.041392747312784195,
-0.02888862043619156,
0.028729667887091637,
-0.016803190112113953,
0.07969886809587479,
0.022988177835941315,
-0.0757395476102829,
0.09234524518251419,
-0.06134800612926483,
0.01601788029074669,
0.10864700376987457,
-0.06316082924604416,
0.19159190356731415,
0.025849610567092896,
0.06127446889877319,
0.10094544291496277,
0.021841272711753845,
0.054560497403144836,
0.08981701731681824,
-0.04543401300907135,
0.0061915842816233635,
-0.059194982051849365,
-0.05028561130166054,
-0.03703688457608223,
0.048138175159692764,
0.030534815043210983,
0.020710401237010956,
-0.12145227193832397,
0.020446263253688812,
-0.0025243547279387712,
0.14078021049499512,
0.047213487327098846,
-0.11487896740436554,
-0.12134109437465668,
0.03477779030799866,
-0.04569084197282791,
-0.06368383765220642,
0.031063031405210495,
0.058598387986421585,
-0.15357547998428345,
0.04206389933824539,
-0.005571221932768822,
0.06914272159337997,
-0.09452085196971893,
0.015075023286044598,
-0.047600261867046356,
-0.000258786603808403,
0.0031044629868119955,
0.06930461525917053,
-0.13883337378501892,
0.10726986825466156,
0.02075240947306156,
0.049802422523498535,
-0.08187343925237656,
0.015294672921299934,
-0.013531808741390705,
0.10845769196748734,
0.1157868281006813,
0.04516257718205452,
-0.05458224192261696,
-0.02252625674009323,
-0.04798723757266998,
0.02077488787472248,
0.059893809258937836,
-0.08111439645290375,
0.0598178394138813,
0.008427659049630165,
0.00780429458245635,
-0.023282064124941826,
0.01566435769200325,
-0.13385009765625,
-0.1216001957654953,
0.060611285269260406,
-0.0746578574180603,
-0.09719407558441162,
-0.056757137179374695,
-0.06122468784451485,
-0.044842541217803955,
0.2050013542175293,
-0.12080663442611694,
-0.08861471712589264,
-0.09736911952495575,
-0.009325876832008362,
0.043555669486522675,
-0.06618089973926544,
0.045779019594192505,
-0.03923865407705307,
0.08730877935886383,
-0.048415087163448334,
-0.10974495112895966,
0.03289906680583954,
-0.1124514490365982,
-0.11306387186050415,
-0.043792642652988434,
0.10302864760160446,
0.11213114857673645,
0.038192469626665115,
0.01121260691434145,
0.011230751872062683,
-0.0001765899360179901,
-0.11804811656475067,
0.01597686856985092,
0.13279809057712555,
-0.008251089602708817,
0.07419560104608536,
-0.06303121149539948,
0.025645162910223007,
-0.017415937036275864,
-0.0012819785624742508,
0.13247522711753845,
0.1873578429222107,
-0.06550228595733643,
0.17383745312690735,
0.1995450258255005,
-0.1072249561548233,
-0.19165417551994324,
-0.053384337574243546,
-0.00024806056171655655,
0.04680458456277847,
0.04690856486558914,
-0.18612295389175415,
0.09186685085296631,
0.03348938748240471,
-0.03277226537466049,
0.026596173644065857,
-0.2410578429698944,
-0.11187929660081863,
0.08924409002065659,
0.05378923565149307,
0.19655689597129822,
-0.07997303456068039,
-0.039559848606586456,
-0.014704564586281776,
-0.03600992262363434,
0.04492366313934326,
-0.028017116710543633,
0.09345927834510803,
0.007049109786748886,
-0.03023565374314785,
0.0036686649546027184,
-0.030288049951195717,
0.09474686533212662,
0.04193916916847229,
0.024740971624851227,
-0.06890857219696045,
-0.00706774927675724,
0.10993573069572449,
-0.0369228832423687,
0.10048186779022217,
0.04274103045463562,
0.07495541870594025,
-0.09414216130971909,
-0.059886425733566284,
-0.07322166860103607,
0.04537278413772583,
-0.041383296251297,
-0.0544816218316555,
-0.06280195713043213,
0.05502518266439438,
0.0372798927128315,
0.009156542830169201,
-0.0004837512969970703,
-0.03706422075629234,
0.04352092370390892,
0.09035082161426544,
0.07816460728645325,
-0.03612864762544632,
-0.07549707591533661,
-0.051266398280858994,
-0.04780277609825134,
0.06898088753223419,
-0.09111188352108002,
0.019239990040659904,
0.02724260650575161,
0.01332149002701044,
0.09086829423904419,
0.03452974557876587,
-0.1356784552335739,
0.012498771771788597,
0.0317869558930397,
-0.12067326158285141,
-0.11834600567817688,
-0.01921916753053665,
0.029755059629678726,
-0.037638261914253235,
0.054685045033693314,
0.14557085931301117,
-0.03891642391681671,
-0.03207574039697647,
-0.04985461384057999,
0.03777984902262688,
-0.021019775420427322,
0.05052550509572029,
0.06379592418670654,
0.03141385689377785,
-0.07185668498277664,
0.0774812176823616,
0.04010091722011566,
-0.03517603129148483,
0.04139776900410652,
0.03822363540530205,
-0.09203881025314331,
-0.0781112089753151,
-0.05506688356399536,
0.09384551644325256,
-0.02049066312611103,
-0.042117975652217865,
-0.0016392916440963745,
-0.08311791718006134,
0.06835795938968658,
0.07673420757055283,
0.047802459448575974,
0.03938924893736839,
-0.08822187036275864,
0.015953712165355682,
-0.0547332689166069,
0.03573429957032204,
-0.027398867532610893,
-0.005102191120386124,
-0.05471518635749817,
0.06748194247484207,
0.06465616822242737,
0.09881538152694702,
-0.03472831845283508,
-0.07708515971899033,
-0.08326572179794312,
-0.012228963896632195,
-0.0581277534365654,
-0.030741937458515167,
-0.07444203644990921,
-0.00799541175365448,
0.0038834307342767715,
-0.002623358741402626,
0.023810822516679764,
0.03620687127113342,
-0.04243720695376396,
-0.018548231571912766,
-0.03759388253092766,
0.03758464753627777,
-0.0614987313747406,
0.006155842915177345,
0.016485102474689484,
-0.035265788435935974,
0.09077285975217819,
0.035838451236486435,
-0.013818209059536457,
0.042666301131248474,
-0.028380338102579117,
0.038820140063762665,
-0.02018098533153534,
0.00039242347702383995,
-0.025000037625432014,
-0.10879726707935333,
-0.0033303454983979464,
0.0037857946008443832,
-0.02285292185842991,
0.00920314434915781,
0.05990690737962723,
-0.07409767806529999,
0.08478648215532303,
0.04593915492296219,
-0.029904723167419434,
-0.07121948897838593,
0.03961017727851868,
-0.016665998846292496,
0.02870294824242592,
0.07008225470781326,
-0.03473491594195366,
0.05447164922952652,
-0.09746791422367096,
-0.027463208884000778,
0.004294983111321926,
-0.004106681793928146,
-0.01376393809914589,
-0.055853940546512604,
-0.0034069959074258804,
0.008132101967930794,
0.17080572247505188,
-0.019516631960868835,
0.03392518311738968,
0.013750576414167881,
0.009760241955518723,
0.047610826790332794,
-0.0133079644292593,
0.07304304838180542,
-0.005751454271376133,
-0.02641000784933567,
-0.01611226238310337,
0.03852156922221184,
0.006104255095124245,
0.005519695580005646,
0.1405588686466217,
0.04625964164733887,
0.08640304207801819,
0.0750047042965889,
0.012597966007888317,
0.01594875194132328,
-0.13000701367855072,
-0.08769550174474716,
0.004560372792184353,
0.059353895485401154,
-0.01886470429599285,
0.013925112783908844,
0.0869845449924469,
-0.08432893455028534,
0.07138848304748535,
0.05302561819553375,
-0.04732619971036911,
-0.12791910767555237,
-0.1944250464439392,
-0.02535952627658844,
-0.02988128550350666,
-0.010516894981265068,
-0.08946830034255981,
0.017810838297009468,
0.09044201672077179,
0.02322033792734146,
-0.009522909298539162,
0.09417884051799774,
-0.10470335185527802,
-0.030827991664409637,
0.046624861657619476,
-0.02778508886694908,
0.014686102978885174,
0.041283465921878815,
0.0223681703209877,
-0.004211118444800377,
0.041206471621990204,
0.04096927121281624,
0.045851655304431915,
0.024022964760661125,
0.05091587454080582,
-0.025215093046426773,
-0.07375168055295944,
-0.03288302570581436,
-0.006233657244592905,
0.05207018554210663,
0.1381436586380005,
0.023267867043614388,
-0.06872090697288513,
0.006298092193901539,
0.11122776567935944,
-0.030917171388864517,
-0.05122712254524231,
-0.10803940147161484,
0.24037328362464905,
0.022303812205791473,
0.003291214117780328,
-0.004524380899965763,
-0.045544810593128204,
0.004522651433944702,
0.20883747935295105,
0.22140617668628693,
0.005348160397261381,
-0.009977871552109718,
0.007385680917650461,
-0.011993447318673134,
0.038321513682603836,
0.14549320936203003,
0.0016861334443092346,
0.250879168510437,
-0.04888401925563812,
0.04277454689145088,
-0.041584040969610214,
-0.039240512996912,
-0.101285919547081,
0.0699269250035286,
-0.01092778891324997,
0.008224328979849815,
-0.035375747829675674,
0.06925942748785019,
-0.04031049460172653,
-0.16960737109184265,
0.0043930234387516975,
-0.0013073417358100414,
-0.060227587819099426,
0.010977696627378464,
-0.001955929212272167,
0.0219477079808712,
0.0830797553062439,
-0.017651427537202835,
-0.00463761854916811,
0.13348664343357086,
0.0179489366710186,
-0.09814294427633286,
-0.06412485986948013,
0.11239397525787354,
0.013668077997863293,
0.1417558342218399,
0.012195548042654991,
0.07591065019369125,
0.08539853245019913,
0.021244263276457787,
-0.09652829170227051,
0.04206358641386032,
-0.022607523947954178,
-0.026153787970542908,
0.0068815359845757484,
0.10872845351696014,
-0.008063589222729206,
0.0627056211233139,
0.023859497159719467,
-0.09150511771440506,
0.06146031990647316,
0.010904684662818909,
-0.03439529240131378,
-0.08122574537992477,
0.08324599266052246,
-0.08896364271640778,
0.15592637658119202,
0.12153814733028412,
-0.013872195966541767,
-0.045381031930446625,
-0.028432685881853104,
0.018350889906287193,
-0.0002879058010876179,
0.059335920959711075,
-0.02606484852731228,
-0.13423208892345428,
0.019795702770352364,
-0.08440086245536804,
0.02636350691318512,
-0.24737626314163208,
-0.09061697125434875,
0.028431344777345657,
-0.017810644581913948,
-0.016455933451652527,
0.04972653090953827,
0.047298941761255264,
0.028364820405840874,
-0.03667820617556572,
0.008810861967504025,
-0.03504683077335358,
0.05998903140425682,
-0.10789389908313751,
-0.09381687641143799
] |
null | null | transformers | # MultiBERTs Seed 1 Checkpoint 600k (uncased)
Seed 1 intermediate checkpoint 600k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multiberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
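For illustration only, here is a minimal sketch (not part of the original card) of that feature-extraction workflow. The hub id and the toy labelled examples are assumptions, and a real downstream task would usually fine-tune the model rather than freeze it:

```python
# Sketch: train a simple classifier on frozen features from this checkpoint.
# The hub id and the toy dataset below are assumptions, not from the original card.
import torch
from transformers import BertTokenizer, BertModel
from sklearn.linear_model import LogisticRegression

tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-600k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-600k")
model.eval()

texts = ["the movie was great", "the movie was terrible"]  # toy examples
labels = [1, 0]

with torch.no_grad():
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    out = model(**enc)
    # Use the pooled [CLS] representation as a fixed sentence feature vector.
    features = out.pooler_output.numpy()

clf = LogisticRegression().fit(features, labels)
print(clf.predict(features))
```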
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-1-600k')
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-600k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
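The returned `output` exposes `output.last_hidden_state` (one feature vector per token) and `output.pooler_output` (a single pooled vector for the whole sequence); either can serve as features for downstream tasks.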
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
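As a convenience, the following is a sketch adapted from that fill-mask snippet; the hub id is an assumption, and the completions will vary across seeds and checkpoints:

```python
# Adapted bias-probing sketch using the fill-mask pipeline.
# The hub id is an assumption; outputs differ from checkpoint to checkpoint.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="MultiBertGunjanPatrick/multiberts-seed-1-600k")
print(unmasker("The man worked as a [MASK]."))
print(unmasker("The woman worked as a [MASK]."))
```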
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in
the other cases, sentence B is a random sentence from the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the two
"sentences" have a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
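A minimal, illustrative sketch of that 15% / 80-10-10 rule is shown below; it is not the original preprocessing code, and the function name and toy vocabulary are made up:

```python
# Sketch of the masking rule described above (illustrative only).
import random

def mask_tokens(tokens, vocab, mask_token="[MASK]", mlm_prob=0.15):
    masked, labels = [], []
    for tok in tokens:
        if random.random() < mlm_prob:
            labels.append(tok)                       # model must predict the original token
            r = random.random()
            if r < 0.8:
                masked.append(mask_token)            # 80%: replace with [MASK]
            elif r < 0.9:
                masked.append(random.choice(vocab))  # 10%: replace with a random token
            else:
                masked.append(tok)                   # 10%: keep as is
        else:
            masked.append(tok)
            labels.append(None)                      # token is not predicted
    return masked, labels

print(mask_tokens(["the", "cat", "sat"], vocab=["the", "cat", "sat", "dog"]))
```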
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
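A rough PyTorch/transformers equivalent of these optimization settings might look as follows; the original training used TensorFlow on TPUs, so this is only an assumption-laden sketch (AdamW approximates Adam with decoupled weight decay):

```python
# Sketch of the stated hyperparameters: lr 1e-4, betas (0.9, 0.999), weight decay 0.01,
# 10,000 warmup steps, linear decay over 2,000,000 total steps.
import torch
from transformers import BertConfig, BertForPreTraining, get_linear_schedule_with_warmup

model = BertForPreTraining(BertConfig())  # default config is bert-base sized
optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01
)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=2_000_000
)
```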
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-1-600k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 1 Checkpoint 600k (uncased)
Seed 1 intermediate checkpoint 600k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in
the other cases, sentence B is a random sentence from the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the two
"sentences" have a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 1 Checkpoint 600k (uncased)\nSeed 1 intermediate checkpoint 600k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 1 Checkpoint 600k (uncased)\nSeed 1 intermediate checkpoint 600k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 1 Checkpoint 600k (uncased)\nSeed 1 intermediate checkpoint 600k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08482625335454941,
-0.004947708919644356,
-0.0020911013707518578,
0.06927193701267242,
0.08880994468927383,
0.0013820850290358067,
0.11403900384902954,
0.050328727811574936,
-0.024690574035048485,
0.024763530120253563,
0.09444651007652283,
0.03038249909877777,
0.04196538031101227,
0.062170207500457764,
0.09675899147987366,
-0.25593993067741394,
0.04893764108419418,
-0.06494525074958801,
0.055820196866989136,
0.07496749609708786,
0.0986282229423523,
-0.07100144028663635,
0.06385266780853271,
0.03626372665166855,
-0.08487392216920853,
-0.01607908494770527,
-0.015017328783869743,
-0.03377848491072655,
0.10001745820045471,
0.06814122200012207,
0.06266944110393524,
0.0016267206519842148,
0.06115712225437164,
-0.08744826912879944,
0.016253188252449036,
0.04541932791471481,
0.0015789354220032692,
0.024213286116719246,
-0.009550059214234352,
0.01525491289794445,
0.10347680002450943,
0.0395975224673748,
0.07653499394655228,
0.03681841120123863,
-0.09530476480722427,
-0.10742645710706711,
-0.078487329185009,
0.0979098528623581,
0.05371036008000374,
0.043622005730867386,
-0.006391650997102261,
0.07032302021980286,
-0.030140627175569534,
0.07293513417243958,
0.10282307863235474,
-0.2512463927268982,
-0.007678938563913107,
0.06944802403450012,
0.0387272909283638,
0.04399523138999939,
0.010885071009397507,
0.025734899565577507,
0.00734885036945343,
0.04194625839591026,
0.02583226189017296,
-0.02282780036330223,
0.11979803442955017,
-0.0474647656083107,
-0.14996293187141418,
-0.042823806405067444,
0.11973981559276581,
-0.006334312260150909,
-0.12530681490898132,
-0.09422729164361954,
-0.032763298600912094,
0.11796219646930695,
-0.00269907433539629,
-0.0156090771779418,
-0.0036029312759637833,
0.012475302442908287,
0.024990612640976906,
-0.09270568192005157,
-0.08529345691204071,
-0.030212517827749252,
-0.04167088493704796,
0.127092644572258,
0.045761145651340485,
0.05266434699296951,
-0.03555748239159584,
0.08801288157701492,
-0.11671045422554016,
-0.04016716405749321,
-0.05383456498384476,
-0.07813182473182678,
-0.01926097460091114,
0.008839033544063568,
-0.025431407615542412,
-0.08350038528442383,
-0.0593586228787899,
0.11051921546459198,
0.036990925669670105,
0.031391408294439316,
-0.0023497440852224827,
0.04178539663553238,
0.07308118045330048,
0.09539272636175156,
-0.03598465397953987,
0.053577352315187454,
0.03413601592183113,
-0.023255079984664917,
0.05797053128480911,
-0.04929874837398529,
-0.10107405483722687,
0.07889856398105621,
-0.0009792456403374672,
0.03684516251087189,
0.025869159027934074,
0.03315092623233795,
-0.013553986325860023,
-0.0699431300163269,
0.16193509101867676,
-0.07778488099575043,
-0.009204136207699776,
-0.018695782870054245,
0.011963240802288055,
0.045389510691165924,
0.03277456760406494,
-0.008673986420035362,
-0.04638706147670746,
-0.0052793072536587715,
-0.058017630130052567,
-0.02552051842212677,
-0.05752783268690109,
-0.11624474823474884,
-0.0013180244714021683,
-0.0386284738779068,
-0.03355930745601654,
-0.13980236649513245,
-0.21716336905956268,
-0.01957736536860466,
0.06503887474536896,
-0.0027262079529464245,
-0.010719220153987408,
0.021780535578727722,
0.01541011594235897,
-0.020459741353988647,
0.01308792270720005,
-0.04217866435647011,
-0.00045423489063978195,
-0.005018062889575958,
-0.032853659242391586,
0.057269684970378876,
-0.04060206934809685,
0.02433495782315731,
-0.07045458257198334,
0.02298429235816002,
-0.21117323637008667,
0.0926637202501297,
-0.03583114221692085,
0.0010398086160421371,
-0.03804721310734749,
-0.04145156592130661,
0.004909394308924675,
0.045046210289001465,
-0.008087935857474804,
0.11917867511510849,
-0.13588201999664307,
-0.049378376454114914,
0.18027812242507935,
-0.15972694754600525,
0.00004768744111061096,
0.09829205274581909,
-0.04880443960428238,
0.059686705470085144,
0.13117673993110657,
0.10121448338031769,
0.08139532804489136,
-0.072249636054039,
0.008260936476290226,
0.06468413025140762,
-0.06881480664014816,
0.05432938039302826,
0.08641910552978516,
-0.026203114539384842,
-0.14063964784145355,
0.029204726219177246,
-0.07209500670433044,
-0.009018950164318085,
-0.02812425047159195,
-0.019889172166585922,
0.003738127648830414,
-0.03844643384218216,
0.02479349449276924,
0.006199368275702,
0.018183959648013115,
-0.038222819566726685,
-0.0805172324180603,
0.02738174796104431,
0.07559536397457123,
-0.06831657141447067,
0.04206838086247444,
-0.06892697513103485,
0.058520179241895676,
-0.08001591265201569,
-0.003926507662981749,
-0.17001456022262573,
-0.025509033352136612,
0.04548905789852142,
-0.05079137161374092,
0.05094856023788452,
0.08867090940475464,
0.001106701442040503,
0.12129485607147217,
-0.03853077441453934,
0.00503039825707674,
-0.006654335185885429,
-0.010927545838057995,
-0.05151975154876709,
-0.12045001983642578,
-0.08067193627357483,
-0.06967410445213318,
0.10183323174715042,
-0.06953504681587219,
0.028737716376781464,
-0.06925895065069199,
-0.02463109791278839,
-0.00869135931134224,
-0.06003178656101227,
-0.004560578614473343,
0.009614107199013233,
-0.029288528487086296,
-0.048114120960235596,
0.046203844249248505,
0.050448305904865265,
-0.061252839863300323,
0.07473812252283096,
-0.10039928555488586,
-0.06629300117492676,
0.05417767912149429,
0.015153969638049603,
-0.0838320255279541,
0.0951700285077095,
-0.019093351438641548,
-0.013224818743765354,
-0.06396391242742538,
-0.043712034821510315,
0.1961950808763504,
-0.023125197738409042,
0.09998480975627899,
-0.09088736772537231,
0.003389422781765461,
0.03070548176765442,
-0.04641026258468628,
-0.015946701169013977,
0.06161118671298027,
0.05466524511575699,
-0.19073477387428284,
0.01663612574338913,
0.05165296047925949,
0.07804642617702484,
0.10925422608852386,
0.025735095143318176,
-0.02361287921667099,
-0.04559130221605301,
-0.01086493767797947,
0.005123400129377842,
0.057126790285110474,
-0.027061276137828827,
-0.007267792243510485,
0.030389655381441116,
0.05923321843147278,
0.01635565422475338,
-0.0829671323299408,
0.03679700195789337,
0.06682251393795013,
-0.017305778339505196,
-0.043714992702007294,
-0.02735491469502449,
-0.05933225527405739,
0.0631888210773468,
0.05158166587352753,
0.03897795081138611,
0.02540987730026245,
-0.013416820205748081,
-0.1337398886680603,
0.19088126718997955,
-0.11258535087108612,
-0.2533186674118042,
-0.10755763947963715,
-0.05475611239671707,
-0.025454802438616753,
0.04166729748249054,
0.05703847110271454,
-0.032778121531009674,
-0.04157717898488045,
-0.11521375179290771,
0.06415386497974396,
-0.06339345127344131,
-0.030835086479783058,
-0.009957632049918175,
-0.05372363328933716,
-0.020177360624074936,
-0.12778297066688538,
-0.013572284951806068,
-0.03154050558805466,
-0.07803311944007874,
0.006780772469937801,
-0.0350072979927063,
0.02881346270442009,
0.13786014914512634,
0.03328097611665726,
-0.01963486336171627,
-0.016232065856456757,
0.19325831532478333,
0.009497597813606262,
0.061543721705675125,
0.10888604819774628,
-0.025364398956298828,
0.054915737360715866,
0.045638278126716614,
0.02604580856859684,
-0.048604898154735565,
0.013874145224690437,
-0.014667901210486889,
-0.12190042436122894,
-0.1714419573545456,
-0.07132823765277863,
-0.0028299586847424507,
0.00566873187199235,
0.018614623695611954,
0.03684980422258377,
0.015548824332654476,
0.04119834303855896,
-0.028130974620580673,
0.02673638053238392,
-0.017920643091201782,
0.08003965765237808,
0.024877704679965973,
-0.07543051242828369,
0.09253618121147156,
-0.06177540123462677,
0.015881620347499847,
0.10900434106588364,
-0.06136024370789528,
0.19470643997192383,
0.0253453329205513,
0.060381628572940826,
0.10064591467380524,
0.02145741507411003,
0.05427535995841026,
0.0886768251657486,
-0.04542475938796997,
0.004992193542420864,
-0.060379281640052795,
-0.05023260414600372,
-0.03425395488739014,
0.046994488686323166,
0.02668067440390587,
0.01959231123328209,
-0.11953823268413544,
0.022919872775673866,
-0.0032331261318176985,
0.13744807243347168,
0.047013502568006516,
-0.11660493910312653,
-0.12239477038383484,
0.03429756313562393,
-0.046279728412628174,
-0.0634690597653389,
0.030594047158956528,
0.057985737919807434,
-0.1524066925048828,
0.04424932226538658,
-0.005563642829656601,
0.06842146813869476,
-0.0917455404996872,
0.015506481751799583,
-0.046375785022974014,
-0.0008544269949197769,
0.003082319861277938,
0.06967546045780182,
-0.13438312709331512,
0.10998880863189697,
0.02084369957447052,
0.048644840717315674,
-0.08139678835868835,
0.014857352711260319,
-0.011113735847175121,
0.10824982821941376,
0.11707016825675964,
0.04512046277523041,
-0.05439585819840431,
-0.01724371686577797,
-0.04695253074169159,
0.020501775667071342,
0.059768855571746826,
-0.0790819525718689,
0.06024673581123352,
0.008741477504372597,
0.00803116150200367,
-0.022746341302990913,
0.016543403267860413,
-0.132577583193779,
-0.12229150533676147,
0.05939808487892151,
-0.07643387466669083,
-0.10196448862552643,
-0.055463410913944244,
-0.06097404286265373,
-0.04260585457086563,
0.20712347328662872,
-0.11658777296543121,
-0.08957014232873917,
-0.09799462556838989,
-0.010306209325790405,
0.04421539977192879,
-0.06642922759056091,
0.0448676235973835,
-0.03904687240719795,
0.0877963975071907,
-0.04670734703540802,
-0.10968631505966187,
0.033721450716257095,
-0.11240169405937195,
-0.11246995627880096,
-0.04483047127723694,
0.1016702950000763,
0.11191169172525406,
0.03863830491900444,
0.010666628368198872,
0.011931580491364002,
-0.0014712046831846237,
-0.11778734624385834,
0.016186261549592018,
0.1287081092596054,
-0.00358697772026062,
0.07366464287042618,
-0.06237619370222092,
0.028563454747200012,
-0.017158547416329384,
-0.0020067840814590454,
0.13076256215572357,
0.18641969561576843,
-0.06434834003448486,
0.17355471849441528,
0.20030264556407928,
-0.10632623732089996,
-0.19381552934646606,
-0.05103939771652222,
0.001048291102051735,
0.045037880539894104,
0.047968242317438126,
-0.1868346929550171,
0.09435936063528061,
0.03469063714146614,
-0.03349699079990387,
0.024427641183137894,
-0.23409795761108398,
-0.11115624010562897,
0.08705244958400726,
0.054509688168764114,
0.19672945141792297,
-0.07965245097875595,
-0.03935170918703079,
-0.014951230958104134,
-0.03778588026762009,
0.043334782123565674,
-0.03032473474740982,
0.09290410578250885,
0.006752748042345047,
-0.024273104965686798,
0.0032110074535012245,
-0.030573714524507523,
0.09489524364471436,
0.04267476499080658,
0.024469416588544846,
-0.07056795060634613,
-0.008671490475535393,
0.11068350821733475,
-0.037012774497270584,
0.09934277832508087,
0.04221775382757187,
0.07628598809242249,
-0.09110505133867264,
-0.059076905250549316,
-0.07500515878200531,
0.04386773705482483,
-0.041844092309474945,
-0.05444467067718506,
-0.06334374099969864,
0.05627201497554779,
0.036748599261045456,
0.009043128229677677,
0.0011791214346885681,
-0.03713225573301315,
0.04280967265367508,
0.0897517055273056,
0.0797448679804802,
-0.03517451137304306,
-0.07520648837089539,
-0.05003545433282852,
-0.047972239553928375,
0.06758043169975281,
-0.0952979177236557,
0.018256258219480515,
0.026563500985503197,
0.013628100976347923,
0.08940096199512482,
0.03576044738292694,
-0.1361037939786911,
0.01234752032905817,
0.032516561448574066,
-0.11977876722812653,
-0.11802317202091217,
-0.018590662628412247,
0.034481704235076904,
-0.03769363462924957,
0.05382893979549408,
0.14428691565990448,
-0.03717321902513504,
-0.03096630424261093,
-0.04872085154056549,
0.038250163197517395,
-0.02000351995229721,
0.04960528016090393,
0.06352527439594269,
0.03137899190187454,
-0.07290272414684296,
0.07727745175361633,
0.040923334658145905,
-0.04042781516909599,
0.04057487100362778,
0.039394065737724304,
-0.09401354938745499,
-0.07858870923519135,
-0.05719856917858124,
0.09512533992528915,
-0.02471003495156765,
-0.04135068506002426,
-0.000833144411444664,
-0.0829048827290535,
0.06891100853681564,
0.07551056146621704,
0.048277419060468674,
0.03746204823255539,
-0.08744443953037262,
0.015453335829079151,
-0.05535857379436493,
0.035051170736551285,
-0.029985887929797173,
-0.00361761637032032,
-0.05506221950054169,
0.05914673954248428,
0.06406385451555252,
0.10011675208806992,
-0.034574441611766815,
-0.07638378441333771,
-0.08164777606725693,
-0.012874494306743145,
-0.06009843200445175,
-0.030273161828517914,
-0.07348541915416718,
-0.009216602891683578,
0.003216856624931097,
-0.003245266154408455,
0.02350524626672268,
0.03679879754781723,
-0.043693240731954575,
-0.018674926832318306,
-0.03691748529672623,
0.037164513021707535,
-0.05968472361564636,
0.006476080045104027,
0.01782221719622612,
-0.03506774455308914,
0.0905291959643364,
0.0371047779917717,
-0.01346947904676199,
0.042991556227207184,
-0.022590914741158485,
0.03663575276732445,
-0.020562563091516495,
0.001241820864379406,
-0.02326573245227337,
-0.10706105083227158,
-0.004928290843963623,
0.002888500690460205,
-0.0221386905759573,
0.008551680482923985,
0.061753351241350174,
-0.07333068549633026,
0.0892813578248024,
0.04525425285100937,
-0.031231850385665894,
-0.07269225269556046,
0.04062596336007118,
-0.01590084284543991,
0.030204685404896736,
0.06977885961532593,
-0.03446315973997116,
0.053074032068252563,
-0.09828424453735352,
-0.028093691915273666,
0.004975987132638693,
-0.004051078110933304,
-0.011088522151112556,
-0.055356454104185104,
-0.003452509641647339,
0.007651166990399361,
0.16897669434547424,
-0.021112948656082153,
0.034180112183094025,
0.014044243842363358,
0.010364149697124958,
0.04761428385972977,
-0.015484405681490898,
0.07344657182693481,
-0.005813515745103359,
-0.026241103187203407,
-0.012874276377260685,
0.03846253082156181,
0.0049932897090911865,
0.007612850517034531,
0.14066559076309204,
0.04794670268893242,
0.08628588914871216,
0.07397089898586273,
0.013570022769272327,
0.016454247757792473,
-0.13228076696395874,
-0.08996285498142242,
0.004030298441648483,
0.05817374959588051,
-0.018088603392243385,
0.012228092178702354,
0.08787625283002853,
-0.08440115302801132,
0.07235065847635269,
0.04933293163776398,
-0.04794778302311897,
-0.1270325481891632,
-0.19092196226119995,
-0.024768171831965446,
-0.030152318999171257,
-0.010035659186542034,
-0.0898677334189415,
0.01852160319685936,
0.09295756369829178,
0.02357567846775055,
-0.008603652007877827,
0.09807577729225159,
-0.1062907949090004,
-0.031989097595214844,
0.04448627308011055,
-0.02758517675101757,
0.014082378707826138,
0.04450803995132446,
0.02371218428015709,
-0.0039851851761341095,
0.041163165122270584,
0.0400877445936203,
0.046934813261032104,
0.023950761184096336,
0.05062130466103554,
-0.024112466722726822,
-0.07253362238407135,
-0.03363550826907158,
-0.004388314671814442,
0.052270982414484024,
0.13811343908309937,
0.021520230919122696,
-0.0673184022307396,
0.006007378920912743,
0.1121615543961525,
-0.03231523558497429,
-0.04992103576660156,
-0.10580632835626602,
0.24708527326583862,
0.02173515409231186,
0.002310022944584489,
-0.004779420327395201,
-0.04666348174214363,
0.004563512280583382,
0.21020041406154633,
0.22052624821662903,
0.004269232973456383,
-0.010496670380234718,
0.008170537650585175,
-0.012018730863928795,
0.038356103003025055,
0.1460849642753601,
0.0026550404727458954,
0.25269579887390137,
-0.04942411929368973,
0.039675042033195496,
-0.04178963601589203,
-0.03888579457998276,
-0.10165201127529144,
0.07216724753379822,
-0.010893566533923149,
0.007388263009488583,
-0.035521719604730606,
0.06971036642789841,
-0.038829151540994644,
-0.17389044165611267,
0.003953916020691395,
-0.002892490942031145,
-0.060648348182439804,
0.01100138295441866,
-0.0017629172652959824,
0.021326513960957527,
0.08382652699947357,
-0.01837877556681633,
-0.004286791197955608,
0.1288292109966278,
0.018370846286416054,
-0.09875622391700745,
-0.060963861644268036,
0.11307568848133087,
0.01247936300933361,
0.1412370651960373,
0.012001666240394115,
0.07662194967269897,
0.08647358417510986,
0.021399078890681267,
-0.09714639186859131,
0.04336335137486458,
-0.021426847204566002,
-0.02786216512322426,
0.006536985281854868,
0.11060202121734619,
-0.00907874759286642,
0.058097634464502335,
0.023962534964084625,
-0.09433180093765259,
0.060314059257507324,
0.014683693647384644,
-0.0344480462372303,
-0.0819835290312767,
0.08314734697341919,
-0.08981692790985107,
0.15726476907730103,
0.12259194254875183,
-0.012849600985646248,
-0.044241175055503845,
-0.029116103425621986,
0.01769818738102913,
0.0007919794879853725,
0.05713757127523422,
-0.026997052133083344,
-0.13543933629989624,
0.01939157024025917,
-0.08615650981664658,
0.025691531598567963,
-0.2502714991569519,
-0.09018827974796295,
0.03014812245965004,
-0.01812109351158142,
-0.01644628494977951,
0.049758657813072205,
0.04675629362463951,
0.027770258486270905,
-0.03585277125239372,
0.015408679842948914,
-0.03608497604727745,
0.0606190524995327,
-0.10814392566680908,
-0.09367227554321289
] |
null | null | transformers | # MultiBERTs Seed 1 Checkpoint 60k (uncased)
Seed 1 intermediate checkpoint (60k steps) of the MultiBERTs (pretrained BERT) model, pretrained on the English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-60k')
model = BertModel.from_pretrained("multiberts-seed-1-60k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
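The same checkpoint can also be queried directly for masked-token predictions. The sketch below is illustrative only: it assumes the checkpoint identifier used in the snippet above resolves on the Hub, and the MLM head comes from the pretraining checkpoint; the example sentence is arbitrary.
```python
from transformers import pipeline

# Assumes the same checkpoint identifier as in the snippet above.
fill_mask = pipeline("fill-mask", model="multiberts-seed-1-60k")
for prediction in fill_mask("The capital of France is [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 4))
```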
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the two
"sentences" together have a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a code sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
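As referenced above, here is a minimal sketch of that 15% / 80-10-10 masking rule. This is not the original preprocessing code: the helper name, the use of `-100` as the ignore label, and the checkpoint identifier are assumptions for illustration.
```python
import random
from transformers import BertTokenizer

def mask_tokens(token_ids, tokenizer, mlm_probability=0.15):
    masked, labels = list(token_ids), list(token_ids)
    for i, tok in enumerate(token_ids):
        if tok in tokenizer.all_special_ids or random.random() >= mlm_probability:
            labels[i] = -100                      # not selected: ignored by the MLM loss
            continue
        r = random.random()
        if r < 0.8:                               # 80% of selected tokens -> [MASK]
            masked[i] = tokenizer.mask_token_id
        elif r < 0.9:                             # 10% -> a random vocabulary token
            masked[i] = random.randrange(tokenizer.vocab_size)
        # remaining 10%: token left unchanged
    return masked, labels

tokenizer = BertTokenizer.from_pretrained("multiberts-seed-1-60k")
ids = tokenizer("The quick brown fox jumps over the lazy dog.")["input_ids"]
masked_ids, labels = mask_tokens(ids, tokenizer)
```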
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
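For illustration, the stated hyperparameters map onto PyTorch roughly as follows. This is a hedged approximation, not the original TPU training code: decoupled weight decay (AdamW) is assumed, and the training loop itself is omitted.
```python
import torch
from transformers import BertModel, get_linear_schedule_with_warmup

model = BertModel.from_pretrained("multiberts-seed-1-60k")

# The full run is two million steps; this card is the 60k-step intermediate checkpoint.
num_training_steps = 2_000_000

optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01
)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=num_training_steps
)
# After each optimization step on a batch of 256 sequences of length 512, call
# optimizer.step(), scheduler.step() and optimizer.zero_grad().
```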
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-1-60k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 1 Checkpoint 60k (uncased)
Seed 1 intermediate checkpoint (60k steps) of the MultiBERTs (pretrained BERT) model, pretrained on the English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the two
"sentences" together have a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 1 Checkpoint 60k (uncased)\nSeed 1 intermediate checkpoint 60k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 1 Checkpoint 60k (uncased)\nSeed 1 intermediate checkpoint 60k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 1 Checkpoint 60k (uncased)\nSeed 1 intermediate checkpoint 60k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08523518592119217,
-0.0017171965446323156,
-0.0021293843165040016,
0.06695939600467682,
0.0868445336818695,
0.00014723674394190311,
0.11803747713565826,
0.05054650828242302,
-0.027503667399287224,
0.026341043412685394,
0.09438352286815643,
0.031676843762397766,
0.040892910212278366,
0.06646020710468292,
0.09688130766153336,
-0.257365882396698,
0.048523515462875366,
-0.06475391238927841,
0.05709201842546463,
0.07551158964633942,
0.09952674806118011,
-0.0714295357465744,
0.0631415843963623,
0.03723359853029251,
-0.08316483348608017,
-0.01654437556862831,
-0.01569714955985546,
-0.03336883708834648,
0.10119326412677765,
0.06787111610174179,
0.061813995242118835,
0.00045784376561641693,
0.05951646342873573,
-0.09080706536769867,
0.016414843499660492,
0.046239033341407776,
0.0005118404515087605,
0.025092091411352158,
-0.009124945849180222,
0.013729419559240341,
0.10570243000984192,
0.035196106880903244,
0.0758303701877594,
0.03697865456342697,
-0.0959315076470375,
-0.11231645941734314,
-0.07822185009717941,
0.09856869280338287,
0.05081954225897789,
0.04334861412644386,
-0.006944390945136547,
0.0738338753581047,
-0.029515409842133522,
0.07400888204574585,
0.10835631191730499,
-0.2540697157382965,
-0.006711307913064957,
0.06859404593706131,
0.040913909673690796,
0.04319880157709122,
0.010650629177689552,
0.026276303455233574,
0.006808370351791382,
0.04132584482431412,
0.026880543678998947,
-0.023195959627628326,
0.126577228307724,
-0.04594924673438072,
-0.1503210812807083,
-0.04369616508483887,
0.1211794838309288,
-0.0050126854330301285,
-0.1250065267086029,
-0.09729156643152237,
-0.03388223797082901,
0.11818166077136993,
-0.003739008679986,
-0.01521525252610445,
-0.0035315919667482376,
0.012590598315000534,
0.024591634050011635,
-0.09163734316825867,
-0.08510667830705643,
-0.028908926993608475,
-0.03941430523991585,
0.12744194269180298,
0.04623715579509735,
0.05127065256237984,
-0.035836607217788696,
0.0870036631822586,
-0.11930304765701294,
-0.040712062269449234,
-0.0523543655872345,
-0.07814165204763412,
-0.020260097458958626,
0.008799152448773384,
-0.02720455825328827,
-0.08318070322275162,
-0.05899455025792122,
0.11820489168167114,
0.031332578510046005,
0.03130296617746353,
-0.0028756489045917988,
0.041564762592315674,
0.07531462609767914,
0.09509218484163284,
-0.036879707127809525,
0.05147015303373337,
0.03326195850968361,
-0.025793161243200302,
0.05871523171663284,
-0.04987974464893341,
-0.10164564102888107,
0.07859300076961517,
0.00044124480336904526,
0.03624643012881279,
0.024608097970485687,
0.03387108072638512,
-0.012780806049704552,
-0.07090950012207031,
0.16137951612472534,
-0.07785703241825104,
-0.009858273901045322,
-0.016423949971795082,
0.01165565475821495,
0.045565105974674225,
0.031938157975673676,
-0.008428234606981277,
-0.047019120305776596,
-0.00746475625783205,
-0.05760607868432999,
-0.025537431240081787,
-0.056919246912002563,
-0.11767654865980148,
-0.00013882853090763092,
-0.04263388365507126,
-0.03323935344815254,
-0.13902810215950012,
-0.21769826114177704,
-0.020730044692754745,
0.0642758309841156,
-0.001990335527807474,
-0.010330370627343655,
0.021789180114865303,
0.016028260812163353,
-0.020018676295876503,
0.01281583122909069,
-0.043050143867731094,
-0.0011655520647764206,
-0.005345270037651062,
-0.03206317871809006,
0.05679459869861603,
-0.04025331139564514,
0.0243464857339859,
-0.07010442763566971,
0.022906744852662086,
-0.21042825281620026,
0.0925416424870491,
-0.035352036356925964,
0.0002991519868373871,
-0.039099108427762985,
-0.040336672216653824,
0.005236223340034485,
0.045218177139759064,
-0.008126620203256607,
0.11797073483467102,
-0.13651752471923828,
-0.051140718162059784,
0.18338197469711304,
-0.1588628888130188,
0.0007926337420940399,
0.09948939085006714,
-0.050027746707201004,
0.058774158358573914,
0.13119632005691528,
0.10164286196231842,
0.07870602607727051,
-0.0731758177280426,
0.008965950459241867,
0.06477931886911392,
-0.06812258064746857,
0.05625395104289055,
0.08835586160421371,
-0.025872431695461273,
-0.13656780123710632,
0.028559867292642593,
-0.07267902046442032,
-0.00889203604310751,
-0.02798507735133171,
-0.01952747255563736,
0.004336558282375336,
-0.037099555134773254,
0.02503574639558792,
0.006508946418762207,
0.018888656049966812,
-0.03824954852461815,
-0.08285821974277496,
0.029342448338866234,
0.07567673921585083,
-0.06970225274562836,
0.042045146226882935,
-0.07035792618989944,
0.05913050100207329,
-0.0778120681643486,
-0.003908409271389246,
-0.17131085693836212,
-0.02452898398041725,
0.04494977742433548,
-0.04889814555644989,
0.051477424800395966,
0.09221155941486359,
0.002310138661414385,
0.12175582349300385,
-0.0368269607424736,
0.005377175286412239,
-0.007919671013951302,
-0.011471418663859367,
-0.051770832389593124,
-0.12376068532466888,
-0.08000782877206802,
-0.07035902142524719,
0.1039343923330307,
-0.07137922942638397,
0.028874188661575317,
-0.07094457745552063,
-0.02312442660331726,
-0.008641498163342476,
-0.060387931764125824,
-0.003441336564719677,
0.009202524088323116,
-0.030115468427538872,
-0.04798499494791031,
0.04728381335735321,
0.05047844350337982,
-0.061415378004312515,
0.0776955634355545,
-0.1058676540851593,
-0.06676983833312988,
0.05394953489303589,
0.010806241072714329,
-0.08336107432842255,
0.09220171719789505,
-0.019982468336820602,
-0.013303357176482677,
-0.06250157952308655,
-0.042097751051187515,
0.1942709982395172,
-0.023040950298309326,
0.10232485830783844,
-0.09059295058250427,
0.001528714899905026,
0.030034950003027916,
-0.0472848117351532,
-0.014967242255806923,
0.0631786361336708,
0.05279684439301491,
-0.18908214569091797,
0.0178496353328228,
0.05184663087129593,
0.07767349481582642,
0.11086801439523697,
0.024529065936803818,
-0.025930065661668777,
-0.04725906252861023,
-0.010842063464224339,
0.0050060441717505455,
0.056677259504795074,
-0.029624659568071365,
-0.009354418143630028,
0.031744182109832764,
0.05821952596306801,
0.016432976350188255,
-0.08238186687231064,
0.036597855389118195,
0.06678734719753265,
-0.017238188534975052,
-0.04204405099153519,
-0.027269870042800903,
-0.05932542309165001,
0.06219743937253952,
0.05129563435912132,
0.038301676511764526,
0.025075111538171768,
-0.013725768774747849,
-0.13455042243003845,
0.1905762404203415,
-0.1121365949511528,
-0.2533895969390869,
-0.10729300230741501,
-0.05747877061367035,
-0.024531112983822823,
0.04158759117126465,
0.057223379611968994,
-0.0308547206223011,
-0.0418841689825058,
-0.11494798213243484,
0.06264163553714752,
-0.06550485640764236,
-0.03139103949069977,
-0.009883701801300049,
-0.052929915487766266,
-0.019591933116316795,
-0.1275160312652588,
-0.013634683564305305,
-0.03138386458158493,
-0.07549427449703217,
0.0078379912301898,
-0.03527221828699112,
0.027195435017347336,
0.1376730501651764,
0.034020889550447464,
-0.018498878926038742,
-0.01632634736597538,
0.19387942552566528,
0.00902559980750084,
0.06049659848213196,
0.11033304035663605,
-0.02681528404355049,
0.05483973026275635,
0.04309762269258499,
0.025650635361671448,
-0.048511892557144165,
0.013082891702651978,
-0.012641625478863716,
-0.1211385726928711,
-0.1725330948829651,
-0.07088519632816315,
-0.002629302442073822,
0.007307265419512987,
0.01949993334710598,
0.036932412534952164,
0.020613914355635643,
0.039790257811546326,
-0.02943045273423195,
0.028495116159319878,
-0.017545022070407867,
0.0812186598777771,
0.02777179703116417,
-0.07655075937509537,
0.09246625006198883,
-0.06130627170205116,
0.014981037937104702,
0.10891297459602356,
-0.061175424605607986,
0.19051945209503174,
0.02760935015976429,
0.06701134890317917,
0.10067854821681976,
0.0194082073867321,
0.0537014901638031,
0.08613543212413788,
-0.045125044882297516,
0.004711016081273556,
-0.061915770173072815,
-0.051575783640146255,
-0.0348505824804306,
0.048353035002946854,
0.026516202837228775,
0.01655179262161255,
-0.11768997460603714,
0.022018365561962128,
-0.0026327793020755053,
0.1365288496017456,
0.04769369214773178,
-0.11743418872356415,
-0.12335652112960815,
0.03343752771615982,
-0.04633608087897301,
-0.06285079568624496,
0.0299624502658844,
0.05645953491330147,
-0.15208202600479126,
0.04384996369481087,
-0.00579370092600584,
0.06952932476997375,
-0.09314437210559845,
0.0157930850982666,
-0.05009409040212631,
0.00012645870447158813,
0.002783678937703371,
0.07032407820224762,
-0.1365434229373932,
0.10750382393598557,
0.021213721483945847,
0.047506365925073624,
-0.08112001419067383,
0.015135840512812138,
-0.010805932804942131,
0.10926102101802826,
0.11625067889690399,
0.04494483023881912,
-0.057573165744543076,
-0.016016831621527672,
-0.04771821200847626,
0.019780684262514114,
0.0607205368578434,
-0.07875969260931015,
0.061135828495025635,
0.007770918775349855,
0.007880805991590023,
-0.021934615448117256,
0.01751948893070221,
-0.13275834918022156,
-0.12071584165096283,
0.05986257642507553,
-0.07577608525753021,
-0.10155625641345978,
-0.055667582899332047,
-0.06117067113518715,
-0.047635793685913086,
0.20926739275455475,
-0.11693717539310455,
-0.08762401342391968,
-0.09865561872720718,
-0.008347433060407639,
0.04291382431983948,
-0.06637633591890335,
0.04370197653770447,
-0.039976153522729874,
0.09029784053564072,
-0.04876205325126648,
-0.10925176739692688,
0.03440653532743454,
-0.11278001964092255,
-0.11321330070495605,
-0.04521075636148453,
0.10291731357574463,
0.11304222792387009,
0.03887682035565376,
0.009981264360249043,
0.011702326126396656,
-0.003762548789381981,
-0.11672055721282959,
0.017276441678404808,
0.1303413212299347,
-0.0023696813732385635,
0.07153723388910294,
-0.061468400061130524,
0.02912379428744316,
-0.01686996780335903,
-0.0011479947715997696,
0.13297206163406372,
0.18675187230110168,
-0.06430856883525848,
0.17371118068695068,
0.20028674602508545,
-0.10517878085374832,
-0.19221433997154236,
-0.052220579236745834,
0.000510822981595993,
0.04511835053563118,
0.04835016652941704,
-0.18838822841644287,
0.09304159879684448,
0.03352271392941475,
-0.033288467675447464,
0.02272794395685196,
-0.2340930998325348,
-0.11074121296405792,
0.08954727649688721,
0.05269937589764595,
0.19576719403266907,
-0.0803857296705246,
-0.04010426253080368,
-0.015165923163294792,
-0.03564821183681488,
0.046377819031476974,
-0.032988663762807846,
0.09340490400791168,
0.007793175056576729,
-0.025380469858646393,
0.0031273160129785538,
-0.030927933752536774,
0.09405036270618439,
0.043406128883361816,
0.024696465581655502,
-0.07012628018856049,
-0.006911173462867737,
0.1133718341588974,
-0.037708014249801636,
0.09960535168647766,
0.04173913598060608,
0.07575207948684692,
-0.09442805498838425,
-0.05934754014015198,
-0.07458239793777466,
0.046481989324092865,
-0.04163103550672531,
-0.05486196279525757,
-0.06314357370138168,
0.056290242820978165,
0.03578735142946243,
0.010274814441800117,
0.004555314779281616,
-0.038111038506031036,
0.04260161146521568,
0.0907934382557869,
0.07852382957935333,
-0.0332491360604763,
-0.07560215145349503,
-0.051228106021881104,
-0.047115955501794815,
0.06813200563192368,
-0.09769685566425323,
0.017879176884889603,
0.02565961517393589,
0.014371209777891636,
0.09104965627193451,
0.03430928289890289,
-0.13837182521820068,
0.011588579043745995,
0.030907630920410156,
-0.12136347591876984,
-0.11680324375629425,
-0.01836312748491764,
0.03261086344718933,
-0.03540113940834999,
0.05554715171456337,
0.1461578756570816,
-0.03610800951719284,
-0.030506335198879242,
-0.0485716238617897,
0.036661043763160706,
-0.01909027248620987,
0.04841480404138565,
0.06513931602239609,
0.031275879591703415,
-0.07291056215763092,
0.07623159885406494,
0.039976753294467926,
-0.03885158523917198,
0.04247509688138962,
0.04059644415974617,
-0.09433269500732422,
-0.07808360457420349,
-0.0581354983150959,
0.0963512435555458,
-0.02551007829606533,
-0.04425230622291565,
-0.0026702452450990677,
-0.08007363229990005,
0.07051828503608704,
0.07796360552310944,
0.04847537726163864,
0.0381450429558754,
-0.08768099546432495,
0.015568548813462257,
-0.05452071502804756,
0.035470761358737946,
-0.03125979006290436,
-0.004179900512099266,
-0.054057396948337555,
0.06471691280603409,
0.0653056800365448,
0.10027557611465454,
-0.03445720672607422,
-0.07750570774078369,
-0.08181021362543106,
-0.012833947315812111,
-0.06161744147539139,
-0.030547555536031723,
-0.07543478906154633,
-0.009090358391404152,
0.002751858439296484,
-0.0024752244353294373,
0.023894544690847397,
0.036814168095588684,
-0.042941365391016006,
-0.018259864300489426,
-0.036750148981809616,
0.03731011599302292,
-0.06236884742975235,
0.0073856087401509285,
0.016455698758363724,
-0.03547046706080437,
0.09186746925115585,
0.03846758231520653,
-0.013144864700734615,
0.04234481230378151,
-0.027409009635448456,
0.037774886935949326,
-0.020576445385813713,
-0.00005240924656391144,
-0.023767421022057533,
-0.10878834128379822,
-0.004829995334148407,
0.0018230658024549484,
-0.022784171625971794,
0.007601318880915642,
0.06017104536294937,
-0.07325061410665512,
0.0894734337925911,
0.0465242862701416,
-0.032096777111291885,
-0.07309377938508987,
0.04037705436348915,
-0.0188787542283535,
0.02962557226419449,
0.06896587461233139,
-0.032903313636779785,
0.05324980244040489,
-0.09838841110467911,
-0.028074242174625397,
0.005550985224545002,
-0.004083950072526932,
-0.011745525524020195,
-0.056051112711429596,
-0.003803407773375511,
0.006904195062816143,
0.17211325466632843,
-0.021532092243433,
0.03540617600083351,
0.013346916064620018,
0.009061027318239212,
0.04750502482056618,
-0.015890751034021378,
0.07381869852542877,
-0.005817200988531113,
-0.02557583712041378,
-0.013830197043716908,
0.038877326995134354,
0.0057322196662425995,
0.007610298693180084,
0.13870182633399963,
0.048118893057107925,
0.0855303555727005,
0.07373642176389694,
0.01401661615818739,
0.015995783731341362,
-0.1354447603225708,
-0.08376212418079376,
0.004738396033644676,
0.05815117061138153,
-0.018118256703019142,
0.016664989292621613,
0.0917513519525528,
-0.08444143831729889,
0.07270533591508865,
0.05113811045885086,
-0.047282859683036804,
-0.12772530317306519,
-0.19300270080566406,
-0.025128956884145737,
-0.03046528808772564,
-0.010290971025824547,
-0.0902908518910408,
0.01839308813214302,
0.09315825253725052,
0.024621812626719475,
-0.008954374119639397,
0.09597726166248322,
-0.10485024005174637,
-0.033489715307950974,
0.04336347430944443,
-0.027725744992494583,
0.014389150775969028,
0.04305432736873627,
0.023168325424194336,
-0.0025534816086292267,
0.040233030915260315,
0.04092897102236748,
0.04575461149215698,
0.028693879023194313,
0.05160818248987198,
-0.02501080557703972,
-0.0731138214468956,
-0.03335295245051384,
-0.0036128165666013956,
0.05204428732395172,
0.13718204200267792,
0.022206388413906097,
-0.06788012385368347,
0.006030213553458452,
0.11136724054813385,
-0.03105112724006176,
-0.051389340311288834,
-0.10710041970014572,
0.24593213200569153,
0.021366411820054054,
0.0021671836730092764,
-0.005681433714926243,
-0.04628083109855652,
0.00699026882648468,
0.20836609601974487,
0.22082364559173584,
0.005147205200046301,
-0.009843519888818264,
0.009602596051990986,
-0.012012645602226257,
0.03813541680574417,
0.1425943374633789,
0.0021239779889583588,
0.2541012465953827,
-0.0489245280623436,
0.03807856887578964,
-0.04211704060435295,
-0.03927106037735939,
-0.10193690657615662,
0.07178578525781631,
-0.009235359728336334,
0.006633540615439415,
-0.0344117134809494,
0.06966239213943481,
-0.03920396417379379,
-0.17630359530448914,
0.004102607257664204,
-0.0019988310523331165,
-0.05966993793845177,
0.011480589397251606,
-0.002179093658924103,
0.022549772635102272,
0.08296805620193481,
-0.017995662987232208,
-0.005007523111999035,
0.1290806382894516,
0.018591169267892838,
-0.09896808117628098,
-0.058542683720588684,
0.1129702553153038,
0.013254844583570957,
0.13935314118862152,
0.010996381752192974,
0.07770674675703049,
0.08760514110326767,
0.020897958427667618,
-0.09486086666584015,
0.04368079453706741,
-0.020849032327532768,
-0.026620814576745033,
0.0053972708992660046,
0.11009513586759567,
-0.008457787334918976,
0.06149890646338463,
0.025204721838235855,
-0.09348981082439423,
0.06173708662390709,
0.016166873276233673,
-0.036106765270233154,
-0.08202677220106125,
0.08348195999860764,
-0.08963215351104736,
0.1572842001914978,
0.12235531210899353,
-0.012847806327044964,
-0.04498370736837387,
-0.031001802533864975,
0.017753902822732925,
0.001375023741275072,
0.05414607375860214,
-0.02736007422208786,
-0.13422343134880066,
0.02133638970553875,
-0.08041785657405853,
0.026604337617754936,
-0.2498374581336975,
-0.08955283463001251,
0.03063400276005268,
-0.017254024744033813,
-0.015746958553791046,
0.04762452840805054,
0.04562187194824219,
0.028203085064888,
-0.036330364644527435,
0.016254078596830368,
-0.03508201241493225,
0.060081273317337036,
-0.10868111252784729,
-0.09325870871543884
] |
null | null | transformers | # MultiBERTs Seed 1 Checkpoint 700k (uncased)
Seed 1 intermediate checkpoint (700k steps) of the MultiBERTs (pretrained BERT) model, pretrained on the English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
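To make the last point concrete, a minimal sketch of that feature-extraction workflow is shown below. The texts and labels are toy placeholders, and the checkpoint identifier simply follows the naming used later in this card.
```python
import torch
from sklearn.linear_model import LogisticRegression
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("multiberts-seed-1-700k")
model = BertModel.from_pretrained("multiberts-seed-1-700k")
model.eval()

texts = ["a great movie", "a terrible movie"]   # toy labeled sentences
labels = [1, 0]

with torch.no_grad():
    encoded = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    features = model(**encoded).last_hidden_state[:, 0]   # [CLS] representations

classifier = LogisticRegression(max_iter=1000).fit(features.numpy(), labels)
```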
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-700k')
model = BertModel.from_pretrained("multiberts-seed-1-700k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
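For downstream use, the checkpoint would typically be fine-tuned rather than used as-is. The sketch below shows one possible setup for sequence classification; the texts, labels and learning rate are placeholders, and the newly added classification head is randomly initialized.
```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("multiberts-seed-1-700k")
model = BertForSequenceClassification.from_pretrained("multiberts-seed-1-700k", num_labels=2)

batch = tokenizer(["the plot was gripping", "the pacing dragged"], padding=True, return_tensors="pt")
labels = torch.tensor([1, 0])

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
loss = model(**batch, labels=labels).loss   # cross-entropy over the new classification head
loss.backward()
optimizer.step()
```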
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
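As a quick illustration (assuming the tokenizer loaded in the "How to use" snippet above), encoding a hypothetical sentence pair reproduces exactly this layout:
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("multiberts-seed-1-700k")
encoded = tokenizer("Sentence A", "Sentence B")
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
# ['[CLS]', 'sentence', 'a', '[SEP]', 'sentence', 'b', '[SEP]']
```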
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the two
"sentences" together have a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-1-700k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 1 Checkpoint 700k (uncased)
Seed 1 intermediate checkpoint (700k steps) of the MultiBERTs (pretrained BERT) model, pretrained on the English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the two
"sentences" together have a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 1 Checkpoint 700k (uncased)\nSeed 1 intermediate checkpoint 700k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 1 Checkpoint 700k (uncased)\nSeed 1 intermediate checkpoint 700k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 1 Checkpoint 700k (uncased)\nSeed 1 intermediate checkpoint 700k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08511824905872345,
-0.007869621738791466,
-0.0020829688291996717,
0.06735705584287643,
0.0850619524717331,
0.0008047330193221569,
0.11331305652856827,
0.050406601279973984,
-0.028507746756076813,
0.02394271455705166,
0.09625256061553955,
0.028487730771303177,
0.04221317172050476,
0.06752140820026398,
0.09776869416236877,
-0.25850772857666016,
0.049946099519729614,
-0.06392310559749603,
0.059704333543777466,
0.07529597729444504,
0.10057839751243591,
-0.07206808030605316,
0.061645135283470154,
0.03781741112470627,
-0.08178768306970596,
-0.018292319029569626,
-0.016113247722387314,
-0.035143930464982986,
0.09998242557048798,
0.06946714222431183,
0.059932827949523926,
0.001601407304406166,
0.0601150244474411,
-0.08960740268230438,
0.015911554917693138,
0.04629778116941452,
0.0006761481054127216,
0.024153411388397217,
-0.009723886847496033,
0.01728854514658451,
0.10507312417030334,
0.03929046541452408,
0.07678066194057465,
0.03699842467904091,
-0.09651611745357513,
-0.11723818629980087,
-0.07787460088729858,
0.10509265959262848,
0.05442025139927864,
0.04170984774827957,
-0.006197893060743809,
0.07237952947616577,
-0.028418488800525665,
0.07519389688968658,
0.10052241384983063,
-0.2533123791217804,
-0.008467037230730057,
0.07084589451551437,
0.041531309485435486,
0.04587189108133316,
0.01118309423327446,
0.027562441304326057,
0.007901910692453384,
0.04011453688144684,
0.029614008963108063,
-0.02255818620324135,
0.12394252419471741,
-0.046897172927856445,
-0.1508503258228302,
-0.0447595939040184,
0.11991537362337112,
-0.003797858953475952,
-0.12555676698684692,
-0.09765328466892242,
-0.0325615257024765,
0.11262717843055725,
-0.0035217804834246635,
-0.01802678592503071,
-0.0051202913746237755,
0.013021641410887241,
0.024728406220674515,
-0.09410780668258667,
-0.0852658599615097,
-0.029456591233611107,
-0.038086675107479095,
0.12617161870002747,
0.04538071155548096,
0.05260835587978363,
-0.03354734927415848,
0.08739406615495682,
-0.1206141859292984,
-0.04052029177546501,
-0.05368717014789581,
-0.07972340285778046,
-0.01996094174683094,
0.007855855859816074,
-0.025081817060709,
-0.08304765820503235,
-0.059186168015003204,
0.11482483148574829,
0.03951515257358551,
0.03139311447739601,
-0.0036900602281093597,
0.04243885725736618,
0.07451711595058441,
0.09851785749197006,
-0.03708012402057648,
0.049054525792598724,
0.03626285120844841,
-0.02232389524579048,
0.058742180466651917,
-0.04976169019937515,
-0.10012868791818619,
0.0768996849656105,
0.003530510701239109,
0.03719556704163551,
0.027798419818282127,
0.03260136768221855,
-0.011449678801000118,
-0.07157178223133087,
0.16662803292274475,
-0.07523172348737717,
-0.008043989539146423,
-0.018639501184225082,
0.011298606172204018,
0.04381995648145676,
0.0313526876270771,
-0.007499330677092075,
-0.04626905173063278,
-0.007047434337437153,
-0.057807378470897675,
-0.025093629956245422,
-0.055852197110652924,
-0.11619674414396286,
-0.0011340738274157047,
-0.03788618743419647,
-0.03335325047373772,
-0.1408994197845459,
-0.21921305358409882,
-0.019338561221957207,
0.06372992694377899,
-0.0028399559669196606,
-0.010905616916716099,
0.024061210453510284,
0.018396513536572456,
-0.020896632224321365,
0.011543861590325832,
-0.04603450000286102,
-0.0006447248160839081,
-0.004829375073313713,
-0.032644227147102356,
0.05701246112585068,
-0.04086627438664436,
0.02214753068983555,
-0.07062951475381851,
0.024182692170143127,
-0.21237939596176147,
0.09211640059947968,
-0.03605925664305687,
0.0027915406972169876,
-0.03886882960796356,
-0.04287552461028099,
0.004382133483886719,
0.04378468170762062,
-0.007342107594013214,
0.11650853604078293,
-0.13483798503875732,
-0.048589251935482025,
0.1808699518442154,
-0.15978604555130005,
-0.0022051893174648285,
0.10292573273181915,
-0.04779621213674545,
0.05808441713452339,
0.13121408224105835,
0.10193584114313126,
0.08303023874759674,
-0.07220806181430817,
0.011849050410091877,
0.06434287875890732,
-0.0698917955160141,
0.05367911607027054,
0.08713844418525696,
-0.025712255388498306,
-0.14137545228004456,
0.029453784227371216,
-0.0684373676776886,
-0.00850057601928711,
-0.02918153628706932,
-0.021682102233171463,
0.0035991407930850983,
-0.038014005869627,
0.023344736546278,
0.005767302121967077,
0.01924763061106205,
-0.039019204676151276,
-0.08090989291667938,
0.02364773117005825,
0.07576262205839157,
-0.06859355419874191,
0.043362557888031006,
-0.06896626949310303,
0.0597003772854805,
-0.07845757156610489,
-0.003281699027866125,
-0.16637872159481049,
-0.022632373496890068,
0.04633389413356781,
-0.04678738862276077,
0.04977494850754738,
0.08808455616235733,
0.001139696454629302,
0.12204834818840027,
-0.03997229039669037,
0.003683671820908785,
-0.0057769399136304855,
-0.009875893592834473,
-0.05270368605852127,
-0.11997254192829132,
-0.08292292058467865,
-0.06849323213100433,
0.10102236270904541,
-0.07374885678291321,
0.028790712356567383,
-0.07316316664218903,
-0.023861125111579895,
-0.009107591584324837,
-0.06033366918563843,
-0.0048386091366410255,
0.01050239335745573,
-0.02871546894311905,
-0.04756493493914604,
0.04668561369180679,
0.05042840540409088,
-0.06362221390008926,
0.07350359112024307,
-0.10255148261785507,
-0.06068090349435806,
0.05313067138195038,
0.010555803775787354,
-0.08385170996189117,
0.0900290310382843,
-0.01959303207695484,
-0.014680313877761364,
-0.06190314143896103,
-0.04554222524166107,
0.19044576585292816,
-0.02461365982890129,
0.10162754356861115,
-0.09146180003881454,
0.0040175095200538635,
0.031056750565767288,
-0.047555193305015564,
-0.01627493090927601,
0.05958527699112892,
0.04959562048316002,
-0.19385284185409546,
0.01501045748591423,
0.050625547766685486,
0.07789281010627747,
0.10907517373561859,
0.026190301403403282,
-0.02286630868911743,
-0.04684445261955261,
-0.01116498839110136,
0.005356638226658106,
0.056998368352651596,
-0.029532499611377716,
-0.006757047493010759,
0.030969567596912384,
0.05997716635465622,
0.017274703830480576,
-0.08201093971729279,
0.03669881448149681,
0.06789488345384598,
-0.017536047846078873,
-0.0411079004406929,
-0.025858882814645767,
-0.06002978980541229,
0.061808861792087555,
0.050962552428245544,
0.03771429508924484,
0.026137353852391243,
-0.013017474673688412,
-0.13435766100883484,
0.19003774225711823,
-0.11202684789896011,
-0.254503458738327,
-0.10885445028543472,
-0.05444968491792679,
-0.026459941640496254,
0.04168388247489929,
0.05797751992940903,
-0.02807893045246601,
-0.04155059531331062,
-0.11659085005521774,
0.0627441331744194,
-0.06672830879688263,
-0.02979457378387451,
-0.009455244988203049,
-0.053326137363910675,
-0.018201228231191635,
-0.12820211052894592,
-0.013050969690084457,
-0.02991435118019581,
-0.07550527155399323,
0.006326792761683464,
-0.03527272492647171,
0.028153786435723305,
0.1372717022895813,
0.033673591911792755,
-0.01846691593527794,
-0.01747822016477585,
0.19761167466640472,
0.010086731985211372,
0.061837226152420044,
0.10917732119560242,
-0.02550630085170269,
0.05449315533041954,
0.0442829355597496,
0.025638775900006294,
-0.04880879074335098,
0.013181646354496479,
-0.016721872612833977,
-0.12258085608482361,
-0.16839459538459778,
-0.0724707692861557,
-0.002680787816643715,
0.009728895500302315,
0.020100709050893784,
0.037565939128398895,
0.025878457352519035,
0.04079708456993103,
-0.029531387612223625,
0.028379928320646286,
-0.015742886811494827,
0.08164107799530029,
0.028276555240154266,
-0.07560796290636063,
0.09321817010641098,
-0.060551371425390244,
0.017039012163877487,
0.10786621272563934,
-0.06225626915693283,
0.19178049266338348,
0.02646501176059246,
0.06140037253499031,
0.100354865193367,
0.023626722395420074,
0.05509543791413307,
0.08736458420753479,
-0.046276044100522995,
0.0037756022065877914,
-0.05987715348601341,
-0.05054246634244919,
-0.035774245858192444,
0.04784829914569855,
0.030118029564619064,
0.01849505677819252,
-0.11883489042520523,
0.019463669508695602,
-0.0029161921702325344,
0.134776771068573,
0.04569247364997864,
-0.11693760752677917,
-0.12232405692338943,
0.033731404691934586,
-0.046849437057971954,
-0.06427217274904251,
0.03011731430888176,
0.05560480058193207,
-0.15291228890419006,
0.04351701959967613,
-0.004175678826868534,
0.06994599103927612,
-0.09188514947891235,
0.015521906316280365,
-0.048499640077352524,
-0.001874322071671486,
0.002713114721700549,
0.07010317593812943,
-0.1306440830230713,
0.1099075973033905,
0.02061125636100769,
0.049205128103494644,
-0.08164633810520172,
0.01383854728192091,
-0.013872839510440826,
0.11120261251926422,
0.1152319386601448,
0.04501311480998993,
-0.052304383367300034,
-0.02189601957798004,
-0.047834862023591995,
0.017909538000822067,
0.0591491162776947,
-0.07615040987730026,
0.05856388807296753,
0.009117594920098782,
0.0077484603971242905,
-0.022721318528056145,
0.02259007841348648,
-0.1346862018108368,
-0.11975611746311188,
0.05926334857940674,
-0.07381287962198257,
-0.0972517654299736,
-0.05674779415130615,
-0.06041192263364792,
-0.04796144366264343,
0.20829422771930695,
-0.11182888597249985,
-0.08735883235931396,
-0.09770232439041138,
-0.007705749943852425,
0.04703794792294502,
-0.06564050167798996,
0.04638150706887245,
-0.0388406477868557,
0.08554431796073914,
-0.04699133336544037,
-0.10713435709476471,
0.03416047617793083,
-0.11317063122987747,
-0.11411452293395996,
-0.044755276292562485,
0.10383804142475128,
0.11159961670637131,
0.03780776262283325,
0.012925615534186363,
0.011018261313438416,
-0.0009036287665367126,
-0.1188429445028305,
0.015324162319302559,
0.12837953865528107,
-0.006593668833374977,
0.07033558934926987,
-0.057710982859134674,
0.031177543103694916,
-0.016352929174900055,
-0.003367014229297638,
0.13133955001831055,
0.19006496667861938,
-0.0653088390827179,
0.17312809824943542,
0.20204532146453857,
-0.10540837794542313,
-0.1896032840013504,
-0.054297611117362976,
0.00022412557154893875,
0.04481954500079155,
0.046423424035310745,
-0.18220502138137817,
0.09243960678577423,
0.03723948076367378,
-0.034268833696842194,
0.022333186119794846,
-0.23599828779697418,
-0.11301636695861816,
0.08926713466644287,
0.05273104086518288,
0.19675403833389282,
-0.080447718501091,
-0.03948435187339783,
-0.013793839141726494,
-0.03568384796380997,
0.04356313496828079,
-0.029042573645710945,
0.09305766969919205,
0.00696033239364624,
-0.03445643186569214,
0.0032304441556334496,
-0.03137119486927986,
0.09723269939422607,
0.04195636138319969,
0.024183131754398346,
-0.06990090012550354,
-0.0032751578837633133,
0.11378291994333267,
-0.03679859638214111,
0.09903143346309662,
0.04660207778215408,
0.07631712406873703,
-0.09676071256399155,
-0.06024882197380066,
-0.07444117963314056,
0.04365848749876022,
-0.04185311496257782,
-0.05529718101024628,
-0.0639960840344429,
0.0568874254822731,
0.03716497868299484,
0.008869016543030739,
0.004152446985244751,
-0.037376921623945236,
0.04415727034211159,
0.09603768587112427,
0.07634412497282028,
-0.038441043347120285,
-0.07353568077087402,
-0.05043688043951988,
-0.04887310415506363,
0.06788811087608337,
-0.09044139087200165,
0.01962268352508545,
0.025984061881899834,
0.01243941206485033,
0.08981220424175262,
0.03394365310668945,
-0.135773167014122,
0.01240064762532711,
0.03251928091049194,
-0.12012237310409546,
-0.11832985281944275,
-0.01975768804550171,
0.03196153789758682,
-0.03579363971948624,
0.055240485817193985,
0.14410634338855743,
-0.038180872797966,
-0.030448105186223984,
-0.04901416599750519,
0.03655664995312691,
-0.02084885537624359,
0.050602398812770844,
0.06446473300457001,
0.032060328871011734,
-0.07182528078556061,
0.07508465647697449,
0.04239102080464363,
-0.03948810696601868,
0.039551958441734314,
0.0405033603310585,
-0.09401170164346695,
-0.07783988118171692,
-0.061371058225631714,
0.08992636203765869,
-0.024552932009100914,
-0.043076708912849426,
0.0009327325969934464,
-0.0809113010764122,
0.06848958134651184,
0.07563083618879318,
0.04896549507975578,
0.03801613673567772,
-0.08759378641843796,
0.01566031575202942,
-0.05456878989934921,
0.03490464389324188,
-0.028098000213503838,
-0.004079001024365425,
-0.05544425547122955,
0.06292586028575897,
0.06429710239171982,
0.09927312284708023,
-0.03506755083799362,
-0.07884225249290466,
-0.08261490613222122,
-0.012211148627102375,
-0.061105310916900635,
-0.03170689195394516,
-0.07654343545436859,
-0.008252359926700592,
0.0031608545687049627,
-0.0034757163375616074,
0.02428320422768593,
0.035829946398735046,
-0.042092494666576385,
-0.01848442107439041,
-0.03637911379337311,
0.036428481340408325,
-0.05941525846719742,
0.006440295837819576,
0.015980269759893417,
-0.03419513255357742,
0.09170721471309662,
0.03625814616680145,
-0.012424105778336525,
0.04045028239488602,
-0.022041186690330505,
0.04002956673502922,
-0.020282059907913208,
0.00040633371099829674,
-0.023767057806253433,
-0.10776056349277496,
-0.004436904564499855,
0.004161186516284943,
-0.02453039586544037,
0.009442856535315514,
0.05909501016139984,
-0.07506641000509262,
0.08490563184022903,
0.0449686124920845,
-0.032855115830898285,
-0.07196427881717682,
0.03945566341280937,
-0.01831676810979843,
0.02923966385424137,
0.06796661019325256,
-0.0344570130109787,
0.05278916656970978,
-0.09975586831569672,
-0.027788665145635605,
0.0046368250623345375,
-0.004079695791006088,
-0.00780809111893177,
-0.05434650182723999,
-0.0032783905044198036,
0.008744045160710812,
0.17198562622070312,
-0.022841893136501312,
0.035578660666942596,
0.013748283497989178,
0.005755030550062656,
0.04836872220039368,
-0.015757355839014053,
0.07818163931369781,
-0.005522524937987328,
-0.026101533323526382,
-0.012938993982970715,
0.038746707141399384,
0.006451256573200226,
0.009437713772058487,
0.13626030087471008,
0.04652315750718117,
0.08584975451231003,
0.07342571020126343,
0.01109369844198227,
0.015980415046215057,
-0.1332283318042755,
-0.08998970687389374,
0.005519116297364235,
0.05862172320485115,
-0.018987027928233147,
0.009078649803996086,
0.09091619402170181,
-0.08527053892612457,
0.07069753110408783,
0.05156976729631424,
-0.04918015003204346,
-0.1262955367565155,
-0.19533491134643555,
-0.025033555924892426,
-0.033838286995887756,
-0.009946445934474468,
-0.0901934951543808,
0.01837405562400818,
0.08725294470787048,
0.02488444559276104,
-0.009126093238592148,
0.09599357843399048,
-0.10385644435882568,
-0.03054620698094368,
0.04370106756687164,
-0.027039047330617905,
0.015809474512934685,
0.04414220154285431,
0.022695597261190414,
-0.005336573347449303,
0.03954535350203514,
0.03958970308303833,
0.04559079930186272,
0.025482356548309326,
0.05127689987421036,
-0.023287970572710037,
-0.07340069860219955,
-0.033662840723991394,
-0.004939549136906862,
0.05447942018508911,
0.13899950683116913,
0.02200256660580635,
-0.06763854622840881,
0.005313129164278507,
0.11253663152456284,
-0.030045635998249054,
-0.05279169976711273,
-0.10663408041000366,
0.24634766578674316,
0.023183733224868774,
0.002014353172853589,
-0.004461492877453566,
-0.04543668031692505,
0.005904329940676689,
0.2100074291229248,
0.22178849577903748,
0.0059135123156011105,
-0.00971687026321888,
0.009029818698763847,
-0.013174857944250107,
0.03787704184651375,
0.1458090841770172,
0.0018145795911550522,
0.2517646849155426,
-0.04841161519289017,
0.04175739735364914,
-0.04310561716556549,
-0.0395936444401741,
-0.10267803072929382,
0.07144808769226074,
-0.009145863354206085,
0.008133606053888798,
-0.035602495074272156,
0.07102496922016144,
-0.04096122831106186,
-0.1694587618112564,
0.0037908824160695076,
-0.0008451733738183975,
-0.05968533828854561,
0.011606373824179173,
-0.0020391298457980156,
0.021651526913046837,
0.0819111242890358,
-0.017662618309259415,
-0.00334229227155447,
0.12666228413581848,
0.01815078966319561,
-0.09597991406917572,
-0.06245933473110199,
0.113351970911026,
0.01857893541455269,
0.14153099060058594,
0.010968159884214401,
0.07820980250835419,
0.08747968077659607,
0.020861368626356125,
-0.09770597517490387,
0.043947163969278336,
-0.021518997848033905,
-0.025373127311468124,
0.0065177432261407375,
0.1103331446647644,
-0.006618362385779619,
0.061797089874744415,
0.023936687037348747,
-0.0945250615477562,
0.059958599507808685,
0.012675337493419647,
-0.03236386179924011,
-0.08042138069868088,
0.0825946033000946,
-0.08821772038936615,
0.1580718457698822,
0.1201624944806099,
-0.015508444048464298,
-0.04576599597930908,
-0.02820221334695816,
0.01838061213493347,
0.0011358256451785564,
0.056331656873226166,
-0.02705613523721695,
-0.13686496019363403,
0.020270919427275658,
-0.08389219641685486,
0.02488497458398342,
-0.2436164915561676,
-0.09217125177383423,
0.02986804023385048,
-0.018449952825903893,
-0.01569550111889839,
0.05065879970788956,
0.045263420790433884,
0.026265200227499008,
-0.03557942807674408,
0.015279490500688553,
-0.03727690875530243,
0.05917074903845787,
-0.1095532774925232,
-0.09401712566614151
] |
null | null | transformers | # MultiBERTs Seed 1 Checkpoint 800k (uncased)
Seed 1 intermediate checkpoint 800k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
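As a concrete illustration of the NSP objective described above, the sketch below scores whether one sentence plausibly follows another. It is only an illustrative example, not officially documented usage of this checkpoint: the repository id is taken from this card's metadata and the sentences are made up.

```python
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

checkpoint = "MultiBertGunjanPatrick/multiberts-seed-1-800k"
tokenizer = BertTokenizer.from_pretrained(checkpoint)
model = BertForNextSentencePrediction.from_pretrained(checkpoint)

sentence_a = "The cat sat on the mat."
sentence_b = "It soon fell asleep in the sun."

inputs = tokenizer(sentence_a, sentence_b, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, 2)

# Index 0 = "sentence B follows sentence A", index 1 = "sentence B is random".
print(torch.softmax(logits, dim=-1))
```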
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
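For example, a sequence-classification head can be attached to this checkpoint as a starting point for fine-tuning. This is a minimal sketch rather than a full training recipe; the number of labels and the input sentence are arbitrary placeholders.

```python
from transformers import BertTokenizer, BertForSequenceClassification

checkpoint = "MultiBertGunjanPatrick/multiberts-seed-1-800k"
tokenizer = BertTokenizer.from_pretrained(checkpoint)

# The classification head on top of the pooled output is freshly initialized
# and must be fine-tuned on labeled data before its predictions are meaningful.
model = BertForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

inputs = tokenizer("This checkpoint is an intermediate one.", return_tensors="pt")
print(model(**inputs).logits.shape)  # torch.Size([1, 2])
```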
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# Full repository id of this intermediate checkpoint on the Hugging Face Hub.
checkpoint = "MultiBertGunjanPatrick/multiberts-seed-1-800k"

tokenizer = BertTokenizer.from_pretrained(checkpoint)
model = BertModel.from_pretrained(checkpoint)

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)  # output.last_hidden_state holds the token features
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
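The referenced snippet uses the `fill-mask` pipeline; adapted to this checkpoint, a quick probe could look as follows (the prompts are only illustrative):

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="MultiBertGunjanPatrick/multiberts-seed-1-800k")

# Compare the completions the checkpoint proposes for gendered prompts.
print(unmasker("The man worked as a [MASK]."))
print(unmasker("The woman worked as a [MASK]."))
```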
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a short code sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
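The sketch below illustrates this 80/10/10 replacement scheme. It is not the original preprocessing code, and the token ids and vocabulary size in the example are made-up placeholders.

```python
import random

def mask_tokens(token_ids, mask_token_id, vocab_size, special_ids, mask_prob=0.15):
    """Corrupt a token sequence BERT-style: select ~15% of the non-special tokens;
    of those, 80% become [MASK], 10% become a random token and 10% are kept
    unchanged. Returns the corrupted inputs and the MLM labels, where -100 marks
    positions the model is not asked to predict."""
    inputs, labels = list(token_ids), []
    for i, tok in enumerate(token_ids):
        if tok in special_ids or random.random() >= mask_prob:
            labels.append(-100)                       # not selected for prediction
            continue
        labels.append(tok)                            # model must recover this token
        r = random.random()
        if r < 0.8:
            inputs[i] = mask_token_id                 # 80%: replace with [MASK]
        elif r < 0.9:
            inputs[i] = random.randrange(vocab_size)  # 10%: random replacement
        # remaining 10%: leave the token as is
    return inputs, labels

# Toy example with made-up ids (101 = [CLS], 102 = [SEP], 103 = [MASK]).
ids = [101, 2023, 2003, 1037, 7099, 6251, 102]
print(mask_tokens(ids, mask_token_id=103, vocab_size=30000, special_ids={101, 102}))
```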
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
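A comparable optimizer and schedule can be set up in PyTorch roughly as follows. This is a sketch, not the original TPU training setup: `AdamW` stands in for Adam with weight decay, and the tiny `torch.nn.Linear` module is only a placeholder for the real pretraining model.

```python
import torch
from transformers import get_linear_schedule_with_warmup

# Placeholder module; any nn.Module with parameters works for this illustration.
model = torch.nn.Linear(768, 768)

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1e-4,                # peak learning rate
    betas=(0.9, 0.999),     # beta_1 and beta_2 from the card
    weight_decay=0.01,
)

# 10,000 warmup steps, then linear decay over the remaining steps up to 2M.
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10_000,
    num_training_steps=2_000_000,
)

for step in range(3):       # training loop omitted; only the schedule is exercised
    optimizer.step()
    scheduler.step()
```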
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-1-800k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 1 Checkpoint 800k (uncased)
Seed 1 intermediate checkpoint 800k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 1 Checkpoint 800k (uncased)\nSeed 1 intermediate checkpoint 800k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 1 Checkpoint 800k (uncased)\nSeed 1 intermediate checkpoint 800k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 1 Checkpoint 800k (uncased)\nSeed 1 intermediate checkpoint 800k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.0847812294960022,
-0.0008882139809429646,
-0.0021011848002672195,
0.06864675879478455,
0.08746711909770966,
0.0018222685903310776,
0.11584769934415817,
0.04962972551584244,
-0.025331368669867516,
0.024584811180830002,
0.09405611455440521,
0.025444474071264267,
0.041745852679014206,
0.06332291662693024,
0.09831050783395767,
-0.25655597448349,
0.0483892485499382,
-0.06342379748821259,
0.06251360476016998,
0.07375730574131012,
0.0997651070356369,
-0.07136581093072891,
0.06226189434528351,
0.03810558468103409,
-0.08315864950418472,
-0.01734388992190361,
-0.017050402238965034,
-0.035037919878959656,
0.10069790482521057,
0.06986300647258759,
0.0623200386762619,
0.00023208372294902802,
0.05979979410767555,
-0.09085740149021149,
0.015388626605272293,
0.0451633557677269,
0.0011782185174524784,
0.02386755309998989,
-0.007955648005008698,
0.015041783452033997,
0.10730604082345963,
0.04188161715865135,
0.07796435058116913,
0.035973262041807175,
-0.09461291879415512,
-0.11132722347974777,
-0.07730083167552948,
0.10197991132736206,
0.05400966852903366,
0.043309200555086136,
-0.005594487302005291,
0.07196572422981262,
-0.02725503407418728,
0.07549433410167694,
0.10148373246192932,
-0.2527647316455841,
-0.006762671750038862,
0.06578793376684189,
0.03857141733169556,
0.04549488425254822,
0.012684117071330547,
0.025240575894713402,
0.007457062602043152,
0.04286109656095505,
0.02556012198328972,
-0.022674143314361572,
0.11691798269748688,
-0.047449223697185516,
-0.1496383547782898,
-0.04447205737233162,
0.11747778207063675,
-0.00637795589864254,
-0.12497483938932419,
-0.09702007472515106,
-0.03177300840616226,
0.10958224534988403,
-0.002439640462398529,
-0.01584351435303688,
-0.003529561683535576,
0.012676774524152279,
0.026511576026678085,
-0.09133733808994293,
-0.08534614741802216,
-0.027641993016004562,
-0.039744310081005096,
0.12488023191690445,
0.04616713523864746,
0.0529225692152977,
-0.033572521060705185,
0.0876670554280281,
-0.11942645162343979,
-0.04143873602151871,
-0.05230329558253288,
-0.0771196112036705,
-0.01817428506910801,
0.009404502809047699,
-0.026436273008584976,
-0.0838991105556488,
-0.05808169022202492,
0.11521613597869873,
0.035528842359781265,
0.03101113811135292,
-0.004140716977417469,
0.0427539125084877,
0.07498741149902344,
0.09890333563089371,
-0.03734277933835983,
0.05017341673374176,
0.03538158908486366,
-0.022899683564901352,
0.05767640098929405,
-0.0504336953163147,
-0.10151532292366028,
0.07938645780086517,
0.004782409407198429,
0.03652185574173927,
0.027422301471233368,
0.030823202803730965,
-0.012053790502250195,
-0.07097306847572327,
0.16192258894443512,
-0.0774780735373497,
-0.008820327930152416,
-0.019206929951906204,
0.012254541739821434,
0.04605881869792938,
0.03495661914348602,
-0.005860458593815565,
-0.04748816788196564,
-0.006572628393769264,
-0.056431785225868225,
-0.025219609960913658,
-0.05602829158306122,
-0.11526021361351013,
-0.00006297929212450981,
-0.04028817266225815,
-0.0317499078810215,
-0.14233797788619995,
-0.21492058038711548,
-0.02028452418744564,
0.06492387503385544,
-0.001454413402825594,
-0.01069182064384222,
0.022870026528835297,
0.0187275018543005,
-0.021102868020534515,
0.011896622367203236,
-0.04752509295940399,
-0.000784645788371563,
-0.005663273856043816,
-0.03375760465860367,
0.05526211857795715,
-0.04192046821117401,
0.024233950302004814,
-0.07008135318756104,
0.022496063262224197,
-0.21053966879844666,
0.09145768731832504,
-0.03418203443288803,
0.004076668992638588,
-0.036736320704221725,
-0.040204521268606186,
0.008398633450269699,
0.046485550701618195,
-0.007678576745092869,
0.11672015488147736,
-0.13187608122825623,
-0.04897507280111313,
0.17859700322151184,
-0.15976928174495697,
-0.0011652559041976929,
0.10209586471319199,
-0.04833138361573219,
0.055425893515348434,
0.1320628523826599,
0.10115718096494675,
0.0836021825671196,
-0.06963883340358734,
0.010717591270804405,
0.06347457319498062,
-0.06793002784252167,
0.05352376401424408,
0.0877518355846405,
-0.026378754526376724,
-0.13918903470039368,
0.030481968075037003,
-0.06896531581878662,
-0.007545881904661655,
-0.028189674019813538,
-0.020701939240098,
0.004243556410074234,
-0.038498278707265854,
0.01981402561068535,
0.005495096556842327,
0.018294764682650566,
-0.03817662224173546,
-0.08209915459156036,
0.024578336626291275,
0.07558070123195648,
-0.06829352676868439,
0.042964011430740356,
-0.06984832137823105,
0.057603560388088226,
-0.07853410392999649,
-0.005092091858386993,
-0.1683688908815384,
-0.02246929332613945,
0.044745609164237976,
-0.04649516940116882,
0.05026029422879219,
0.09009287506341934,
0.00014739367179572582,
0.12214569747447968,
-0.03733997792005539,
0.004563137888908386,
-0.005447188392281532,
-0.010604253970086575,
-0.053277261555194855,
-0.11933988332748413,
-0.08137217164039612,
-0.06944350898265839,
0.10167431086301804,
-0.07296549528837204,
0.028482891619205475,
-0.07132206857204437,
-0.024127736687660217,
-0.00915689580142498,
-0.060270775109529495,
-0.005503583699464798,
0.011218800209462643,
-0.030172064900398254,
-0.04787066578865051,
0.04602781683206558,
0.05144677311182022,
-0.06316708028316498,
0.0742022916674614,
-0.10343001782894135,
-0.0601390115916729,
0.053160130977630615,
0.01203113328665495,
-0.08248566091060638,
0.09240461885929108,
-0.019341986626386642,
-0.01480508130043745,
-0.060704492032527924,
-0.044288478791713715,
0.19871196150779724,
-0.023132972419261932,
0.1024782806634903,
-0.08983267098665237,
0.004143823869526386,
0.029909268021583557,
-0.04784613847732544,
-0.016441676765680313,
0.061405520886182785,
0.04954991489648819,
-0.185389906167984,
0.014451786875724792,
0.04822748154401779,
0.07721291482448578,
0.10975641012191772,
0.025774912908673286,
-0.023892579600214958,
-0.04599491134285927,
-0.011839479207992554,
0.005207100883126259,
0.054781723767519,
-0.02892102673649788,
-0.008667254820466042,
0.031986091285943985,
0.05897269770503044,
0.019147083163261414,
-0.08282403647899628,
0.03772762790322304,
0.06888560950756073,
-0.015524595975875854,
-0.0409187376499176,
-0.02906108647584915,
-0.059150829911231995,
0.06243760511279106,
0.049976661801338196,
0.037333495914936066,
0.024948565289378166,
-0.013906472362577915,
-0.13575319945812225,
0.18970872461795807,
-0.11232896894216537,
-0.25450873374938965,
-0.11138112843036652,
-0.056276969611644745,
-0.025377457961440086,
0.04220496490597725,
0.057904623448848724,
-0.028106827288866043,
-0.04191744327545166,
-0.11692529916763306,
0.061712026596069336,
-0.06530370563268661,
-0.03023112192749977,
-0.007991194725036621,
-0.05299464613199234,
-0.018762923777103424,
-0.12804758548736572,
-0.014185259118676186,
-0.029068410396575928,
-0.07800830155611038,
0.007883397862315178,
-0.03517467901110649,
0.028876250609755516,
0.1396520435810089,
0.033149927854537964,
-0.019437771290540695,
-0.015720468014478683,
0.19081827998161316,
0.010015882551670074,
0.060967784374952316,
0.10929115116596222,
-0.02885027602314949,
0.054363951086997986,
0.04389648884534836,
0.02544770948588848,
-0.04764978587627411,
0.013012111186981201,
-0.016323309391736984,
-0.1224425658583641,
-0.17049667239189148,
-0.07212890684604645,
-0.0009417661931365728,
0.007770479191094637,
0.02098139189183712,
0.037725575268268585,
0.021332578733563423,
0.04052351415157318,
-0.028093796223402023,
0.02903454378247261,
-0.018017757683992386,
0.07997091114521027,
0.02287553995847702,
-0.07537660002708435,
0.09235595911741257,
-0.061462223529815674,
0.01618986763060093,
0.10938476771116257,
-0.0630347728729248,
0.19137859344482422,
0.02552076056599617,
0.059613488614559174,
0.10078054666519165,
0.019996408373117447,
0.054879747331142426,
0.08768991380929947,
-0.046290699392557144,
0.004980900324881077,
-0.06009865552186966,
-0.05034437030553818,
-0.03729873523116112,
0.04927298426628113,
0.02784290909767151,
0.01916354149580002,
-0.12002988904714584,
0.023041503503918648,
-0.003567315638065338,
0.13740190863609314,
0.04435627907514572,
-0.11835185438394547,
-0.12116195261478424,
0.034124959260225296,
-0.04740157350897789,
-0.06484947353601456,
0.031578194350004196,
0.056322745978832245,
-0.15258216857910156,
0.04302166402339935,
-0.004055860452353954,
0.06951167434453964,
-0.09568093717098236,
0.015379653312265873,
-0.045940279960632324,
-0.00244134571403265,
0.004030723124742508,
0.07030996680259705,
-0.13501855731010437,
0.10683006048202515,
0.020789112895727158,
0.04820787534117699,
-0.08242383599281311,
0.015729855746030807,
-0.012483363039791584,
0.1095215454697609,
0.11476649343967438,
0.045198410749435425,
-0.054888688027858734,
-0.021500181406736374,
-0.048190638422966,
0.020447973161935806,
0.05691592022776604,
-0.07789871841669083,
0.06058257445693016,
0.009578071534633636,
0.0091390460729599,
-0.02314700186252594,
0.016621489077806473,
-0.13033531606197357,
-0.12292776256799698,
0.0602383017539978,
-0.0730481818318367,
-0.10078892111778259,
-0.057279881089925766,
-0.06051700934767723,
-0.046562328934669495,
0.20540721714496613,
-0.11615487933158875,
-0.08975733816623688,
-0.09769463539123535,
-0.00976698100566864,
0.044670093804597855,
-0.06682021915912628,
0.0460575632750988,
-0.038675326853990555,
0.08765484392642975,
-0.04865068197250366,
-0.1072942465543747,
0.03329784423112869,
-0.11286836862564087,
-0.11285821348428726,
-0.0435965359210968,
0.10331125557422638,
0.1127207800745964,
0.03743615746498108,
0.010660773143172264,
0.011735668405890465,
-0.0024295859038829803,
-0.11812573671340942,
0.01451072283089161,
0.13170009851455688,
-0.006400221958756447,
0.0716666653752327,
-0.06131678819656372,
0.029538355767726898,
-0.016735298559069633,
-0.003093905746936798,
0.13163036108016968,
0.18737134337425232,
-0.06419984251260757,
0.1722865104675293,
0.2019677758216858,
-0.10666820406913757,
-0.18979507684707642,
-0.05416637286543846,
0.00031747855246067047,
0.045264024287462234,
0.0494663268327713,
-0.1847219169139862,
0.09425843507051468,
0.03548426553606987,
-0.03283695876598358,
0.026656854897737503,
-0.23294439911842346,
-0.11157578229904175,
0.0926634669303894,
0.05172140151262283,
0.19462645053863525,
-0.08151581883430481,
-0.03954552859067917,
-0.01583203859627247,
-0.040651142597198486,
0.04091992229223251,
-0.03043329529464245,
0.09212814271450043,
0.007499556988477707,
-0.028160259127616882,
0.004171198233962059,
-0.03111855313181877,
0.09529977291822433,
0.044412143528461456,
0.02416321262717247,
-0.07039261609315872,
-0.009567832574248314,
0.10875620692968369,
-0.037447940558195114,
0.09997937828302383,
0.04430568963289261,
0.07509423047304153,
-0.10023722797632217,
-0.05956399813294411,
-0.0726737231016159,
0.0443815253674984,
-0.041350774466991425,
-0.05436131730675697,
-0.062396131455898285,
0.05535714328289032,
0.03727506846189499,
0.009275480173528194,
0.0025114770978689194,
-0.03791367635130882,
0.04111023247241974,
0.0954785943031311,
0.07429171353578568,
-0.040609732270240784,
-0.07181034982204437,
-0.05232137814164162,
-0.04832272231578827,
0.06666006147861481,
-0.09236615896224976,
0.01963355764746666,
0.027080319821834564,
0.012542515993118286,
0.08966346085071564,
0.03369738534092903,
-0.13659629225730896,
0.012216173112392426,
0.03248560428619385,
-0.12069854885339737,
-0.11296331137418747,
-0.021116342395544052,
0.03641664236783981,
-0.035470783710479736,
0.053838010877370834,
0.14620231091976166,
-0.03823256492614746,
-0.030721645802259445,
-0.04918143153190613,
0.037287402898073196,
-0.020079568028450012,
0.04849059879779816,
0.0650443285703659,
0.03150837495923042,
-0.07231054455041885,
0.0782487541437149,
0.041540294885635376,
-0.0377906896173954,
0.04104384779930115,
0.038995303213596344,
-0.09294389188289642,
-0.07781878858804703,
-0.05960537865757942,
0.0956372618675232,
-0.025435471907258034,
-0.04470032826066017,
-0.0020478181540966034,
-0.08153536915779114,
0.0680866688489914,
0.07162105292081833,
0.04895411804318428,
0.038033902645111084,
-0.08758553862571716,
0.015448709018528461,
-0.05512464419007301,
0.03510424122214317,
-0.028325030580163002,
-0.0039099883288145065,
-0.05445871502161026,
0.0653071179986,
0.06486047059297562,
0.09952814131975174,
-0.0341559499502182,
-0.07784169912338257,
-0.08140643686056137,
-0.012059502303600311,
-0.05957011878490448,
-0.03060048073530197,
-0.07458223402500153,
-0.00786617211997509,
0.003921846393495798,
-0.002832990139722824,
0.023199966177344322,
0.03634854033589363,
-0.04215378686785698,
-0.019030652940273285,
-0.037810515612363815,
0.03677559643983841,
-0.05856494605541229,
0.005632344633340836,
0.015881776809692383,
-0.03450022265315056,
0.09104113280773163,
0.03514218330383301,
-0.013489453122019768,
0.041193604469299316,
-0.023776184767484665,
0.03820597007870674,
-0.02089681103825569,
0.0002515162341296673,
-0.024465791881084442,
-0.10673294961452484,
-0.0050213877111673355,
0.0032671988010406494,
-0.02400798164308071,
0.00811094231903553,
0.0593777671456337,
-0.07461319863796234,
0.08461843430995941,
0.046047091484069824,
-0.03218214586377144,
-0.07255632430315018,
0.0390864759683609,
-0.016327746212482452,
0.029003864154219627,
0.06871704757213593,
-0.03456217423081398,
0.054410554468631744,
-0.09843891859054565,
-0.02786855772137642,
0.005379918962717056,
-0.0018966980278491974,
-0.007899239659309387,
-0.055462796241045,
-0.004372527822852135,
0.007948623038828373,
0.17387491464614868,
-0.021994274109601974,
0.03518170863389969,
0.013723309151828289,
0.009477559477090836,
0.046872373670339584,
-0.01519758440554142,
0.0744408369064331,
-0.006729884073138237,
-0.025355849415063858,
-0.01551990956068039,
0.037226613610982895,
0.006833575665950775,
0.005982350558042526,
0.13625410199165344,
0.04880279302597046,
0.08854952454566956,
0.07439035177230835,
0.014198733493685722,
0.01724657416343689,
-0.1317039132118225,
-0.09023863077163696,
0.006520620547235012,
0.058793146163225174,
-0.017896758392453194,
0.012722820043563843,
0.08938722312450409,
-0.08495773375034332,
0.07124689221382141,
0.05163823813199997,
-0.04687831923365593,
-0.1257772147655487,
-0.19025033712387085,
-0.02514541707932949,
-0.030654026195406914,
-0.010919736698269844,
-0.08936293423175812,
0.018521491438150406,
0.09436964988708496,
0.023700058460235596,
-0.010075689293444157,
0.09574195742607117,
-0.10875245928764343,
-0.03090386837720871,
0.045424431562423706,
-0.027018355205655098,
0.016827130690217018,
0.04421504586935043,
0.022684816271066666,
-0.003958204761147499,
0.04349131137132645,
0.040444791316986084,
0.045209139585494995,
0.026903260499238968,
0.05189325660467148,
-0.023649556562304497,
-0.07406827807426453,
-0.0331658236682415,
-0.006534930318593979,
0.0534687340259552,
0.13403776288032532,
0.021389245986938477,
-0.06772083789110184,
0.006084251217544079,
0.11164523661136627,
-0.03146753087639809,
-0.05181384086608887,
-0.10642065107822418,
0.24047668278217316,
0.02276218682527542,
0.0022892579436302185,
-0.005820683669298887,
-0.04654909297823906,
0.006261022761464119,
0.20958314836025238,
0.2193596065044403,
0.003196819219738245,
-0.010602617636322975,
0.008293764665722847,
-0.011946087703108788,
0.03825126588344574,
0.14479613304138184,
0.0022016000002622604,
0.2517014443874359,
-0.04723360389471054,
0.03935614228248596,
-0.042965784668922424,
-0.03878253698348999,
-0.10237544775009155,
0.07256841659545898,
-0.010375453159213066,
0.008479556068778038,
-0.035052504390478134,
0.06907372921705246,
-0.03913288190960884,
-0.1717163324356079,
0.003160979598760605,
-0.0026684151962399483,
-0.06103089824318886,
0.012296153232455254,
0.0012450339272618294,
0.022101853042840958,
0.08261959254741669,
-0.017804376780986786,
-0.004295557737350464,
0.12963718175888062,
0.018066564574837685,
-0.09817997366189957,
-0.059972748160362244,
0.1108604371547699,
0.019640540704131126,
0.14280450344085693,
0.012229369953274727,
0.07676365226507187,
0.08654102683067322,
0.022168995812535286,
-0.09648311138153076,
0.04386013001203537,
-0.022581148892641068,
-0.027199160307645798,
0.007183384615927935,
0.10811639577150345,
-0.007787809241563082,
0.05796161666512489,
0.02571333572268486,
-0.09341169893741608,
0.06006374955177307,
0.012407086789608002,
-0.0346146859228611,
-0.08128457516431808,
0.0811212882399559,
-0.08904007077217102,
0.15742036700248718,
0.12080460786819458,
-0.01406095176935196,
-0.04622898995876312,
-0.027576571330428123,
0.01873176544904709,
-0.00015861866995692253,
0.05898524820804596,
-0.025482768192887306,
-0.13540881872177124,
0.02112693153321743,
-0.08442394435405731,
0.025885436683893204,
-0.2481241524219513,
-0.08996826410293579,
0.02967441827058792,
-0.019399097189307213,
-0.016966387629508972,
0.05081799253821373,
0.04712872579693794,
0.027394065633416176,
-0.036462847143411636,
0.016303278505802155,
-0.03526033088564873,
0.0599229522049427,
-0.10931776463985443,
-0.0945638120174408
] |
null | null | transformers | # MultiBERTs Seed 1 Checkpoint 80k (uncased)
Seed 1 intermediate checkpoint 80k of the MultiBERTs (pretrained BERT) model, trained on the English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-80k')
model = BertModel.from_pretrained("multiberts-seed-1-80k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
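Since the raw model keeps its masked language modeling objective, it can also be queried directly for masked-token predictions. The snippet below is a minimal, illustrative sketch that reuses the checkpoint identifier from the example above; it assumes the checkpoint ships the pretraining MLM head (if it does not, `transformers` will initialize that head randomly and the predictions will be meaningless).
```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

# Reuses the identifier from the snippet above; assumed to resolve to this checkpoint.
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-80k')
mlm_model = BertForMaskedLM.from_pretrained('multiberts-seed-1-80k')

text = "The capital of France is [MASK]."
inputs = tokenizer(text, return_tensors='pt')

with torch.no_grad():
    logits = mlm_model(**inputs).logits  # (batch, sequence, vocab)

# Locate the [MASK] position and take the highest-scoring vocabulary entry.
mask_positions = (inputs['input_ids'] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_ids = logits[0, mask_positions].argmax(dim=-1)
print(tokenizer.decode(predicted_ids))
```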
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the two
"sentences" have a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (an illustrative sketch appears after the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
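The following is a small Python sketch of that 80/10/10 rule, written for illustration only; it is not the original TensorFlow preprocessing code, and the `vocab` argument is a stand-in for the real WordPiece vocabulary.
```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15):
    """Illustrative BERT-style masking: returns (corrupted tokens, prediction targets)."""
    masked, labels = [], []
    for tok in tokens:
        if random.random() < mask_prob:
            labels.append(tok)                       # the model must recover this token
            r = random.random()
            if r < 0.8:
                masked.append("[MASK]")              # 80%: replace with [MASK]
            elif r < 0.9:
                masked.append(random.choice(vocab))  # 10%: replace with a random token
            else:
                masked.append(tok)                   # 10%: keep the original token
        else:
            masked.append(tok)
            labels.append(None)                      # not a prediction target
    return masked, labels

print(mask_tokens("the quick brown fox jumps over the lazy dog".split(), vocab=["cat", "run", "blue"]))
```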
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
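A rough PyTorch equivalent of that optimization setup is sketched below. The original training used TensorFlow on TPUs, so this is only an approximation (AdamW stands in for Adam with decoupled weight decay), and the checkpoint identifier is reused from the usage example above.
```python
import torch
from transformers import BertModel, get_linear_schedule_with_warmup

model = BertModel.from_pretrained("multiberts-seed-1-80k")

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1e-4,                 # learning rate from the card
    betas=(0.9, 0.999),      # beta_1 and beta_2
    weight_decay=0.01,       # weight decay
)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10_000,       # linear warmup
    num_training_steps=2_000_000,  # two million steps total, linear decay afterwards
)
# During training, call optimizer.step() and then scheduler.step() once per batch.
```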
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-1-80k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 1 Checkpoint 80k (uncased)
Seed 1 intermediate checkpoint 80k of the MultiBERTs (pretrained BERT) model, trained on the English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the two
"sentences" have a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 1 Checkpoint 80k (uncased)\nSeed 1 intermediate checkpoint 80k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 1 Checkpoint 80k (uncased)\nSeed 1 intermediate checkpoint 80k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 1 Checkpoint 80k (uncased)\nSeed 1 intermediate checkpoint 80k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.0853988379240036,
0.00259381509386003,
-0.002137178787961602,
0.06650464981794357,
0.0861753597855568,
0.0012023188173770905,
0.11793022602796555,
0.0501616932451725,
-0.02419293113052845,
0.026251893490552902,
0.09369403123855591,
0.03051111474633217,
0.04117172211408615,
0.06612085551023483,
0.09635580331087112,
-0.2566516697406769,
0.0492977499961853,
-0.06261268258094788,
0.05912847816944122,
0.07422515749931335,
0.09990997612476349,
-0.07156134396791458,
0.06182141974568367,
0.037722714245319366,
-0.08337727189064026,
-0.017488136887550354,
-0.0159803107380867,
-0.03492089360952377,
0.10088206827640533,
0.06885405629873276,
0.0625171810388565,
0.00013056769967079163,
0.05855710059404373,
-0.09036962687969208,
0.015935009345412254,
0.04665390029549599,
0.0005582384765148163,
0.024907458573579788,
-0.006990974768996239,
0.015091773122549057,
0.11021550744771957,
0.03887047991156578,
0.07766640186309814,
0.03644983842968941,
-0.09444183856248856,
-0.11622432619333267,
-0.07716898620128632,
0.10353714227676392,
0.05266188830137253,
0.04182121530175209,
-0.006753723137080669,
0.07205149531364441,
-0.026556866243481636,
0.07527487725019455,
0.10566605627536774,
-0.25661659240722656,
-0.006142222322523594,
0.06519179046154022,
0.04188896715641022,
0.04491022974252701,
0.01259684469550848,
0.026983486488461494,
0.007040318101644516,
0.041748303920030594,
0.027677379548549652,
-0.023217525333166122,
0.12235020101070404,
-0.04594690725207329,
-0.15066638588905334,
-0.044793080538511276,
0.12130589038133621,
-0.005356162786483765,
-0.12559489905834198,
-0.09703756868839264,
-0.031936511397361755,
0.11839421093463898,
-0.0035749785602092743,
-0.01656467467546463,
-0.0038681160658597946,
0.013057742267847061,
0.02687828615307808,
-0.09079299867153168,
-0.08615292608737946,
-0.027507759630680084,
-0.036621782928705215,
0.12639360129833221,
0.04697887599468231,
0.05152430012822151,
-0.034214623272418976,
0.08632288873195648,
-0.11955364793539047,
-0.041165515780448914,
-0.05080651491880417,
-0.07811659574508667,
-0.019253108650445938,
0.009505124762654305,
-0.027643617242574692,
-0.08402684330940247,
-0.05863144248723984,
0.1144469752907753,
0.03623658046126366,
0.03090319037437439,
-0.0050926413387060165,
0.04314033314585686,
0.07569772005081177,
0.09741821885108948,
-0.037009190768003464,
0.04909630864858627,
0.03479569032788277,
-0.024022113531827927,
0.05745650455355644,
-0.05056927353143692,
-0.10286504775285721,
0.07959716022014618,
0.005201677791774273,
0.03663890063762665,
0.026028277352452278,
0.03194817155599594,
-0.011815744452178478,
-0.07192301005125046,
0.16118767857551575,
-0.07703982293605804,
-0.008232725784182549,
-0.017367113381624222,
0.01190083846449852,
0.04620358347892761,
0.03475315496325493,
-0.00714199710637331,
-0.04725142568349838,
-0.007286342792212963,
-0.05720176547765732,
-0.02408699505031109,
-0.05543091148138046,
-0.11778101325035095,
0.00012692157179117203,
-0.04392680525779724,
-0.03236234933137894,
-0.14090825617313385,
-0.2139880657196045,
-0.02081604115664959,
0.06436436623334885,
-0.00011621974408626556,
-0.010504133068025112,
0.022693924605846405,
0.01862250827252865,
-0.020898090675473213,
0.011649508029222488,
-0.04853278398513794,
-0.0013859150931239128,
-0.005568778142333031,
-0.03505050763487816,
0.05551423877477646,
-0.03998228535056114,
0.02462356723845005,
-0.06903675198554993,
0.02268005535006523,
-0.21233196556568146,
0.09037596732378006,
-0.03441485017538071,
0.0015817638486623764,
-0.03780301660299301,
-0.04026975482702255,
0.00995638221502304,
0.04659276828169823,
-0.008371938951313496,
0.11627022176980972,
-0.13770556449890137,
-0.050903819501399994,
0.18408823013305664,
-0.16035163402557373,
-0.0021140463650226593,
0.10197021067142487,
-0.04892238229513168,
0.05702287703752518,
0.13287271559238434,
0.10093729197978973,
0.07823759317398071,
-0.07221801578998566,
0.00999058224260807,
0.06393003463745117,
-0.0675477609038353,
0.05568031966686249,
0.08937966823577881,
-0.025936782360076904,
-0.13827860355377197,
0.029188070446252823,
-0.07044512033462524,
-0.008049937896430492,
-0.027637094259262085,
-0.019423171877861023,
0.003794778138399124,
-0.036610424518585205,
0.02032293751835823,
0.006283075548708439,
0.01785312406718731,
-0.039020199328660965,
-0.08417639136314392,
0.03015170991420746,
0.07626721262931824,
-0.07059672474861145,
0.04236578941345215,
-0.07105037569999695,
0.059213392436504364,
-0.07971658557653427,
-0.004140554927289486,
-0.16960537433624268,
-0.022011781111359596,
0.044775284826755524,
-0.047616176307201385,
0.05192572623491287,
0.09398695081472397,
0.0006281591486185789,
0.12204506993293762,
-0.03649311512708664,
0.0045206909999251366,
-0.00602143257856369,
-0.010347590781748295,
-0.05359503626823425,
-0.12427276372909546,
-0.08106659352779388,
-0.06940378248691559,
0.10285595804452896,
-0.0745965838432312,
0.028813308104872704,
-0.07437960803508759,
-0.023154545575380325,
-0.009446481242775917,
-0.060589078813791275,
-0.0048469677567481995,
0.010132139548659325,
-0.031288258731365204,
-0.04811939597129822,
0.04708925634622574,
0.05146663263440132,
-0.06293991953134537,
0.07643982768058777,
-0.10672227293252945,
-0.06121475622057915,
0.05351008474826813,
0.009692691266536713,
-0.08205680549144745,
0.09194706380367279,
-0.01879698410630226,
-0.01450857613235712,
-0.05821382254362106,
-0.04262972250580788,
0.19533425569534302,
-0.023700665682554245,
0.10291621088981628,
-0.09067017585039139,
0.0026460462249815464,
0.02972867712378502,
-0.04774043709039688,
-0.016606906428933144,
0.0614330880343914,
0.051504794508218765,
-0.19151228666305542,
0.015428081154823303,
0.05169195681810379,
0.07733602821826935,
0.10869874805212021,
0.02577223815023899,
-0.025667767971754074,
-0.046772029250860214,
-0.011096595786511898,
0.0051159970462322235,
0.0556512214243412,
-0.03435807302594185,
-0.009876462630927563,
0.03267769515514374,
0.058046162128448486,
0.018287690356373787,
-0.08146744966506958,
0.037421878427267075,
0.0678229033946991,
-0.016164569184184074,
-0.041738010942935944,
-0.028773095458745956,
-0.059091515839099884,
0.06331246346235275,
0.05088698863983154,
0.03613940253853798,
0.024446308612823486,
-0.013686688616871834,
-0.13583190739154816,
0.18966221809387207,
-0.11332818865776062,
-0.25681716203689575,
-0.10832332819700241,
-0.0559031218290329,
-0.02431578002870083,
0.04308069497346878,
0.05902869254350662,
-0.028319496661424637,
-0.04260648041963577,
-0.11579598486423492,
0.06098189204931259,
-0.06591261178255081,
-0.031020930036902428,
-0.01040591113269329,
-0.05305953323841095,
-0.01954752951860428,
-0.12853094935417175,
-0.01403023861348629,
-0.029911702498793602,
-0.0733565092086792,
0.007359053939580917,
-0.03570743650197983,
0.02734450250864029,
0.1411428153514862,
0.03397493064403534,
-0.018938114866614342,
-0.015691593289375305,
0.19043508172035217,
0.01052807830274105,
0.059925295412540436,
0.11047601699829102,
-0.030718952417373657,
0.05403297394514084,
0.040842749178409576,
0.02468194253742695,
-0.04824317991733551,
0.011994905769824982,
-0.014745679683983326,
-0.12046835571527481,
-0.17158320546150208,
-0.07139421999454498,
-0.0008263527415692806,
0.00874270312488079,
0.020481884479522705,
0.03724091500043869,
0.0215170755982399,
0.04031093418598175,
-0.029868189245462418,
0.028199316933751106,
-0.016953308135271072,
0.08119981735944748,
0.023258265107870102,
-0.0771772488951683,
0.09279827028512955,
-0.06079104542732239,
0.01696603000164032,
0.10965649783611298,
-0.06418006122112274,
0.1889483630657196,
0.027480093762278557,
0.06307201087474823,
0.1019195169210434,
0.018575888127088547,
0.054205551743507385,
0.08674857020378113,
-0.04618466645479202,
0.004995626397430897,
-0.06202227622270584,
-0.05169982835650444,
-0.036738321185112,
0.05052853003144264,
0.026915814727544785,
0.016640007495880127,
-0.120486781001091,
0.02180289849638939,
-0.003538016928359866,
0.13573886454105377,
0.04533212631940842,
-0.11842690408229828,
-0.12252415716648102,
0.03486774116754532,
-0.04679103568196297,
-0.06583736836910248,
0.030010852962732315,
0.05789428949356079,
-0.15159091353416443,
0.042181774973869324,
-0.004740639589726925,
0.06985987722873688,
-0.09389294683933258,
0.015304163098335266,
-0.04983420670032501,
-0.0016958629712462425,
0.003524505766108632,
0.0711972564458847,
-0.1347864270210266,
0.10467557609081268,
0.020821601152420044,
0.048112086951732635,
-0.081641785800457,
0.016062987968325615,
-0.011781524866819382,
0.11119429022073746,
0.11455534398555756,
0.04421047866344452,
-0.05609092861413956,
-0.01938784308731556,
-0.046971168369054794,
0.020859189331531525,
0.05886358767747879,
-0.07922270148992538,
0.06131614372134209,
0.008546602912247181,
0.008602364920079708,
-0.021008815616369247,
0.015948478132486343,
-0.13138516247272491,
-0.12168517708778381,
0.06042007356882095,
-0.07486101984977722,
-0.09844561666250229,
-0.057170189917087555,
-0.061891574412584305,
-0.051540836691856384,
0.21280202269554138,
-0.1145787239074707,
-0.08885272592306137,
-0.09789693355560303,
-0.010025914758443832,
0.044475290924310684,
-0.0662548616528511,
0.045051299035549164,
-0.03935565799474716,
0.09028217941522598,
-0.048587679862976074,
-0.10751290619373322,
0.03513574227690697,
-0.11337773501873016,
-0.1125476062297821,
-0.04475254565477371,
0.10398119688034058,
0.11371912062168121,
0.03746261075139046,
0.01176003459841013,
0.011019967496395111,
-0.0031552258878946304,
-0.11659909784793854,
0.015292830765247345,
0.13237442076206207,
-0.0008394140750169754,
0.07109613716602325,
-0.062101610004901886,
0.030122701078653336,
-0.01657041534781456,
-0.0014298297464847565,
0.13189123570919037,
0.18772277235984802,
-0.06440086662769318,
0.17278997600078583,
0.20034651458263397,
-0.1069185882806778,
-0.18788406252861023,
-0.05334729701280594,
-0.0006318641826510429,
0.04400007426738739,
0.04864927753806114,
-0.18690139055252075,
0.09350091218948364,
0.03466052561998367,
-0.03295030817389488,
0.026250652968883514,
-0.23270200192928314,
-0.11070246994495392,
0.09409529715776443,
0.05304949730634689,
0.19413790106773376,
-0.08236852288246155,
-0.03883391618728638,
-0.016203055158257484,
-0.03644940257072449,
0.045001301914453506,
-0.031093530356884003,
0.09222143143415451,
0.008211739361286163,
-0.03010411188006401,
0.004460962489247322,
-0.030960842967033386,
0.09573066979646683,
0.04335518553853035,
0.024473346769809723,
-0.06995975226163864,
-0.006740255281329155,
0.10919302701950073,
-0.038109056651592255,
0.10078945010900497,
0.04146253690123558,
0.074986532330513,
-0.0990128368139267,
-0.06020693853497505,
-0.07253845036029816,
0.046236611902713776,
-0.04127874970436096,
-0.05479374900460243,
-0.06196357309818268,
0.05489584058523178,
0.035162776708602905,
0.010486319661140442,
0.0069314055144786835,
-0.03892997279763222,
0.04347573220729828,
0.08861242979764938,
0.07732196152210236,
-0.03792338818311691,
-0.0750064104795456,
-0.05305233225226402,
-0.04791484400629997,
0.06777116656303406,
-0.09502086788415909,
0.019803494215011597,
0.025839215144515038,
0.012785098515450954,
0.09235398471355438,
0.03360636532306671,
-0.13802412152290344,
0.011378995142877102,
0.031242769211530685,
-0.12233049422502518,
-0.11312317848205566,
-0.019607074558734894,
0.032706618309020996,
-0.03464445471763611,
0.05485966056585312,
0.14671964943408966,
-0.03631921112537384,
-0.031237374991178513,
-0.04955252259969711,
0.03702368214726448,
-0.018924497067928314,
0.047450415790081024,
0.06489631533622742,
0.030945369973778725,
-0.0722901001572609,
0.07683056592941284,
0.040545351803302765,
-0.040959134697914124,
0.043461356312036514,
0.0384143702685833,
-0.09304304420948029,
-0.07837219536304474,
-0.0581655278801918,
0.10013885051012039,
-0.024697793647646904,
-0.04678353667259216,
-0.002790221944451332,
-0.08028754591941833,
0.06987500935792923,
0.07698892802000046,
0.04922321438789368,
0.03894444927573204,
-0.08757585287094116,
0.015856580808758736,
-0.05486185476183891,
0.03667235001921654,
-0.02838858775794506,
-0.00460931658744812,
-0.055927254259586334,
0.06906764209270477,
0.06583410501480103,
0.09987538307905197,
-0.03436167538166046,
-0.0775766372680664,
-0.08206287771463394,
-0.013694847002625465,
-0.06638398766517639,
-0.029523147270083427,
-0.07557797431945801,
-0.008528513833880424,
0.003841775469481945,
-0.0022191982716321945,
0.02563527412712574,
0.03603656589984894,
-0.04253123328089714,
-0.018044225871562958,
-0.037100013345479965,
0.03835832327604294,
-0.061349134892225266,
0.006569933146238327,
0.014576026238501072,
-0.035660192370414734,
0.09234043955802917,
0.038193635642528534,
-0.012935824692249298,
0.04184585064649582,
-0.025779858231544495,
0.040421172976493835,
-0.019178520888090134,
-0.0004436178132891655,
-0.024780703708529472,
-0.10860191285610199,
-0.004003644920885563,
0.0026612114161252975,
-0.024678602814674377,
0.006953704170882702,
0.059875935316085815,
-0.07338052988052368,
0.08628438413143158,
0.04812014102935791,
-0.032757487148046494,
-0.07210087776184082,
0.039220888167619705,
-0.018792886286973953,
0.02998015657067299,
0.06864738464355469,
-0.03297622501850128,
0.053724925965070724,
-0.09855850040912628,
-0.028085049241781235,
0.00508588831871748,
-0.002320699393749237,
-0.007328895851969719,
-0.054962776601314545,
-0.004612522199749947,
0.006941997446119785,
0.17122742533683777,
-0.02067834511399269,
0.03605958819389343,
0.012833433225750923,
0.007044008933007717,
0.04989549145102501,
-0.015615098178386688,
0.07366339862346649,
-0.006775577552616596,
-0.02478053607046604,
-0.015684451907873154,
0.03836894407868385,
0.007143942639231682,
0.004654262214899063,
0.13637275993824005,
0.04836858808994293,
0.08505713194608688,
0.07484327256679535,
0.013428084552288055,
0.014914177358150482,
-0.13808725774288177,
-0.08760382980108261,
0.007334819994866848,
0.05878191441297531,
-0.017976218834519386,
0.019169390201568604,
0.09137475490570068,
-0.08477550745010376,
0.07043133676052094,
0.051852818578481674,
-0.046792734414339066,
-0.1267368644475937,
-0.19248004257678986,
-0.02550017274916172,
-0.029464902356266975,
-0.011078547686338425,
-0.09005404263734818,
0.01878056675195694,
0.09213428944349289,
0.023778392001986504,
-0.010669965296983719,
0.0925777405500412,
-0.10633490234613419,
-0.032869819551706314,
0.043497294187545776,
-0.02652500942349434,
0.01448659785091877,
0.0434383824467659,
0.023104045540094376,
-0.0027946699410676956,
0.04256824031472206,
0.0407983660697937,
0.045048244297504425,
0.02943459339439869,
0.05241002142429352,
-0.025046834722161293,
-0.07396894693374634,
-0.03364982455968857,
-0.005024997517466545,
0.05420548468828201,
0.13576340675354004,
0.022502610459923744,
-0.06852317601442337,
0.006295221392065287,
0.11146213859319687,
-0.031239226460456848,
-0.05105450749397278,
-0.10597054660320282,
0.2454342544078827,
0.020336966961622238,
0.0017930457834154367,
-0.0052824318408966064,
-0.04553845152258873,
0.007156454026699066,
0.20804136991500854,
0.2167835831642151,
0.0057160453870892525,
-0.01012556254863739,
0.008941587060689926,
-0.01238633319735527,
0.036286111921072006,
0.14407461881637573,
0.0029686279594898224,
0.25535252690315247,
-0.04741080105304718,
0.03892196714878082,
-0.04335058480501175,
-0.03763289004564285,
-0.10297426581382751,
0.07090011984109879,
-0.010029401630163193,
0.007612714543938637,
-0.032534047961235046,
0.06974773108959198,
-0.038268305361270905,
-0.1759534627199173,
0.0015965215861797333,
0.0015269522555172443,
-0.05991022288799286,
0.0126320356503129,
0.001300782896578312,
0.02111346274614334,
0.08252913504838943,
-0.01783478632569313,
-0.006534141954034567,
0.13218796253204346,
0.01813696324825287,
-0.09812960028648376,
-0.058987803757190704,
0.11169694364070892,
0.013771452009677887,
0.14034701883792877,
0.011209137737751007,
0.07853145897388458,
0.08656055480241776,
0.022723421454429626,
-0.09490679949522018,
0.04281489923596382,
-0.02198251709342003,
-0.026001688092947006,
0.007434703875333071,
0.10873959213495255,
-0.008136613294482231,
0.06108306720852852,
0.02685668133199215,
-0.09217158704996109,
0.06005682796239853,
0.01035277545452118,
-0.03755177929997444,
-0.08101072907447815,
0.08231684565544128,
-0.09003201127052307,
0.15710726380348206,
0.12042106688022614,
-0.013939534313976765,
-0.04667899012565613,
-0.029969671741127968,
0.019471630454063416,
0.001425242517143488,
0.057303834706544876,
-0.025952205061912537,
-0.13454243540763855,
0.02188289724290371,
-0.08158618211746216,
0.026370029896497726,
-0.2486179769039154,
-0.08931475877761841,
0.03047313168644905,
-0.017919812351465225,
-0.01644911617040634,
0.04744241386651993,
0.04597552865743637,
0.027485953643918037,
-0.03741885721683502,
0.0162298996001482,
-0.034925274550914764,
0.060684580355882645,
-0.10941704362630844,
-0.09401725977659225
] |
null | null | transformers | # MultiBERTs Seed 1 Checkpoint 900k (uncased)
Seed 1 intermediate checkpoint 900k of the MultiBERTs (pretrained BERT) model, trained on the English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# Load the tokenizer and this intermediate checkpoint from the Hugging Face Hub.
tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-1-900k')
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-900k")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')  # tokenize and return PyTorch tensors
output = model(**encoded_input)  # forward pass; returns the hidden states
```
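The returned `output` object exposes the usual `BertModel` outputs. As a small follow-up sketch (not part of the original card), the token-level and pooled features can be read off like this, continuing from the snippet above:
```python
# Continues from the snippet above.
token_features = output.last_hidden_state  # shape: (batch_size, sequence_length, 768)
pooled_features = output.pooler_output     # [CLS]-based pooled vector, shape: (batch_size, 768)
print(token_features.shape, pooled_features.shape)
```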
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (an illustrative sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
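As a rough, self-contained illustration of this rule (not code from the MultiBERTs release), the sketch below applies the 15% selection and the 80/10/10 replacement scheme to a plain list of token ids; the example `[MASK]` id and vocabulary size are assumptions matching the standard uncased BERT vocabulary.
```python
import random

def mask_tokens(token_ids, mask_token_id, vocab_size, mlm_probability=0.15):
    """Toy sketch of the masking rule described above (not the original pipeline)."""
    input_ids, labels = list(token_ids), [-100] * len(token_ids)  # -100 = ignored by the MLM loss
    for i, token_id in enumerate(token_ids):
        if random.random() < mlm_probability:  # select ~15% of the tokens
            labels[i] = token_id               # the model must predict the original token
            roll = random.random()
            if roll < 0.8:                     # 80%: replace with [MASK]
                input_ids[i] = mask_token_id
            elif roll < 0.9:                   # 10%: replace with a random token
                input_ids[i] = random.randrange(vocab_size)
            # remaining 10%: leave the token unchanged
    return input_ids, labels

# Example call; 103 is the [MASK] id in the standard uncased BERT vocabulary (assumed here).
print(mask_tokens([2023, 2003, 1037, 7099], mask_token_id=103, vocab_size=30000))
```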
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
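As an illustrative sketch of the schedule described above (a reimplementation under stated assumptions, not the original training code), the learning rate rises linearly to its peak over the first 10,000 steps and then decays linearly to zero by step two million:
```python
def learning_rate(step, peak_lr=1e-4, warmup_steps=10_000, total_steps=2_000_000):
    """Linear warmup to peak_lr, then linear decay to zero (illustrative only)."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

# A few sample points across pretraining.
for step in (0, 5_000, 10_000, 1_000_000, 2_000_000):
    print(step, learning_rate(step))
```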
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-1-900k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 1 Checkpoint 900k (uncased)
Seed 1 intermediate checkpoint 900k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 1 Checkpoint 900k (uncased)\nSeed 1 intermediate checkpoint 900k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 1 Checkpoint 900k (uncased)\nSeed 1 intermediate checkpoint 900k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-1 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 1 Checkpoint 900k (uncased)\nSeed 1 intermediate checkpoint 900k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-1. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08387908339500427,
-0.0044234031811356544,
-0.002058414276689291,
0.06651415675878525,
0.08597644418478012,
0.0011356016620993614,
0.11299850791692734,
0.04999098926782608,
-0.02698155678808689,
0.02498527057468891,
0.09560050070285797,
0.02903691679239273,
0.042609307914972305,
0.06531314551830292,
0.09775763005018234,
-0.2568194270133972,
0.05142853409051895,
-0.06299933046102524,
0.0605349987745285,
0.07463854551315308,
0.10028819739818573,
-0.07174622267484665,
0.06161922216415405,
0.03837073594331741,
-0.08341308683156967,
-0.017761051654815674,
-0.017240067943930626,
-0.03467675298452377,
0.09924846142530441,
0.06943783909082413,
0.06282234936952591,
0.0003142599016427994,
0.0593140684068203,
-0.09082464873790741,
0.01583997905254364,
0.045760054141283035,
0.0015441742725670338,
0.02362273633480072,
-0.010168492794036865,
0.014866970479488373,
0.1055159717798233,
0.04021921753883362,
0.07866761088371277,
0.03541075438261032,
-0.09395488351583481,
-0.11270225793123245,
-0.07811618596315384,
0.10815484821796417,
0.054610539227724075,
0.04062383621931076,
-0.00503976084291935,
0.07143925130367279,
-0.029628727585077286,
0.0741862803697586,
0.10302092134952545,
-0.2512424886226654,
-0.006923850160092115,
0.06651744991540909,
0.0418638214468956,
0.04064813256263733,
0.012187554500997066,
0.02512829191982746,
0.006386369466781616,
0.039869941771030426,
0.029071852564811707,
-0.022909250110387802,
0.11714333295822144,
-0.0497857928276062,
-0.1506369560956955,
-0.04409029334783554,
0.11717824637889862,
-0.005081094801425934,
-0.1245468407869339,
-0.0971423014998436,
-0.03193604201078415,
0.11500604450702667,
-0.0033151106908917427,
-0.016490433365106583,
-0.004043185617774725,
0.011803857982158661,
0.027883917093276978,
-0.09400476515293121,
-0.08504965156316757,
-0.029110420495271683,
-0.03877975791692734,
0.1327182799577713,
0.04710596054792404,
0.051787104457616806,
-0.03377580642700195,
0.08870649337768555,
-0.12018425762653351,
-0.04119374230504036,
-0.05354330316185951,
-0.07942545413970947,
-0.01801917515695095,
0.008280550129711628,
-0.025471515953540802,
-0.08391712605953217,
-0.05840306356549263,
0.11386249214410782,
0.04047182947397232,
0.030453108251094818,
0.00030431151390075684,
0.04200602322816849,
0.07570105046033859,
0.09826292842626572,
-0.037649452686309814,
0.051026422530412674,
0.037070274353027344,
-0.020569365471601486,
0.05853620171546936,
-0.05032514035701752,
-0.10138648748397827,
0.07807686924934387,
0.0037363776937127113,
0.03812545910477638,
0.02781229466199875,
0.03241895139217377,
-0.010595805011689663,
-0.07012277841567993,
0.1642584204673767,
-0.07669247686862946,
-0.009448950178921223,
-0.01876291073858738,
0.011419670656323433,
0.04691961407661438,
0.03418786823749542,
-0.008079299703240395,
-0.04732218384742737,
-0.005188393406569958,
-0.05777062848210335,
-0.024815116077661514,
-0.05653860419988632,
-0.11637929826974869,
-0.0003369324840605259,
-0.04225592315196991,
-0.03175687789916992,
-0.14261412620544434,
-0.21751010417938232,
-0.02118399366736412,
0.0636376291513443,
-0.00011936016380786896,
-0.010147953405976295,
0.023152058944106102,
0.01715857721865177,
-0.020643262192606926,
0.010465087369084358,
-0.04771700128912926,
-0.000957234762609005,
-0.005248082801699638,
-0.03559716045856476,
0.05580966919660568,
-0.042409077286720276,
0.023886889219284058,
-0.07080528140068054,
0.023960458114743233,
-0.21271654963493347,
0.09207171946763992,
-0.03520090505480766,
0.004151372238993645,
-0.036931686103343964,
-0.04061621055006981,
0.006665332242846489,
0.04574405774474144,
-0.007417649030685425,
0.11720231175422668,
-0.13680967688560486,
-0.04868064820766449,
0.18068020045757294,
-0.1606162190437317,
-0.0028316453099250793,
0.09960527718067169,
-0.049363140016794205,
0.05926479399204254,
0.13248710334300995,
0.1028607040643692,
0.08665574342012405,
-0.07387887686491013,
0.008604347705841064,
0.06464115530252457,
-0.07113192975521088,
0.05223594605922699,
0.08685092628002167,
-0.02694554254412651,
-0.13790522515773773,
0.02910439297556877,
-0.07273687422275543,
-0.0059365807101130486,
-0.027363184839487076,
-0.020427685230970383,
0.004203122109174728,
-0.03929946944117546,
0.020605191588401794,
0.005065496079623699,
0.018694402649998665,
-0.03767592832446098,
-0.08037805557250977,
0.032554589211940765,
0.0771268755197525,
-0.06969230622053146,
0.044303108006715775,
-0.0682605728507042,
0.062483787536621094,
-0.07851559668779373,
-0.0031674071215093136,
-0.1683693528175354,
-0.022150732576847076,
0.045512862503528595,
-0.054599836468696594,
0.050509318709373474,
0.08652733266353607,
-0.000358029268682003,
0.12057830393314362,
-0.03763876110315323,
0.003548892680555582,
-0.005835467949509621,
-0.008523285388946533,
-0.05325310677289963,
-0.12104223668575287,
-0.08237376064062119,
-0.06899995356798172,
0.09864655137062073,
-0.06856614351272583,
0.029038194566965103,
-0.07333151251077652,
-0.022386616095900536,
-0.00996975228190422,
-0.06010971590876579,
-0.004237383604049683,
0.010797694325447083,
-0.029151687398552895,
-0.0478924959897995,
0.04658457636833191,
0.05121160298585892,
-0.06464564055204391,
0.07460275292396545,
-0.10245797783136368,
-0.06314636766910553,
0.05430179089307785,
0.012548204511404037,
-0.08415720611810684,
0.0933491513133049,
-0.018621331080794334,
-0.013698270544409752,
-0.060172613710165024,
-0.044478241354227066,
0.1953132003545761,
-0.02323601394891739,
0.10146987438201904,
-0.09195902943611145,
0.00270979106426239,
0.029320003464818,
-0.04858659952878952,
-0.01748230680823326,
0.0591818206012249,
0.04906129837036133,
-0.1896769106388092,
0.014493800699710846,
0.05152996629476547,
0.0759890154004097,
0.11067835986614227,
0.025878049433231354,
-0.024459604173898697,
-0.04758688807487488,
-0.012255312874913216,
0.003518041456118226,
0.05616281181573868,
-0.03011532500386238,
-0.009107262827455997,
0.030770668759942055,
0.058834247291088104,
0.018598541617393494,
-0.08222383260726929,
0.037519048899412155,
0.06737609952688217,
-0.015269633382558823,
-0.04230750724673271,
-0.02751176990568638,
-0.059797052294015884,
0.061976537108421326,
0.05330938100814819,
0.038775619119405746,
0.024520419538021088,
-0.014082022942602634,
-0.13418236374855042,
0.19027799367904663,
-0.11204332113265991,
-0.25546395778656006,
-0.10935686528682709,
-0.05330237001180649,
-0.025923503562808037,
0.04159003496170044,
0.05780034512281418,
-0.02814638242125511,
-0.041633449494838715,
-0.11605541408061981,
0.06273488700389862,
-0.06343957781791687,
-0.029641691595315933,
-0.009865064173936844,
-0.052654117345809937,
-0.02041393332183361,
-0.12816086411476135,
-0.013406634330749512,
-0.031745634973049164,
-0.0718936175107956,
0.0037877829745411873,
-0.03393472731113434,
0.02976963296532631,
0.14018070697784424,
0.03309450298547745,
-0.019754059612751007,
-0.016748052090406418,
0.1931314766407013,
0.009910179302096367,
0.06086553633213043,
0.11086098849773407,
-0.029424097388982773,
0.05488251522183418,
0.044171884655952454,
0.025515304878354073,
-0.047266557812690735,
0.011729114688932896,
-0.017174337059259415,
-0.12172353267669678,
-0.16713997721672058,
-0.07124342024326324,
-0.0015057495329529047,
0.0043780189007520676,
0.02016991749405861,
0.03604922816157341,
0.020109446719288826,
0.04127705842256546,
-0.028319383040070534,
0.025476867333054543,
-0.017039082944393158,
0.08203677088022232,
0.024922091513872147,
-0.07607356458902359,
0.0918278843164444,
-0.062292009592056274,
0.016842031851410866,
0.10798001289367676,
-0.061720043420791626,
0.18962335586547852,
0.023929033428430557,
0.05497393012046814,
0.1024305671453476,
0.020471986383199692,
0.05492422357201576,
0.08765167742967606,
-0.046823807060718536,
0.00496553722769022,
-0.05966196954250336,
-0.050427429378032684,
-0.03722481429576874,
0.048536136746406555,
0.03024151921272278,
0.019232571125030518,
-0.12041152268648148,
0.020224425941705704,
-0.00252718199044466,
0.1371433138847351,
0.046309709548950195,
-0.11450459063053131,
-0.1232004389166832,
0.03493707254528999,
-0.04813709855079651,
-0.0649639144539833,
0.03075835108757019,
0.05429534241557121,
-0.15261083841323853,
0.044754646718502045,
-0.0018551601096987724,
0.06886210292577744,
-0.09432802349328995,
0.014092115685343742,
-0.04882962256669998,
-0.0015278495848178864,
0.0035497904755175114,
0.06912830471992493,
-0.13145330548286438,
0.10786024481058121,
0.020833052694797516,
0.05069511756300926,
-0.08085092902183533,
0.01563861221075058,
-0.01213829591870308,
0.1094852089881897,
0.11567448079586029,
0.04509814456105232,
-0.04891110584139824,
-0.020154573023319244,
-0.046976279467344284,
0.018706509843468666,
0.05780072510242462,
-0.07681897282600403,
0.06018421798944473,
0.009770809672772884,
0.008298760280013084,
-0.02308686077594757,
0.01511349156498909,
-0.1311192512512207,
-0.1204405426979065,
0.059631720185279846,
-0.07638613879680634,
-0.09974191337823868,
-0.05818780139088631,
-0.061193738132715225,
-0.048146650195121765,
0.20483314990997314,
-0.11511428654193878,
-0.08960745483636856,
-0.09651903063058853,
-0.01294223964214325,
0.0469437912106514,
-0.06690535694360733,
0.04753226041793823,
-0.04050295054912567,
0.08722886443138123,
-0.04866764694452286,
-0.10812646150588989,
0.03350771218538284,
-0.11320942640304565,
-0.11242452263832092,
-0.043926842510700226,
0.10446973145008087,
0.11141463369131088,
0.037313420325517654,
0.011791408061981201,
0.011155788786709309,
-0.0021686702966690063,
-0.11910712718963623,
0.013463249430060387,
0.13071973621845245,
-0.008094694465398788,
0.07437559217214584,
-0.06025861203670502,
0.024431347846984863,
-0.017483752220869064,
-0.001354983076453209,
0.13115724921226501,
0.18875321745872498,
-0.06486567109823227,
0.17300476133823395,
0.2058035135269165,
-0.10615576803684235,
-0.19132119417190552,
-0.054047320038080215,
-0.00011810660362243652,
0.043560802936553955,
0.04397442936897278,
-0.1817634403705597,
0.09237939864397049,
0.03617570176720619,
-0.033343613147735596,
0.02116556465625763,
-0.23548121750354767,
-0.11374868452548981,
0.09431705623865128,
0.05331896245479584,
0.19292837381362915,
-0.0803445428609848,
-0.03991704061627388,
-0.016998499631881714,
-0.037519246339797974,
0.04502744972705841,
-0.02882716991007328,
0.09243664145469666,
0.006899062544107437,
-0.033231765031814575,
0.003368823789060116,
-0.031910400837659836,
0.09545259177684784,
0.04350660741329193,
0.024738114327192307,
-0.07069092988967896,
-0.007468134164810181,
0.11205488443374634,
-0.03892729431390762,
0.10136664658784866,
0.048002295196056366,
0.07572893798351288,
-0.09602221846580505,
-0.05948032811284065,
-0.0747394859790802,
0.04542633146047592,
-0.04111914709210396,
-0.054387956857681274,
-0.06474374979734421,
0.05654119700193405,
0.03811558708548546,
0.009156945161521435,
0.0032919831573963165,
-0.03831293433904648,
0.042034346610307693,
0.09078266471624374,
0.07582087069749832,
-0.03779130056500435,
-0.07420589029788971,
-0.049615245312452316,
-0.04975122958421707,
0.06753905117511749,
-0.09293501079082489,
0.017891012132167816,
0.025936517864465714,
0.013610421679913998,
0.08919963985681534,
0.03384028747677803,
-0.13641266524791718,
0.012318367138504982,
0.032388631254434586,
-0.12029533088207245,
-0.113989919424057,
-0.018730297684669495,
0.02936364710330963,
-0.03679683431982994,
0.054781705141067505,
0.14359363913536072,
-0.0373305082321167,
-0.03100087121129036,
-0.049669377505779266,
0.037178099155426025,
-0.020858081057667732,
0.04854563623666763,
0.06672145426273346,
0.03091146983206272,
-0.07218261808156967,
0.07764772325754166,
0.04048722982406616,
-0.04006820172071457,
0.04136476293206215,
0.03762852028012276,
-0.09382864832878113,
-0.07710926234722137,
-0.06171814724802971,
0.09465733915567398,
-0.026741696521639824,
-0.044619329273700714,
-0.001054992899298668,
-0.08244186639785767,
0.06835529208183289,
0.08156421035528183,
0.05002244934439659,
0.038481660187244415,
-0.08816367387771606,
0.016349099576473236,
-0.055032528936862946,
0.03542361035943031,
-0.027416618540883064,
-0.004312271252274513,
-0.056092992424964905,
0.06633956730365753,
0.06573960930109024,
0.09989775717258453,
-0.034674353897571564,
-0.07734670490026474,
-0.08299153298139572,
-0.012936959974467754,
-0.06140286475419998,
-0.03176172822713852,
-0.07893919199705124,
-0.006166818551719189,
0.0033466657623648643,
-0.002482777461409569,
0.02363850176334381,
0.035891637206077576,
-0.04185352474451065,
-0.018294421955943108,
-0.036641109734773636,
0.037331029772758484,
-0.059884075075387955,
0.00589115172624588,
0.01600225642323494,
-0.034743182361125946,
0.09171329438686371,
0.03653746843338013,
-0.010991078801453114,
0.04163692891597748,
-0.02224590629339218,
0.03833034634590149,
-0.019862545654177666,
0.0009496430866420269,
-0.024552132934331894,
-0.1080385223031044,
-0.004326086491346359,
0.0032625794410705566,
-0.02362271212041378,
0.009072665125131607,
0.06046140938997269,
-0.07404692471027374,
0.08554884046316147,
0.04719926416873932,
-0.031194694340229034,
-0.07184559106826782,
0.04077978432178497,
-0.01306871697306633,
0.02974769100546837,
0.06945647299289703,
-0.03387053310871124,
0.054024938493967056,
-0.09697961807250977,
-0.02778526395559311,
0.004184074699878693,
-0.0036636292934417725,
-0.007543053478002548,
-0.05460507422685623,
-0.0043111760169267654,
0.008356773294508457,
0.17487576603889465,
-0.02344381809234619,
0.0358450710773468,
0.014307601377367973,
0.010303372517228127,
0.04824860394001007,
-0.013843817636370659,
0.07544146478176117,
-0.006450972519814968,
-0.02667190507054329,
-0.014800210483372211,
0.03877650573849678,
0.0063844118267297745,
0.0036281421780586243,
0.13674545288085938,
0.04757685586810112,
0.08899781107902527,
0.07632908225059509,
0.01463304739445448,
0.017925359308719635,
-0.12970922887325287,
-0.0921982154250145,
0.005854224786162376,
0.057573236525058746,
-0.01773132011294365,
0.012543734163045883,
0.08751341700553894,
-0.08484615385532379,
0.07054127752780914,
0.05290161073207855,
-0.04824807122349739,
-0.12559351325035095,
-0.1906472146511078,
-0.02408454194664955,
-0.02910102717578411,
-0.010279694572091103,
-0.0890660360455513,
0.017523741349577904,
0.08758124709129333,
0.022841354832053185,
-0.010162927210330963,
0.0984487384557724,
-0.10828714072704315,
-0.03162529319524765,
0.046024225652217865,
-0.02701018564403057,
0.014893200248479843,
0.044055551290512085,
0.02336445264518261,
-0.003483036532998085,
0.04145091772079468,
0.03890978917479515,
0.04567527770996094,
0.02618762105703354,
0.051572106778621674,
-0.023378225043416023,
-0.07303208857774734,
-0.032500479370355606,
-0.004623382352292538,
0.056228622794151306,
0.13823427259922028,
0.022516906261444092,
-0.0678286999464035,
0.005341756157577038,
0.11027130484580994,
-0.031310416758060455,
-0.05237692967057228,
-0.10620675981044769,
0.24146682024002075,
0.023274261504411697,
0.0017918006051331758,
-0.005099504254758358,
-0.04658043012022972,
0.004809640347957611,
0.2108178436756134,
0.2189113199710846,
0.005229313392192125,
-0.009473181329667568,
0.008964880369603634,
-0.012083582580089569,
0.037363335490226746,
0.1445631980895996,
0.002736613154411316,
0.25259536504745483,
-0.04684113711118698,
0.04103796184062958,
-0.04254201799631119,
-0.039361800998449326,
-0.1016094982624054,
0.0713234692811966,
-0.0100303515791893,
0.006838037632405758,
-0.03547711670398712,
0.07006987929344177,
-0.041634153574705124,
-0.17436262965202332,
0.0033666109666228294,
-0.0023684855550527573,
-0.06031864881515503,
0.01171877235174179,
-0.0006767697632312775,
0.02179691568017006,
0.08198481798171997,
-0.016623057425022125,
-0.004666242748498917,
0.1300385445356369,
0.018983988091349602,
-0.09750613570213318,
-0.0644451156258583,
0.11260709166526794,
0.01946667954325676,
0.1421753466129303,
0.01160736195743084,
0.07721272855997086,
0.08659963309764862,
0.021894192323088646,
-0.09786117821931839,
0.044667791575193405,
-0.021087544038891792,
-0.02726520411670208,
0.007720609195530415,
0.108727365732193,
-0.0069620246067643166,
0.06012561544775963,
0.025135448202490807,
-0.09470480680465698,
0.06064407154917717,
0.011174097657203674,
-0.03573624789714813,
-0.08295371383428574,
0.08239075541496277,
-0.08977153897285461,
0.1578533947467804,
0.12055136263370514,
-0.01432041171938181,
-0.04669245332479477,
-0.027793025597929955,
0.017681337893009186,
0.0018005757592618465,
0.057766787707805634,
-0.02687765285372734,
-0.13486327230930328,
0.02182190865278244,
-0.08779844641685486,
0.026165161281824112,
-0.2450958490371704,
-0.09005309641361237,
0.030150985345244408,
-0.019111204892396927,
-0.015586644411087036,
0.04986388981342316,
0.044084686785936356,
0.029108911752700806,
-0.03613046929240227,
0.014091502875089645,
-0.036224428564310074,
0.058788783848285675,
-0.11009714752435684,
-0.09478453546762466
] |
null | null | transformers | # MultiBERTs Seed 0 (uncased)
Seed 0 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0')
model = BertModel.from_pretrained("multiberts-seed-0")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
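As noted above, the raw checkpoint can also be used directly for masked language modeling. Below is a minimal sketch using the standard `fill-mask` pipeline; the hub id `MultiBertGunjanPatrick/multiberts-seed-1` is taken from this entry's metadata rather than from the card text, so adjust it if you are loading a different seed or checkpoint.
```python
from transformers import pipeline

# Fill-mask sketch; the repository id below comes from this entry's metadata and is an assumption.
unmasker = pipeline('fill-mask', model='MultiBertGunjanPatrick/multiberts-seed-1')
print(unmasker("Hello, I'm a [MASK] model."))
```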
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-1 | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 0 (uncased)
Seed 0 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 0 (uncased)\n\nSeed 0 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\nThe model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 0 (uncased)\n\nSeed 0 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\nThe model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
63,
111,
335,
134,
25,
95,
48,
3,
222,
110,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 0 (uncased)\n\nSeed 0 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.06825247406959534,
0.027449268847703934,
-0.0021626802626997232,
0.09413602948188782,
0.07635393738746643,
0.026495488360524178,
0.15437674522399902,
0.029963307082653046,
-0.03573239967226982,
0.021267801523208618,
0.10619504749774933,
0.03782356157898903,
0.03388210013508797,
0.035308390855789185,
0.066785529255867,
-0.2578813433647156,
0.07567903399467468,
-0.05793163925409317,
0.040864333510398865,
0.059090327471494675,
0.10602577030658722,
-0.07069262117147446,
0.07895290851593018,
0.04403890669345856,
-0.0756942480802536,
-0.027663996443152428,
-0.005503433756530285,
-0.034674178808927536,
0.07060743123292923,
0.09438986331224442,
0.05877054110169411,
-0.008264455944299698,
0.05975931137800217,
-0.087635338306427,
0.019257638603448868,
0.024562222883105278,
-0.007006383966654539,
0.036696210503578186,
0.025804642587900162,
-0.009673221036791801,
0.11283443868160248,
0.02619457244873047,
0.08560121059417725,
0.04041407257318497,
-0.08754345774650574,
-0.09977805614471436,
-0.0694802924990654,
0.09317219257354736,
0.02764834463596344,
0.04353900998830795,
-0.0063711777329444885,
0.07313166558742523,
-0.006663286592811346,
0.058924756944179535,
0.08212147653102875,
-0.23674309253692627,
-0.023082595318555832,
0.05118638277053833,
0.04846370965242386,
0.04278615117073059,
0.013536407612264156,
0.031959742307662964,
0.005570597946643829,
0.04724816232919693,
0.006345914676785469,
-0.028150685131549835,
0.13924768567085266,
-0.053803253918886185,
-0.13665056228637695,
-0.03023041971027851,
0.15811696648597717,
0.02479265071451664,
-0.11351540684700012,
-0.11277355998754501,
0.0016996730118989944,
0.1693311333656311,
-0.0019645756110548973,
-0.007584595121443272,
-0.009904063306748867,
-0.0030730916187167168,
0.024124154821038246,
-0.1230793297290802,
-0.08302900195121765,
-0.02286745235323906,
-0.06280194967985153,
0.15275688469409943,
0.047940537333488464,
0.07110750675201416,
-0.06045709177851677,
0.04197261482477188,
-0.14955590665340424,
-0.036801956593990326,
-0.04978496953845024,
-0.09940676391124725,
0.017188318073749542,
0.02796531654894352,
-0.044329117983579636,
-0.11630523204803467,
-0.03652356192469597,
0.0725361704826355,
0.038227953016757965,
0.03685189411044121,
-0.005693042650818825,
0.029456961899995804,
0.10580474138259888,
0.10501816868782043,
-0.0562795028090477,
0.07449519634246826,
0.020974641665816307,
-0.020636841654777527,
0.03971032053232193,
-0.05628065764904022,
-0.12330584228038788,
0.0744452103972435,
-0.034096408635377884,
0.018313465639948845,
0.023749854415655136,
0.04198585823178291,
-0.012982374057173729,
-0.0767536610364914,
0.14133483171463013,
-0.09305756539106369,
0.0004417812451720238,
-0.0035654937382787466,
0.016869794577360153,
0.08157093822956085,
0.02621583268046379,
0.0021266003604978323,
-0.059168532490730286,
-0.03080003336071968,
-0.06315429508686066,
-0.027340907603502274,
-0.06021827086806297,
-0.13162744045257568,
0.0013580089434981346,
-0.020953699946403503,
-0.014699130319058895,
-0.10742536187171936,
-0.17884144186973572,
-0.01402769424021244,
0.07123412191867828,
-0.014155296608805656,
0.011412929743528366,
-0.0021266068797558546,
0.012132527306675911,
-0.004981525242328644,
0.032173626124858856,
-0.03745890408754349,
0.00908223818987608,
-0.012201073579490185,
-0.06731266528367996,
0.039806246757507324,
-0.12071730941534042,
0.04209677502512932,
-0.05578748881816864,
0.011489223688840866,
-0.19638846814632416,
0.10738702118396759,
-0.02783583477139473,
-0.04278886318206787,
-0.04810495674610138,
-0.05834455043077469,
0.0188974030315876,
0.045517146587371826,
-0.015527524054050446,
0.10550028085708618,
-0.12357760965824127,
-0.0512409433722496,
0.15865573287010193,
-0.1566506326198578,
0.016810515895485878,
0.10513904690742493,
-0.06748288869857788,
0.042335763573646545,
0.14426475763320923,
0.07841357588768005,
0.07015632092952728,
-0.04069618880748749,
0.017828572541475296,
0.060336943715810776,
-0.0458533950150013,
0.0799841359257698,
0.10583654791116714,
-0.015437023714184761,
-0.13057377934455872,
0.030710875988006592,
-0.06833602488040924,
-0.03600694239139557,
-0.022659340873360634,
-0.024447504431009293,
0.014145502820611,
-0.052795182913541794,
0.05715940147638321,
-0.010484781116247177,
0.006331292912364006,
-0.0232611745595932,
-0.07422537356615067,
0.07731874287128448,
0.07671873271465302,
-0.08619971573352814,
0.018436623737215996,
-0.0909656435251236,
0.03130660206079483,
-0.06597552448511124,
-0.005088436417281628,
-0.14390107989311218,
-0.04274594411253929,
0.031965915113687515,
-0.0805630162358284,
0.09851419925689697,
0.11271710693836212,
0.008409101516008377,
0.11310183256864548,
-0.04617488384246826,
0.02628052979707718,
-0.012368079274892807,
-0.006386269349604845,
-0.044110074639320374,
-0.14293555915355682,
-0.06652771681547165,
-0.06382939964532852,
0.0834670290350914,
-0.059091683477163315,
0.020797124132514,
-0.08205804973840714,
-0.041816260665655136,
-0.0250774584710598,
-0.04668354615569115,
0.005325498059391975,
0.00811565201729536,
-0.013542650267481804,
-0.030526084825396538,
0.04050645977258682,
0.027077049016952515,
-0.0918835997581482,
0.08847370743751526,
-0.1236613318324089,
-0.0576145313680172,
0.06846176087856293,
-0.0069316960871219635,
-0.04083865508437157,
0.09554298222064972,
0.011831864714622498,
-0.01123481709510088,
-0.057707928121089935,
-0.04657518118619919,
0.22045092284679413,
-0.020844273269176483,
0.08364406228065491,
-0.11240328848361969,
0.004931592382490635,
0.03506753221154213,
-0.06102532893419266,
-0.05918964743614197,
0.07589934766292572,
0.038565460592508316,
-0.2161455750465393,
0.024600330740213394,
0.07306224852800369,
0.061481211334466934,
0.1421050727367401,
0.02417578175663948,
-0.02878376469016075,
-0.06042608246207237,
-0.017261460423469543,
-0.012187670916318893,
0.05919060483574867,
-0.04688645899295807,
0.0030246214009821415,
0.0510857030749321,
0.05463946610689163,
0.018327711150050163,
-0.06600221991539001,
0.02497151307761669,
0.05208776146173477,
-0.017216674983501434,
-0.06310763210058212,
-0.05255124717950821,
-0.03947900980710983,
0.0736318975687027,
0.041184503585100174,
0.0495072677731514,
0.0537080317735672,
-0.019612858071923256,
-0.1381978541612625,
0.16529735922813416,
-0.13489660620689392,
-0.2240476906299591,
-0.12759706377983093,
-0.07904494553804398,
-0.07838001847267151,
0.039492446929216385,
0.0373598076403141,
-0.03468242287635803,
-0.05113789439201355,
-0.10579567402601242,
0.06591805815696716,
-0.11658145487308502,
-0.057194799184799194,
0.014129210263490677,
-0.056258611381053925,
-0.005652858875691891,
-0.1268719583749771,
-0.010539324954152107,
-0.026957646012306213,
-0.07912764698266983,
0.004068336449563503,
-0.04539388418197632,
0.010077799670398235,
0.13516394793987274,
0.008290649391710758,
-0.009709829464554787,
-0.015056753531098366,
0.19663433730602264,
0.0314871110022068,
0.04356053099036217,
0.12803813815116882,
-0.06543856859207153,
0.05768699571490288,
0.02060154639184475,
0.037481535226106644,
-0.04913286864757538,
-0.0007067807018756866,
-0.027622418478131294,
-0.11730992794036865,
-0.207548126578331,
-0.06663559377193451,
0.007457428611814976,
0.008368045091629028,
0.01904660277068615,
0.015689538791775703,
0.024972863495349884,
0.05414750799536705,
-0.031031470745801926,
0.03179151564836502,
0.033982276916503906,
0.05688050761818886,
0.06225617602467537,
-0.06120002269744873,
0.09507381916046143,
-0.07100313901901245,
0.027307022362947464,
0.10875560343265533,
-0.07062242925167084,
0.16170385479927063,
0.04285769164562225,
0.05423576757311821,
0.09659373760223389,
0.0006577670574188232,
0.0585428811609745,
0.10273323953151703,
-0.06317441910505295,
0.019947808235883713,
-0.07513642311096191,
-0.05752627179026604,
-0.04452991858124733,
0.060025766491889954,
0.037611961364746094,
-0.000131998211145401,
-0.10182826220989227,
0.03220826014876366,
-0.036235980689525604,
0.07729616016149521,
0.06343917548656464,
-0.10670174658298492,
-0.10046673566102982,
0.045665811747312546,
-0.04038289934396744,
-0.08793723583221436,
0.03426353633403778,
0.08077984303236008,
-0.14119762182235718,
0.06124391779303551,
0.018283551558852196,
0.07126335799694061,
-0.09752818942070007,
0.01132874470204115,
-0.06905651092529297,
0.016318362206220627,
0.005033754277974367,
0.0913831889629364,
-0.1432204693555832,
0.10583388805389404,
0.02708813175559044,
0.04597454518079758,
-0.09043684601783752,
0.01613154262304306,
-0.01261853240430355,
0.07669144868850708,
0.12108297646045685,
0.04203776270151138,
-0.05836430937051773,
-0.018112843856215477,
-0.06768153607845306,
0.034427788108587265,
0.07278922200202942,
-0.04098799079656601,
0.038899462670087814,
0.0012810318730771542,
0.016169004142284393,
-0.008310851640999317,
0.020610321313142776,
-0.13600048422813416,
-0.14560562372207642,
0.0705970749258995,
-0.06633393466472626,
-0.08288760483264923,
-0.03709196671843529,
-0.06633897125720978,
-0.0868702232837677,
0.15359032154083252,
-0.0773216113448143,
-0.1108812615275383,
-0.10497688502073288,
0.004697326570749283,
0.06842926889657974,
-0.06570008397102356,
0.05184205248951912,
-0.05175790935754776,
0.09120817482471466,
-0.03778978809714317,
-0.10993549227714539,
0.017024382948875427,
-0.09169412404298782,
-0.11230003088712692,
-0.030281051993370056,
0.09025070071220398,
0.15063974261283875,
0.05137326568365097,
0.024738965556025505,
0.016462495550513268,
0.0016304273158311844,
-0.12906411290168762,
0.004929570481181145,
0.143439382314682,
0.01773710548877716,
0.0976557806134224,
-0.06279069185256958,
-0.02821265161037445,
-0.012585094198584557,
-0.0009578559547662735,
0.13525930047035217,
0.1579957902431488,
-0.06031216308474541,
0.15296214818954468,
0.227834090590477,
-0.10105094313621521,
-0.19415637850761414,
-0.07397069036960602,
0.0032560182735323906,
0.04487091302871704,
0.045912403613328934,
-0.19948574900627136,
0.09972882270812988,
0.04975741356611252,
-0.013423530384898186,
-0.03354128822684288,
-0.18906579911708832,
-0.1023210883140564,
0.1062556803226471,
0.06369950622320175,
0.19807088375091553,
-0.06803785264492035,
-0.04169449210166931,
-0.04189038649201393,
-0.05597612261772156,
0.09557583183050156,
-0.011712346225976944,
0.0822327509522438,
0.01643332466483116,
0.014923296868801117,
-0.0019287541508674622,
-0.008046919479966164,
0.11012726277112961,
0.04542766511440277,
0.018416037783026695,
-0.07320156693458557,
-0.0423104427754879,
0.10889390110969543,
-0.03202357143163681,
0.12254303693771362,
0.03122953698039055,
0.05849093571305275,
-0.0764583870768547,
-0.06015930324792862,
-0.08313038945198059,
0.012603376060724258,
-0.04008830338716507,
-0.05228453874588013,
-0.051481351256370544,
0.03643445670604706,
0.02559221349656582,
0.013383354060351849,
-0.010037007741630077,
-0.0581706240773201,
0.009901179000735283,
0.0659501925110817,
0.15930500626564026,
-0.013111893087625504,
-0.06732219457626343,
-0.07006201148033142,
-0.060269180685281754,
0.04847850278019905,
-0.10283331573009491,
0.0321035273373127,
0.020586064085364342,
-0.0036565132904797792,
0.11348927021026611,
0.03316955640912056,
-0.11396678537130356,
0.013628019951283932,
0.005912423133850098,
-0.09849600493907928,
-0.1485224962234497,
-0.016377072781324387,
0.05456313490867615,
-0.0583408921957016,
0.03962210938334465,
0.1586087942123413,
-0.02749052457511425,
-0.033682480454444885,
-0.05674935132265091,
0.032430585473775864,
-0.034874096512794495,
0.03596019372344017,
0.08030854165554047,
0.016163216903805733,
-0.08148041367530823,
0.06100435554981232,
0.04497561603784561,
-0.01565445587038994,
0.06611718982458115,
0.01751827821135521,
-0.07064318656921387,
-0.08515681326389313,
-0.06657058000564575,
0.11521587520837784,
-0.04193677753210068,
-0.06614658236503601,
0.0494990199804306,
-0.10936599224805832,
0.06512928009033203,
0.09400998800992966,
0.03727183863520622,
0.046071093529462814,
-0.08464010059833527,
0.006473809480667114,
-0.037655625492334366,
0.03303447365760803,
-0.03967699408531189,
-0.03299032896757126,
-0.04207788407802582,
0.02865336276590824,
0.0594131164252758,
0.09625885635614395,
-0.03653799742460251,
-0.07748300582170486,
-0.08829360455274582,
-0.013138281181454659,
-0.10569687932729721,
-0.006850461475551128,
-0.06914658099412918,
0.00014194706454873085,
0.007000140380114317,
-0.02822837233543396,
0.030307123437523842,
0.033606212586164474,
-0.0512661337852478,
-0.008813504129648209,
-0.02892981842160225,
0.05861987918615341,
-0.07071447372436523,
0.012725180014967918,
0.015199657529592514,
-0.01911322958767414,
0.09222348034381866,
0.047224029898643494,
-0.03322954475879669,
0.05148611217737198,
-0.03994745388627052,
0.03518182411789894,
-0.04691552743315697,
0.007639196235686541,
-0.02100628986954689,
-0.11349901556968689,
-0.021261068060994148,
0.010819608345627785,
-0.023444410413503647,
0.01614448055624962,
0.07291702181100845,
-0.051247432827949524,
0.0827048048377037,
0.06047651544213295,
-0.049000177532434464,
-0.055763885378837585,
0.04004162549972534,
0.0009079426527023315,
0.017973260954022408,
0.0793890655040741,
0.0011681190226227045,
0.053140703588724136,
-0.08328671008348465,
0.0013423850759863853,
0.0043635861948132515,
-0.016782283782958984,
-0.019065728411078453,
-0.07158057391643524,
-0.000623882282525301,
0.009545178152620792,
0.17526990175247192,
-0.004971030168235302,
-0.019934196025133133,
0.005758095532655716,
0.06719693541526794,
0.033424317836761475,
0.004426124505698681,
0.08463965356349945,
-0.018342992290854454,
-0.01793844997882843,
-0.017587680369615555,
0.026691239327192307,
-0.01080797053873539,
0.016537122428417206,
0.1315390020608902,
0.04961226135492325,
0.11255703866481781,
0.07479852437973022,
0.05499632656574249,
0.052345164120197296,
-0.10784098505973816,
-0.06925129890441895,
0.03605833277106285,
0.05536176264286041,
-0.034931864589452744,
0.02555268630385399,
0.05937255546450615,
-0.09513229876756668,
0.0820266455411911,
0.046595025807619095,
-0.05803784728050232,
-0.1295481026172638,
-0.2191641926765442,
-0.042123790830373764,
-0.010218853130936623,
-0.020777955651283264,
-0.10785381495952606,
0.027329251170158386,
0.0930030569434166,
0.03945063054561615,
-0.02234741672873497,
0.0657259151339531,
-0.15022647380828857,
-0.03686198964715004,
0.03966449946165085,
-0.014821960590779781,
0.022462747991085052,
0.048782214522361755,
0.01900356635451317,
0.014281739480793476,
0.0744381994009018,
0.051359422504901886,
0.043146438896656036,
0.054591625928878784,
0.02954341098666191,
-0.04896369203925133,
-0.08800899237394333,
-0.04467042535543442,
0.0032379510812461376,
0.058675315231084824,
0.12987293303012848,
0.010792074725031853,
-0.06998851895332336,
0.0024203723296523094,
0.06055322289466858,
-0.01847190037369728,
-0.08398778736591339,
-0.11259135603904724,
0.21841737627983093,
-0.022776726633310318,
0.011702751740813255,
-0.0013669170439243317,
-0.03545460104942322,
0.020076904445886612,
0.20618940889835358,
0.26152077317237854,
-0.02222667820751667,
-0.01586262136697769,
0.010568449273705482,
0.0001846584491431713,
0.03695659339427948,
0.12577201426029205,
-0.02777884714305401,
0.22359472513198853,
-0.046777449548244476,
0.06737222522497177,
-0.05537553131580353,
-0.014299402013421059,
-0.07450424134731293,
0.061424657702445984,
-0.001578204333782196,
-0.01836337149143219,
-0.014155775308609009,
0.06984956562519073,
-0.04071302339434624,
-0.12650424242019653,
-0.029551919549703598,
0.005514103919267654,
-0.058359190821647644,
0.011046874336898327,
0.0020564431324601173,
0.03376493230462074,
0.07748642563819885,
-0.01588892936706543,
-0.0020990539342164993,
0.13050198554992676,
0.01098928228020668,
-0.10912102460861206,
-0.037600722163915634,
0.12838557362556458,
0.018519911915063858,
0.1340782791376114,
0.04876743629574776,
0.08712469041347504,
0.07130827009677887,
0.015149479731917381,
-0.06677284836769104,
0.03636588156223297,
-0.028407320380210876,
0.019770564511418343,
0.004539488349109888,
0.10587862133979797,
-0.010519773699343204,
0.07475674152374268,
0.016607699915766716,
-0.0808752030134201,
0.05683104693889618,
0.008673112839460373,
-0.07627810537815094,
-0.03255736455321312,
0.1042289137840271,
-0.11158230900764465,
0.14271792769432068,
0.13774631917476654,
-0.005030146799981594,
-0.07176224142313004,
-0.012138426303863525,
0.027100618928670883,
-0.008060954511165619,
0.04774492606520653,
-0.029893167316913605,
-0.13074781000614166,
0.00018004095181822777,
-0.09478544443845749,
0.04576292634010315,
-0.24173954129219055,
-0.06664414703845978,
0.016213994473218918,
-0.000884735956788063,
-0.028645452111959457,
0.030585195869207382,
0.061639197170734406,
-0.0040400829166173935,
-0.03497268259525299,
0.029452037066221237,
-0.028589975088834763,
0.03562405705451965,
-0.07439378648996353,
-0.0681467354297638
] |
null | null | transformers | # MultiBERTs Seed 0 (uncased)
Seed 0 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0')
model = BertModel.from_pretrained("multiberts-seed-0")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
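As an illustration (a sketch written for this card, not taken from the original), the probe below adapts the fill-mask snippet from the bert-base-uncased card to this checkpoint. It assumes the masked-language-modeling head loads through the `fill-mask` pipeline and reuses the checkpoint name from the usage example above:
```python
from transformers import pipeline

# Compare completions for simple occupation templates; systematic differences
# between the "man" and "woman" prompts hint at biases learned from the data.
unmasker = pipeline('fill-mask', model='multiberts-seed-0')
print(unmasker("The man worked as a [MASK]."))
print(unmasker("The woman worked as a [MASK]."))
```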
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
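For illustration only (a sketch, assuming the tokenizer from the usage example above), encoding a sentence pair reproduces this layout:
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0')
# Passing two sentences inserts the [CLS] and [SEP] markers shown above.
ids = tokenizer("Sentence A", "Sentence B")["input_ids"]
print(tokenizer.decode(ids))  # [CLS] sentence a [SEP] sentence b [SEP]
```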
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in
the other cases, sentence B is a random sentence from the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the combined length of the two
"sentences" is less than 512 tokens.
The details of the masking procedure for each sentence are the following (a short illustrative sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
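The sketch below is an illustration written for this card (not the original preprocessing code); it applies the 80/10/10 rule above to a list of token ids:
```python
import random

def mask_tokens(token_ids, mask_id, vocab_size, mlm_prob=0.15):
    """Toy illustration of the masking rule described above."""
    masked = list(token_ids)
    for i in range(len(masked)):
        if random.random() < mlm_prob:        # 15% of tokens are selected
            roll = random.random()
            if roll < 0.8:                    # 80% of those: replaced by [MASK]
                masked[i] = mask_id
            elif roll < 0.9:                  # 10%: replaced by a random, different token
                new_id = random.randrange(vocab_size)
                while new_id == masked[i]:
                    new_id = random.randrange(vocab_size)
                masked[i] = new_id
            # remaining 10%: the token is left unchanged
    return masked
```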
### Pretraining
The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
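As a rough sketch (assuming the linear decay runs to zero at the final step, which the card does not state explicitly), the learning rate at a given step would be:
```python
def learning_rate(step, peak_lr=1e-4, warmup_steps=10_000, total_steps=2_000_000):
    # Linear warmup over the first 10,000 steps...
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    # ...then linear decay of the learning rate for the rest of training.
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```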
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-10 | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 0 (uncased)
Seed 0 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in
the other cases, sentence B is a random sentence from the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the combined length of the two
"sentences" is less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 0 (uncased)\n\nSeed 0 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\nThe model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 0 (uncased)\n\nSeed 0 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\nThe model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
63,
111,
335,
134,
25,
95,
48,
3,
222,
110,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 0 (uncased)\n\nSeed 0 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.06825247406959534,
0.027449268847703934,
-0.0021626802626997232,
0.09413602948188782,
0.07635393738746643,
0.026495488360524178,
0.15437674522399902,
0.029963307082653046,
-0.03573239967226982,
0.021267801523208618,
0.10619504749774933,
0.03782356157898903,
0.03388210013508797,
0.035308390855789185,
0.066785529255867,
-0.2578813433647156,
0.07567903399467468,
-0.05793163925409317,
0.040864333510398865,
0.059090327471494675,
0.10602577030658722,
-0.07069262117147446,
0.07895290851593018,
0.04403890669345856,
-0.0756942480802536,
-0.027663996443152428,
-0.005503433756530285,
-0.034674178808927536,
0.07060743123292923,
0.09438986331224442,
0.05877054110169411,
-0.008264455944299698,
0.05975931137800217,
-0.087635338306427,
0.019257638603448868,
0.024562222883105278,
-0.007006383966654539,
0.036696210503578186,
0.025804642587900162,
-0.009673221036791801,
0.11283443868160248,
0.02619457244873047,
0.08560121059417725,
0.04041407257318497,
-0.08754345774650574,
-0.09977805614471436,
-0.0694802924990654,
0.09317219257354736,
0.02764834463596344,
0.04353900998830795,
-0.0063711777329444885,
0.07313166558742523,
-0.006663286592811346,
0.058924756944179535,
0.08212147653102875,
-0.23674309253692627,
-0.023082595318555832,
0.05118638277053833,
0.04846370965242386,
0.04278615117073059,
0.013536407612264156,
0.031959742307662964,
0.005570597946643829,
0.04724816232919693,
0.006345914676785469,
-0.028150685131549835,
0.13924768567085266,
-0.053803253918886185,
-0.13665056228637695,
-0.03023041971027851,
0.15811696648597717,
0.02479265071451664,
-0.11351540684700012,
-0.11277355998754501,
0.0016996730118989944,
0.1693311333656311,
-0.0019645756110548973,
-0.007584595121443272,
-0.009904063306748867,
-0.0030730916187167168,
0.024124154821038246,
-0.1230793297290802,
-0.08302900195121765,
-0.02286745235323906,
-0.06280194967985153,
0.15275688469409943,
0.047940537333488464,
0.07110750675201416,
-0.06045709177851677,
0.04197261482477188,
-0.14955590665340424,
-0.036801956593990326,
-0.04978496953845024,
-0.09940676391124725,
0.017188318073749542,
0.02796531654894352,
-0.044329117983579636,
-0.11630523204803467,
-0.03652356192469597,
0.0725361704826355,
0.038227953016757965,
0.03685189411044121,
-0.005693042650818825,
0.029456961899995804,
0.10580474138259888,
0.10501816868782043,
-0.0562795028090477,
0.07449519634246826,
0.020974641665816307,
-0.020636841654777527,
0.03971032053232193,
-0.05628065764904022,
-0.12330584228038788,
0.0744452103972435,
-0.034096408635377884,
0.018313465639948845,
0.023749854415655136,
0.04198585823178291,
-0.012982374057173729,
-0.0767536610364914,
0.14133483171463013,
-0.09305756539106369,
0.0004417812451720238,
-0.0035654937382787466,
0.016869794577360153,
0.08157093822956085,
0.02621583268046379,
0.0021266003604978323,
-0.059168532490730286,
-0.03080003336071968,
-0.06315429508686066,
-0.027340907603502274,
-0.06021827086806297,
-0.13162744045257568,
0.0013580089434981346,
-0.020953699946403503,
-0.014699130319058895,
-0.10742536187171936,
-0.17884144186973572,
-0.01402769424021244,
0.07123412191867828,
-0.014155296608805656,
0.011412929743528366,
-0.0021266068797558546,
0.012132527306675911,
-0.004981525242328644,
0.032173626124858856,
-0.03745890408754349,
0.00908223818987608,
-0.012201073579490185,
-0.06731266528367996,
0.039806246757507324,
-0.12071730941534042,
0.04209677502512932,
-0.05578748881816864,
0.011489223688840866,
-0.19638846814632416,
0.10738702118396759,
-0.02783583477139473,
-0.04278886318206787,
-0.04810495674610138,
-0.05834455043077469,
0.0188974030315876,
0.045517146587371826,
-0.015527524054050446,
0.10550028085708618,
-0.12357760965824127,
-0.0512409433722496,
0.15865573287010193,
-0.1566506326198578,
0.016810515895485878,
0.10513904690742493,
-0.06748288869857788,
0.042335763573646545,
0.14426475763320923,
0.07841357588768005,
0.07015632092952728,
-0.04069618880748749,
0.017828572541475296,
0.060336943715810776,
-0.0458533950150013,
0.0799841359257698,
0.10583654791116714,
-0.015437023714184761,
-0.13057377934455872,
0.030710875988006592,
-0.06833602488040924,
-0.03600694239139557,
-0.022659340873360634,
-0.024447504431009293,
0.014145502820611,
-0.052795182913541794,
0.05715940147638321,
-0.010484781116247177,
0.006331292912364006,
-0.0232611745595932,
-0.07422537356615067,
0.07731874287128448,
0.07671873271465302,
-0.08619971573352814,
0.018436623737215996,
-0.0909656435251236,
0.03130660206079483,
-0.06597552448511124,
-0.005088436417281628,
-0.14390107989311218,
-0.04274594411253929,
0.031965915113687515,
-0.0805630162358284,
0.09851419925689697,
0.11271710693836212,
0.008409101516008377,
0.11310183256864548,
-0.04617488384246826,
0.02628052979707718,
-0.012368079274892807,
-0.006386269349604845,
-0.044110074639320374,
-0.14293555915355682,
-0.06652771681547165,
-0.06382939964532852,
0.0834670290350914,
-0.059091683477163315,
0.020797124132514,
-0.08205804973840714,
-0.041816260665655136,
-0.0250774584710598,
-0.04668354615569115,
0.005325498059391975,
0.00811565201729536,
-0.013542650267481804,
-0.030526084825396538,
0.04050645977258682,
0.027077049016952515,
-0.0918835997581482,
0.08847370743751526,
-0.1236613318324089,
-0.0576145313680172,
0.06846176087856293,
-0.0069316960871219635,
-0.04083865508437157,
0.09554298222064972,
0.011831864714622498,
-0.01123481709510088,
-0.057707928121089935,
-0.04657518118619919,
0.22045092284679413,
-0.020844273269176483,
0.08364406228065491,
-0.11240328848361969,
0.004931592382490635,
0.03506753221154213,
-0.06102532893419266,
-0.05918964743614197,
0.07589934766292572,
0.038565460592508316,
-0.2161455750465393,
0.024600330740213394,
0.07306224852800369,
0.061481211334466934,
0.1421050727367401,
0.02417578175663948,
-0.02878376469016075,
-0.06042608246207237,
-0.017261460423469543,
-0.012187670916318893,
0.05919060483574867,
-0.04688645899295807,
0.0030246214009821415,
0.0510857030749321,
0.05463946610689163,
0.018327711150050163,
-0.06600221991539001,
0.02497151307761669,
0.05208776146173477,
-0.017216674983501434,
-0.06310763210058212,
-0.05255124717950821,
-0.03947900980710983,
0.0736318975687027,
0.041184503585100174,
0.0495072677731514,
0.0537080317735672,
-0.019612858071923256,
-0.1381978541612625,
0.16529735922813416,
-0.13489660620689392,
-0.2240476906299591,
-0.12759706377983093,
-0.07904494553804398,
-0.07838001847267151,
0.039492446929216385,
0.0373598076403141,
-0.03468242287635803,
-0.05113789439201355,
-0.10579567402601242,
0.06591805815696716,
-0.11658145487308502,
-0.057194799184799194,
0.014129210263490677,
-0.056258611381053925,
-0.005652858875691891,
-0.1268719583749771,
-0.010539324954152107,
-0.026957646012306213,
-0.07912764698266983,
0.004068336449563503,
-0.04539388418197632,
0.010077799670398235,
0.13516394793987274,
0.008290649391710758,
-0.009709829464554787,
-0.015056753531098366,
0.19663433730602264,
0.0314871110022068,
0.04356053099036217,
0.12803813815116882,
-0.06543856859207153,
0.05768699571490288,
0.02060154639184475,
0.037481535226106644,
-0.04913286864757538,
-0.0007067807018756866,
-0.027622418478131294,
-0.11730992794036865,
-0.207548126578331,
-0.06663559377193451,
0.007457428611814976,
0.008368045091629028,
0.01904660277068615,
0.015689538791775703,
0.024972863495349884,
0.05414750799536705,
-0.031031470745801926,
0.03179151564836502,
0.033982276916503906,
0.05688050761818886,
0.06225617602467537,
-0.06120002269744873,
0.09507381916046143,
-0.07100313901901245,
0.027307022362947464,
0.10875560343265533,
-0.07062242925167084,
0.16170385479927063,
0.04285769164562225,
0.05423576757311821,
0.09659373760223389,
0.0006577670574188232,
0.0585428811609745,
0.10273323953151703,
-0.06317441910505295,
0.019947808235883713,
-0.07513642311096191,
-0.05752627179026604,
-0.04452991858124733,
0.060025766491889954,
0.037611961364746094,
-0.000131998211145401,
-0.10182826220989227,
0.03220826014876366,
-0.036235980689525604,
0.07729616016149521,
0.06343917548656464,
-0.10670174658298492,
-0.10046673566102982,
0.045665811747312546,
-0.04038289934396744,
-0.08793723583221436,
0.03426353633403778,
0.08077984303236008,
-0.14119762182235718,
0.06124391779303551,
0.018283551558852196,
0.07126335799694061,
-0.09752818942070007,
0.01132874470204115,
-0.06905651092529297,
0.016318362206220627,
0.005033754277974367,
0.0913831889629364,
-0.1432204693555832,
0.10583388805389404,
0.02708813175559044,
0.04597454518079758,
-0.09043684601783752,
0.01613154262304306,
-0.01261853240430355,
0.07669144868850708,
0.12108297646045685,
0.04203776270151138,
-0.05836430937051773,
-0.018112843856215477,
-0.06768153607845306,
0.034427788108587265,
0.07278922200202942,
-0.04098799079656601,
0.038899462670087814,
0.0012810318730771542,
0.016169004142284393,
-0.008310851640999317,
0.020610321313142776,
-0.13600048422813416,
-0.14560562372207642,
0.0705970749258995,
-0.06633393466472626,
-0.08288760483264923,
-0.03709196671843529,
-0.06633897125720978,
-0.0868702232837677,
0.15359032154083252,
-0.0773216113448143,
-0.1108812615275383,
-0.10497688502073288,
0.004697326570749283,
0.06842926889657974,
-0.06570008397102356,
0.05184205248951912,
-0.05175790935754776,
0.09120817482471466,
-0.03778978809714317,
-0.10993549227714539,
0.017024382948875427,
-0.09169412404298782,
-0.11230003088712692,
-0.030281051993370056,
0.09025070071220398,
0.15063974261283875,
0.05137326568365097,
0.024738965556025505,
0.016462495550513268,
0.0016304273158311844,
-0.12906411290168762,
0.004929570481181145,
0.143439382314682,
0.01773710548877716,
0.0976557806134224,
-0.06279069185256958,
-0.02821265161037445,
-0.012585094198584557,
-0.0009578559547662735,
0.13525930047035217,
0.1579957902431488,
-0.06031216308474541,
0.15296214818954468,
0.227834090590477,
-0.10105094313621521,
-0.19415637850761414,
-0.07397069036960602,
0.0032560182735323906,
0.04487091302871704,
0.045912403613328934,
-0.19948574900627136,
0.09972882270812988,
0.04975741356611252,
-0.013423530384898186,
-0.03354128822684288,
-0.18906579911708832,
-0.1023210883140564,
0.1062556803226471,
0.06369950622320175,
0.19807088375091553,
-0.06803785264492035,
-0.04169449210166931,
-0.04189038649201393,
-0.05597612261772156,
0.09557583183050156,
-0.011712346225976944,
0.0822327509522438,
0.01643332466483116,
0.014923296868801117,
-0.0019287541508674622,
-0.008046919479966164,
0.11012726277112961,
0.04542766511440277,
0.018416037783026695,
-0.07320156693458557,
-0.0423104427754879,
0.10889390110969543,
-0.03202357143163681,
0.12254303693771362,
0.03122953698039055,
0.05849093571305275,
-0.0764583870768547,
-0.06015930324792862,
-0.08313038945198059,
0.012603376060724258,
-0.04008830338716507,
-0.05228453874588013,
-0.051481351256370544,
0.03643445670604706,
0.02559221349656582,
0.013383354060351849,
-0.010037007741630077,
-0.0581706240773201,
0.009901179000735283,
0.0659501925110817,
0.15930500626564026,
-0.013111893087625504,
-0.06732219457626343,
-0.07006201148033142,
-0.060269180685281754,
0.04847850278019905,
-0.10283331573009491,
0.0321035273373127,
0.020586064085364342,
-0.0036565132904797792,
0.11348927021026611,
0.03316955640912056,
-0.11396678537130356,
0.013628019951283932,
0.005912423133850098,
-0.09849600493907928,
-0.1485224962234497,
-0.016377072781324387,
0.05456313490867615,
-0.0583408921957016,
0.03962210938334465,
0.1586087942123413,
-0.02749052457511425,
-0.033682480454444885,
-0.05674935132265091,
0.032430585473775864,
-0.034874096512794495,
0.03596019372344017,
0.08030854165554047,
0.016163216903805733,
-0.08148041367530823,
0.06100435554981232,
0.04497561603784561,
-0.01565445587038994,
0.06611718982458115,
0.01751827821135521,
-0.07064318656921387,
-0.08515681326389313,
-0.06657058000564575,
0.11521587520837784,
-0.04193677753210068,
-0.06614658236503601,
0.0494990199804306,
-0.10936599224805832,
0.06512928009033203,
0.09400998800992966,
0.03727183863520622,
0.046071093529462814,
-0.08464010059833527,
0.006473809480667114,
-0.037655625492334366,
0.03303447365760803,
-0.03967699408531189,
-0.03299032896757126,
-0.04207788407802582,
0.02865336276590824,
0.0594131164252758,
0.09625885635614395,
-0.03653799742460251,
-0.07748300582170486,
-0.08829360455274582,
-0.013138281181454659,
-0.10569687932729721,
-0.006850461475551128,
-0.06914658099412918,
0.00014194706454873085,
0.007000140380114317,
-0.02822837233543396,
0.030307123437523842,
0.033606212586164474,
-0.0512661337852478,
-0.008813504129648209,
-0.02892981842160225,
0.05861987918615341,
-0.07071447372436523,
0.012725180014967918,
0.015199657529592514,
-0.01911322958767414,
0.09222348034381866,
0.047224029898643494,
-0.03322954475879669,
0.05148611217737198,
-0.03994745388627052,
0.03518182411789894,
-0.04691552743315697,
0.007639196235686541,
-0.02100628986954689,
-0.11349901556968689,
-0.021261068060994148,
0.010819608345627785,
-0.023444410413503647,
0.01614448055624962,
0.07291702181100845,
-0.051247432827949524,
0.0827048048377037,
0.06047651544213295,
-0.049000177532434464,
-0.055763885378837585,
0.04004162549972534,
0.0009079426527023315,
0.017973260954022408,
0.0793890655040741,
0.0011681190226227045,
0.053140703588724136,
-0.08328671008348465,
0.0013423850759863853,
0.0043635861948132515,
-0.016782283782958984,
-0.019065728411078453,
-0.07158057391643524,
-0.000623882282525301,
0.009545178152620792,
0.17526990175247192,
-0.004971030168235302,
-0.019934196025133133,
0.005758095532655716,
0.06719693541526794,
0.033424317836761475,
0.004426124505698681,
0.08463965356349945,
-0.018342992290854454,
-0.01793844997882843,
-0.017587680369615555,
0.026691239327192307,
-0.01080797053873539,
0.016537122428417206,
0.1315390020608902,
0.04961226135492325,
0.11255703866481781,
0.07479852437973022,
0.05499632656574249,
0.052345164120197296,
-0.10784098505973816,
-0.06925129890441895,
0.03605833277106285,
0.05536176264286041,
-0.034931864589452744,
0.02555268630385399,
0.05937255546450615,
-0.09513229876756668,
0.0820266455411911,
0.046595025807619095,
-0.05803784728050232,
-0.1295481026172638,
-0.2191641926765442,
-0.042123790830373764,
-0.010218853130936623,
-0.020777955651283264,
-0.10785381495952606,
0.027329251170158386,
0.0930030569434166,
0.03945063054561615,
-0.02234741672873497,
0.0657259151339531,
-0.15022647380828857,
-0.03686198964715004,
0.03966449946165085,
-0.014821960590779781,
0.022462747991085052,
0.048782214522361755,
0.01900356635451317,
0.014281739480793476,
0.0744381994009018,
0.051359422504901886,
0.043146438896656036,
0.054591625928878784,
0.02954341098666191,
-0.04896369203925133,
-0.08800899237394333,
-0.04467042535543442,
0.0032379510812461376,
0.058675315231084824,
0.12987293303012848,
0.010792074725031853,
-0.06998851895332336,
0.0024203723296523094,
0.06055322289466858,
-0.01847190037369728,
-0.08398778736591339,
-0.11259135603904724,
0.21841737627983093,
-0.022776726633310318,
0.011702751740813255,
-0.0013669170439243317,
-0.03545460104942322,
0.020076904445886612,
0.20618940889835358,
0.26152077317237854,
-0.02222667820751667,
-0.01586262136697769,
0.010568449273705482,
0.0001846584491431713,
0.03695659339427948,
0.12577201426029205,
-0.02777884714305401,
0.22359472513198853,
-0.046777449548244476,
0.06737222522497177,
-0.05537553131580353,
-0.014299402013421059,
-0.07450424134731293,
0.061424657702445984,
-0.001578204333782196,
-0.01836337149143219,
-0.014155775308609009,
0.06984956562519073,
-0.04071302339434624,
-0.12650424242019653,
-0.029551919549703598,
0.005514103919267654,
-0.058359190821647644,
0.011046874336898327,
0.0020564431324601173,
0.03376493230462074,
0.07748642563819885,
-0.01588892936706543,
-0.0020990539342164993,
0.13050198554992676,
0.01098928228020668,
-0.10912102460861206,
-0.037600722163915634,
0.12838557362556458,
0.018519911915063858,
0.1340782791376114,
0.04876743629574776,
0.08712469041347504,
0.07130827009677887,
0.015149479731917381,
-0.06677284836769104,
0.03636588156223297,
-0.028407320380210876,
0.019770564511418343,
0.004539488349109888,
0.10587862133979797,
-0.010519773699343204,
0.07475674152374268,
0.016607699915766716,
-0.0808752030134201,
0.05683104693889618,
0.008673112839460373,
-0.07627810537815094,
-0.03255736455321312,
0.1042289137840271,
-0.11158230900764465,
0.14271792769432068,
0.13774631917476654,
-0.005030146799981594,
-0.07176224142313004,
-0.012138426303863525,
0.027100618928670883,
-0.008060954511165619,
0.04774492606520653,
-0.029893167316913605,
-0.13074781000614166,
0.00018004095181822777,
-0.09478544443845749,
0.04576292634010315,
-0.24173954129219055,
-0.06664414703845978,
0.016213994473218918,
-0.000884735956788063,
-0.028645452111959457,
0.030585195869207382,
0.061639197170734406,
-0.0040400829166173935,
-0.03497268259525299,
0.029452037066221237,
-0.028589975088834763,
0.03562405705451965,
-0.07439378648996353,
-0.0681467354297638
] |
null | null | transformers | # MultiBERTs Seed 11 (uncased)
Seed 11 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-11')
model = BertModel.from_pretrained("multiberts-seed-11")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
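The `output` object above is a standard `transformers` model output. Continuing that snippet, and assuming a recent `transformers` version where these attributes are exposed, the token-level features are in `last_hidden_state` and a pooled sentence-level vector is in `pooler_output`:
```python
# Token-level features: one 768-dimensional vector per input token.
token_features = output.last_hidden_state    # shape: (1, sequence_length, 768)

# Pooled [CLS]-based vector, often used as a quick sentence-level feature.
sentence_feature = output.pooler_output      # shape: (1, 768)

print(token_features.shape, sentence_feature.shape)
```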
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
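As an illustration of the 80/10/10 rule above (a minimal sketch, not the original pretraining code; the function and its arguments are invented for this example):
```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15):
    # tokens: list of WordPiece tokens; vocab: list of vocabulary tokens.
    inputs, labels = [], []
    for token in tokens:
        if random.random() < mask_prob:
            labels.append(token)                      # the model must predict the original token
            r = random.random()
            if r < 0.8:
                inputs.append("[MASK]")               # 80%: replace with [MASK]
            elif r < 0.9:
                inputs.append(random.choice(vocab))   # 10%: replace with a random token
            else:
                inputs.append(token)                  # 10%: keep the token as is
        else:
            inputs.append(token)
            labels.append(None)                       # token is not predicted
    return inputs, labels
```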
### Pretraining
The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
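For illustration, a comparable optimizer and schedule can be set up with standard PyTorch and `transformers` utilities. This is only a sketch of the published hyperparameters, not the original TPU training code (which, for instance, excludes some parameters from weight decay):
```python
import torch
from transformers import BertModel, get_linear_schedule_with_warmup

model = BertModel.from_pretrained("multiberts-seed-11")

# Adam with weight decay (AdamW) approximates the optimizer described above.
optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01
)

# 10,000 warmup steps, then linear decay over the two million total steps.
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=2_000_000
)
```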
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-11 | [
"transformers",
"bert",
"pretraining",
"exbert",
"multiberts",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 11 (uncased)
Seed 11 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 11 (uncased)\n\nSeed 11 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\nThe model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 11 (uncased)\n\nSeed 11 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\nThe model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
59,
111,
335,
134,
25,
95,
48,
3,
222,
110,
34
] | [
"passage: TAGS\n#transformers #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 11 (uncased)\n\nSeed 11 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.",
"passage: ## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.### Pretraining\n\nThe model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after."
] | [
-0.0630425214767456,
0.07265826314687729,
-0.0054882424883544445,
0.0493985190987587,
0.034698985517024994,
0.005253213457763195,
0.08637510240077972,
0.04643678292632103,
-0.0009192085126414895,
0.06478838622570038,
0.042718660086393356,
0.026962582021951675,
0.06347792595624924,
0.08945716172456741,
0.06047169119119644,
-0.20902226865291595,
0.058989495038986206,
-0.08569195866584778,
0.09707807749509811,
0.07886664569377899,
0.08149906992912292,
-0.07146604359149933,
0.02106989361345768,
0.00007442524656653404,
-0.02481861412525177,
-0.03656715527176857,
0.006739357020705938,
-0.022662509232759476,
0.06959858536720276,
0.07202500104904175,
0.07256042957305908,
-0.021433038637042046,
0.02814585715532303,
-0.11242099106311798,
0.01814332790672779,
0.06457571685314178,
-0.0013525313697755337,
0.029894009232521057,
0.017193062230944633,
0.03527802601456642,
0.10061489045619965,
-0.06148996204137802,
0.03688613697886467,
0.05959409475326538,
-0.1240239143371582,
-0.03481382876634598,
-0.10291338711977005,
0.07968917489051819,
0.05447361618280411,
0.04677252471446991,
-0.03911096602678299,
0.11647993326187134,
-0.020923033356666565,
0.06098499149084091,
0.12136980891227722,
-0.18376986682415009,
-0.002905345056205988,
-0.0031002818141132593,
0.06239115819334984,
-0.010110469534993172,
-0.03927972912788391,
0.0053862230852246284,
-0.025872811675071716,
0.025340674445033073,
0.03977068513631821,
-0.03984961286187172,
0.06595417112112045,
-0.07789667695760727,
-0.13578110933303833,
-0.042180612683296204,
0.06765788048505783,
0.021990301087498665,
-0.132639542222023,
-0.1730031967163086,
-0.044952794909477234,
0.0858568325638771,
-0.0038992837071418762,
-0.017398696392774582,
0.0012638550251722336,
0.00439284136518836,
0.06337924301624298,
-0.09953871369361877,
-0.06493151187896729,
-0.022844117134809494,
-0.04150788113474846,
0.16823789477348328,
0.02052631415426731,
0.014327183365821838,
-0.00305791012942791,
0.04748253524303436,
-0.09898725152015686,
-0.05588122829794884,
-0.07397390902042389,
-0.07981735467910767,
-0.12635044753551483,
-0.03365591540932655,
-0.043713368475437164,
-0.1433597356081009,
-0.0669327825307846,
0.14409305155277252,
-0.04560387134552002,
0.03416161611676216,
-0.02850058674812317,
0.03420432657003403,
0.10850393027067184,
0.07124906778335571,
-0.07391568273305893,
0.017811262980103493,
0.02733956277370453,
-0.0051536839455366135,
0.06828290224075317,
-0.038346245884895325,
-0.015006527304649353,
0.01593148708343506,
0.014154992997646332,
0.03797876089811325,
0.05086483806371689,
0.04018140584230423,
-0.04301256686449051,
-0.05544571205973625,
0.1326143741607666,
-0.11921149492263794,
-0.004362390376627445,
-0.005377068649977446,
-0.05426646023988724,
0.029022905975580215,
0.03785239905118942,
-0.0402601957321167,
-0.08520109206438065,
0.01148658711463213,
-0.055543363094329834,
-0.009956714697182178,
-0.07183392345905304,
-0.1506032943725586,
0.009157468564808369,
-0.04608108103275299,
-0.06577898561954498,
-0.07666879892349243,
-0.16803079843521118,
-0.06990045309066772,
0.0374501571059227,
-0.009120166301727295,
0.010992594063282013,
-0.014573359861969948,
-0.03279106318950653,
-0.0221810694783926,
0.023075157776474953,
-0.06249599903821945,
0.010425267741084099,
0.005370737053453922,
-0.04794403910636902,
0.05677627772092819,
-0.008582375943660736,
-0.007684129290282726,
-0.08720901608467102,
0.010147965513169765,
-0.23892298340797424,
0.11915600299835205,
-0.02118823677301407,
-0.031461525708436966,
-0.03435999155044556,
-0.006810478400439024,
0.002032475546002388,
0.04594869166612625,
0.00769013911485672,
0.15886333584785461,
-0.2038523405790329,
-0.020962003618478775,
0.2117903083562851,
-0.13187271356582642,
0.030992742627859116,
0.11940760910511017,
-0.06640632450580597,
0.060711659491062164,
0.1521618813276291,
0.0298033207654953,
0.07192059606313705,
-0.07150204479694366,
-0.06723469495773315,
-0.025447800755500793,
-0.08751791715621948,
0.14588022232055664,
0.04774045944213867,
-0.04135631024837494,
0.024593763053417206,
-0.003966775722801685,
-0.011737057939171791,
-0.05118345469236374,
0.00392036372795701,
-0.0319623127579689,
0.0369894839823246,
-0.025076985359191895,
0.027561549097299576,
0.011267043650150299,
-0.04226613789796829,
-0.021459851413965225,
-0.11266005039215088,
-0.006678218953311443,
0.06578276306390762,
-0.06924194097518921,
0.04687117412686348,
-0.069051593542099,
0.022701336070895195,
-0.03132029250264168,
0.004749134182929993,
-0.1782546192407608,
-0.0050517115741968155,
0.04104582220315933,
-0.06582073867321014,
0.104871466755867,
0.011644123122096062,
0.023794345557689667,
0.0666186660528183,
-0.046975910663604736,
0.027150902897119522,
-0.02238498441874981,
-0.02211650460958481,
-0.05551164597272873,
-0.12102291733026505,
-0.0490243136882782,
-0.050184257328510284,
0.10143296420574188,
-0.08712965250015259,
0.009861682541668415,
-0.03894943371415138,
0.005392011255025864,
0.024456815794110298,
-0.0859212577342987,
0.047798722982406616,
-0.002674003131687641,
-0.017078686505556107,
-0.04097503796219826,
0.022624125704169273,
0.031115731224417686,
-0.056095510721206665,
0.05782708525657654,
-0.1601138412952423,
-0.2290908694267273,
0.046646129339933395,
-0.0018041182775050402,
-0.12396632134914398,
0.0443037673830986,
0.003139414358884096,
-0.021449198946356773,
-0.08418986946344376,
-0.07528601586818695,
0.15380781888961792,
0.026795610785484314,
0.05174249783158302,
-0.06799189746379852,
-0.014118339866399765,
0.006530553102493286,
-0.025114258751273155,
-0.00488775409758091,
0.037119459360837936,
-0.016425834968686104,
-0.20773945748806,
0.01673070713877678,
-0.05633682757616043,
0.013448391109704971,
0.13407710194587708,
0.05331822857260704,
-0.06734106689691544,
-0.05501791834831238,
0.027220912277698517,
0.044473759829998016,
0.0731821209192276,
-0.009028094820678234,
0.004548124503344297,
0.027405323460698128,
0.05036865174770355,
0.012922325171530247,
-0.02611556276679039,
0.061886243522167206,
0.04718708246946335,
-0.018703041598200798,
-0.060698579996824265,
-0.06625252962112427,
-0.02845132164657116,
0.06780187785625458,
0.051319364458322525,
0.06530474871397018,
0.014146645553410053,
-0.045789431780576706,
-0.12501145899295807,
0.15804260969161987,
-0.09579738974571228,
-0.24771462380886078,
-0.13409169018268585,
-0.010216256603598595,
-0.01082654856145382,
0.04963386803865433,
-0.009914077818393707,
0.008339565247297287,
-0.04509777948260307,
-0.11124357581138611,
0.056716084480285645,
-0.025172509253025055,
-0.06173907592892647,
-0.06142082437872887,
-0.01124302577227354,
-0.011335639283061028,
-0.09809978306293488,
0.01712641492486,
-0.021329345181584358,
-0.03953137993812561,
0.005781370215117931,
-0.0031424108892679214,
0.07495007663965225,
0.10922101140022278,
-0.003750748932361603,
-0.05192907154560089,
-0.014418346807360649,
0.16390101611614227,
-0.05324147641658783,
0.11335941404104233,
0.05332815274596214,
-0.08082100749015808,
0.05637931823730469,
0.06943529844284058,
0.005084754899144173,
-0.049626484513282776,
0.02581034228205681,
0.04444115608930588,
-0.07059738039970398,
-0.16795247793197632,
-0.040807075798511505,
-0.044975645840168,
-0.014543880708515644,
0.040435466915369034,
0.0011255606077611446,
-0.006713869981467724,
0.027339469641447067,
-0.07896315306425095,
0.0005499888211488724,
0.03588656336069107,
0.08518039435148239,
-0.009985895827412605,
-0.023779191076755524,
0.07150928676128387,
-0.04494113847613335,
0.01944326050579548,
0.08369110524654388,
-0.07019209861755371,
0.16529162228107452,
-0.027200663462281227,
0.15082307159900665,
0.09043936431407928,
-0.0414540134370327,
0.025464817881584167,
0.09372054785490036,
-0.034526046365499496,
0.02750549092888832,
-0.058285098522901535,
-0.06769435852766037,
-0.05899246037006378,
0.04031410813331604,
0.03550209850072861,
0.03082730993628502,
-0.04630289226770401,
0.0004992326721549034,
0.046022966504096985,
0.1431349217891693,
0.09491157531738281,
-0.07840423285961151,
-0.08337242156267166,
0.025377720594406128,
-0.060764819383621216,
-0.06399296969175339,
-0.0032925973646342754,
0.08661270141601562,
-0.11280347406864166,
0.04144006222486496,
-0.022849297150969505,
0.06429874151945114,
-0.12654170393943787,
-0.01472470723092556,
-0.08734162151813507,
0.044648751616477966,
-0.017116226255893707,
0.05065619945526123,
-0.15432098507881165,
0.05896884575486183,
0.03201593458652496,
0.1162693053483963,
-0.07537132501602173,
0.016839083284139633,
0.02918020635843277,
-0.03225281462073326,
0.14995405077934265,
0.030316906049847603,
-0.04687991365790367,
-0.015648525208234787,
-0.05196789279580116,
0.0006181569769978523,
0.08469212055206299,
-0.008522622287273407,
0.11429470032453537,
0.031133200973272324,
0.006719565950334072,
-0.017526142299175262,
0.03779071196913719,
-0.1663413941860199,
-0.14692209661006927,
0.05396829545497894,
-0.07479119300842285,
-0.05980110540986061,
-0.018666934221982956,
-0.019396506249904633,
-0.06129594147205353,
0.1914386749267578,
-0.16075170040130615,
-0.06508244574069977,
-0.09752508997917175,
-0.0020904666744172573,
0.08105118572711945,
-0.06491340696811676,
0.030467579141259193,
0.011898158118128777,
0.11405624449253082,
-0.06970009952783585,
-0.040788330137729645,
0.03254569694399834,
-0.0673031136393547,
-0.10815289616584778,
-0.04750896245241165,
0.09005267918109894,
0.1809355914592743,
0.058547571301460266,
0.018898341804742813,
0.0427386537194252,
0.05178512632846832,
-0.10876749455928802,
-0.011471577920019627,
0.10037681460380554,
0.043264687061309814,
0.12914688885211945,
-0.051865626126527786,
-0.04315078258514404,
-0.08574863523244858,
0.01985977031290531,
0.11796677112579346,
0.21487799286842346,
-0.038902707397937775,
0.12014995515346527,
0.2199334353208542,
-0.10646799206733704,
-0.18641096353530884,
-0.01686423271894455,
-0.004533424042165279,
0.06363749504089355,
0.05295209214091301,
-0.182194322347641,
0.010462827980518341,
0.036768436431884766,
-0.003657326102256775,
0.03841863200068474,
-0.20655259490013123,
-0.10362544655799866,
0.0684262067079544,
0.05486186593770981,
0.07501615583896637,
-0.07528450340032578,
-0.038861870765686035,
-0.03275499492883682,
-0.008320463821291924,
0.10911883413791656,
-0.060929786413908005,
0.09665805101394653,
0.04197593778371811,
-0.050498537719249725,
0.02044178731739521,
-0.04651082307100296,
0.10530249774456024,
0.007258553057909012,
0.04264197498559952,
-0.05603605508804321,
-0.013732951134443283,
0.11095215380191803,
-0.05563449114561081,
0.09832142293453217,
0.05867924913764,
0.040126897394657135,
-0.06491445004940033,
-0.05397510901093483,
-0.07108889520168304,
0.03696063533425331,
-0.06211147457361221,
-0.03392244130373001,
-0.06342480331659317,
0.10677947103977203,
0.07099857181310654,
0.006784853525459766,
-0.0153337512165308,
-0.10417662560939789,
0.013947326689958572,
0.11080622673034668,
0.13694840669631958,
0.018334729596972466,
-0.031239617615938187,
0.005924837663769722,
-0.02208106219768524,
0.06554312258958817,
-0.03869589418172836,
0.034303516149520874,
0.030556127429008484,
0.04716753214597702,
0.10833288729190826,
0.032939836382865906,
-0.1598632037639618,
0.018910102546215057,
0.001715291291475296,
-0.11136525869369507,
-0.10523634403944016,
0.020272554829716682,
-0.029165245592594147,
-0.12345147132873535,
-0.03332315385341644,
0.13266241550445557,
-0.028741005808115005,
-0.017662055790424347,
0.000028624199330806732,
0.04046541452407837,
0.009945098310709,
0.10702501237392426,
0.04324702173471451,
0.0239690113812685,
-0.059924155473709106,
0.09366491436958313,
0.05389180779457092,
-0.05139869824051857,
0.05160723626613617,
0.0801842212677002,
-0.07538945972919464,
-0.05264551565051079,
-0.06413841247558594,
0.05155973881483078,
0.04880594462156296,
-0.025515113025903702,
-0.009157795459032059,
-0.047661151736974716,
0.05967355892062187,
0.11680528521537781,
0.027418967336416245,
0.05530254542827606,
-0.046650905162096024,
0.024145489558577538,
-0.07628343999385834,
0.04319261759519577,
0.035504188388586044,
0.004205911420285702,
-0.002045897301286459,
0.09930623322725296,
0.005649574100971222,
0.009409293532371521,
-0.018557623028755188,
-0.05515462905168533,
-0.09976663440465927,
0.002616430865600705,
-0.018111059442162514,
0.01302056759595871,
-0.10056284070014954,
-0.014330565929412842,
-0.01802210323512554,
0.05337362736463547,
0.02867187187075615,
0.010719692334532738,
-0.005422462709248066,
-0.03516775742173195,
-0.035514190793037415,
0.004534659907221794,
-0.0954735055565834,
0.003597671166062355,
0.032351866364479065,
-0.06798005849123001,
0.0731278657913208,
0.0003331585321575403,
-0.039914488792419434,
0.024411940947175026,
-0.06233862787485123,
0.0033027157187461853,
-0.04314519837498665,
-0.014535966329276562,
-0.013422793708741665,
-0.16675329208374023,
-0.0025686947628855705,
-0.01477804034948349,
0.014735834673047066,
0.004966029431670904,
0.050471432507038116,
-0.05305151641368866,
0.07344724982976913,
0.046608105301856995,
-0.004406779073178768,
-0.08578935265541077,
0.07431907951831818,
0.048911549150943756,
0.0028259344398975372,
0.11601540446281433,
-0.0217352993786335,
0.06480579078197479,
-0.09494592994451523,
0.003768562339246273,
0.024269085377454758,
0.011583099141716957,
0.05437838286161423,
-0.016916494816541672,
0.0399780310690403,
-0.004546351730823517,
0.10623253136873245,
-0.05301394313573837,
0.020902201533317566,
0.028600087389349937,
-0.03138526529073715,
-0.061621248722076416,
0.004558455664664507,
0.038354337215423584,
-0.0026956824585795403,
-0.07520847767591476,
-0.003659455105662346,
0.0282641239464283,
-0.025886012241244316,
-0.008930166251957417,
0.17473450303077698,
0.07144922018051147,
0.1274002641439438,
0.08270405977964401,
-0.01204477995634079,
-0.004319638013839722,
-0.03866330534219742,
0.026508625596761703,
0.03659988194704056,
0.01136874407529831,
-0.00862729363143444,
0.003330906853079796,
0.11631874740123749,
-0.08177082240581512,
0.14938676357269287,
0.03279739245772362,
-0.08360010385513306,
-0.08246386051177979,
-0.2245294600725174,
-0.003569698426872492,
0.09154286235570908,
-0.02438436448574066,
-0.08774372935295105,
0.0407356396317482,
0.08179914951324463,
-0.0011827507987618446,
-0.012311202473938465,
0.07962347567081451,
-0.13184025883674622,
-0.09135746955871582,
0.06340897083282471,
-0.023390240967273712,
-0.0004923229571431875,
0.06930936872959137,
0.014977918937802315,
0.0428297221660614,
0.054927241057157516,
0.08939764648675919,
0.06880149245262146,
0.08690790086984634,
0.03336121514439583,
-0.05516591668128967,
-0.08688946068286896,
-0.0011966507881879807,
0.012902749702334404,
0.07932280004024506,
0.18640229105949402,
0.041890207678079605,
-0.06719682365655899,
-0.00037460634484887123,
0.11143401265144348,
-0.056026823818683624,
-0.11074952781200409,
-0.1541101485490799,
0.17280636727809906,
0.020535200834274292,
-0.005087617784738541,
-0.0167727991938591,
-0.09989383071660995,
0.07965532690286636,
0.18832428753376007,
0.16386348009109497,
-0.01867404580116272,
0.0022254534997045994,
0.013361554592847824,
0.0008196055423468351,
0.012368280440568924,
0.08986097574234009,
-0.03101792186498642,
0.2501303553581238,
-0.014437141828238964,
0.11190138012170792,
-0.02852211892604828,
-0.04229619726538658,
-0.048801541328430176,
0.07219495624303818,
0.004597318358719349,
0.0266224704682827,
-0.044900812208652496,
0.0755196213722229,
-0.07113946974277496,
-0.2386167347431183,
-0.011395996436476707,
0.030293965712189674,
-0.05814612656831741,
-0.0001328829675912857,
-0.0658879205584526,
0.03059765137732029,
0.10311247408390045,
0.02697480097413063,
0.019776292145252228,
0.13324066996574402,
0.014974270947277546,
-0.03780669718980789,
-0.05295487865805626,
0.05248316004872322,
-0.014650862663984299,
0.15968897938728333,
0.021389391273260117,
0.09702155739068985,
0.10297805070877075,
-0.0034312938805669546,
-0.08899879455566406,
0.03199129179120064,
-0.010504299774765968,
0.003526328131556511,
0.03704094514250755,
0.20485490560531616,
-0.0011798110790550709,
0.07619412243366241,
0.04590345546603203,
-0.039621271193027496,
0.06467120349407196,
-0.05778256803750992,
-0.017425348982214928,
-0.10170270502567291,
0.11060354113578796,
-0.08177986741065979,
0.16294512152671814,
0.13978710770606995,
0.0062883272767066956,
0.011502647772431374,
-0.026346053928136826,
-0.020328758284449577,
0.003507972229272127,
0.08401001244783401,
-0.0332266241312027,
-0.13662289083003998,
0.003137522842735052,
-0.02743782103061676,
0.03662615269422531,
-0.256990522146225,
-0.0442090667784214,
0.05010554566979408,
-0.04415636137127876,
0.025507675483822823,
0.04855174943804741,
0.01257764920592308,
0.03354719653725624,
-0.04282628744840622,
-0.01857113651931286,
-0.0017326108645647764,
0.08200560510158539,
-0.093110591173172,
-0.06589442491531372
] |
null | null | transformers | # MultiBERTs Seed 12 (uncased)
Seed 12 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
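As a minimal sketch of that feature-based setup (the sentences, labels and scikit-learn classifier here are placeholders chosen for illustration, not part of the original release):
```python
import torch
from sklearn.linear_model import LogisticRegression
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("multiberts-seed-12")
model = BertModel.from_pretrained("multiberts-seed-12")

# Hypothetical labeled sentences standing in for a real dataset.
sentences = ["a great movie", "a terrible movie"]
labels = [1, 0]

with torch.no_grad():
    encoded = tokenizer(sentences, padding=True, return_tensors="pt")
    # Use the [CLS] token's hidden state as a fixed sentence feature.
    features = model(**encoded).last_hidden_state[:, 0, :].numpy()

classifier = LogisticRegression().fit(features, labels)
```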
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
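As noted above, the raw model can be tried directly on masked language modeling, for example with the `fill-mask` pipeline. This assumes the uploaded checkpoint includes the MLM head; if it does not, `transformers` will warn that the head is randomly initialized and the predictions will not be meaningful:
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="multiberts-seed-12")
unmasker("Hello, I'm a [MASK] model.")
```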
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-12')
model = BertModel.from_pretrained("multiberts-seed-12")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-12 | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 12 (uncased)
Seed 12 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 12 (uncased)\n\nSeed 12 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\nThe model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 12 (uncased)\n\nSeed 12 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\nThe model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
63,
111,
335,
134,
25,
95,
48,
3,
222,
110,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 12 (uncased)\n\nSeed 12 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.06731268763542175,
0.031453937292099,
-0.0022009192034602165,
0.09543926268815994,
0.0772232711315155,
0.029377229511737823,
0.16204749047756195,
0.02951139211654663,
-0.03195621818304062,
0.02236972562968731,
0.1012520045042038,
0.0391494520008564,
0.03286978602409363,
0.029135914519429207,
0.06890876591205597,
-0.26319435238838196,
0.07594816386699677,
-0.058417901396751404,
0.0399017408490181,
0.060763344168663025,
0.10118283331394196,
-0.07130715250968933,
0.07688675820827484,
0.04207264631986618,
-0.0745038315653801,
-0.025486478582024574,
-0.005358979105949402,
-0.03344196081161499,
0.07576398551464081,
0.09418661147356033,
0.06296397745609283,
-0.010151585564017296,
0.06304917484521866,
-0.08811667561531067,
0.019832517951726913,
0.022469941526651382,
-0.011259138584136963,
0.035840071737766266,
0.02724321186542511,
-0.0024614722933620214,
0.11544056236743927,
0.025552313774824142,
0.0885622501373291,
0.03894433379173279,
-0.08715349435806274,
-0.09583764523267746,
-0.07039428502321243,
0.09396535158157349,
0.034900687634944916,
0.04365763068199158,
-0.007969851605594158,
0.07200975716114044,
-0.010992715135216713,
0.05814577639102936,
0.08068374544382095,
-0.23735204339027405,
-0.02347690612077713,
0.05093102157115936,
0.05146016925573349,
0.041987694799900055,
0.012722926214337349,
0.023125551640987396,
0.006691001355648041,
0.05146819353103638,
-0.0025068367831408978,
-0.029506705701351166,
0.13548648357391357,
-0.05378340184688568,
-0.13892224431037903,
-0.030040059238672256,
0.15332648158073425,
0.025421489030122757,
-0.11222194135189056,
-0.1118958368897438,
-0.0010051913559436798,
0.17229799926280975,
-0.004260968416929245,
-0.0055310968309640884,
-0.0111420638859272,
-0.006404480896890163,
0.024933479726314545,
-0.11796021461486816,
-0.08256010711193085,
-0.024770310148596764,
-0.06347548216581345,
0.1498909890651703,
0.04863674193620682,
0.06904861330986023,
-0.06195120885968208,
0.0415642112493515,
-0.14894214272499084,
-0.03201381862163544,
-0.045643649995326996,
-0.10054086148738861,
0.014022748917341232,
0.028358958661556244,
-0.04264850169420242,
-0.1131066083908081,
-0.038500092923641205,
0.07656572014093399,
0.03766163811087608,
0.03724747523665428,
-0.007333716377615929,
0.028516661375761032,
0.10332933068275452,
0.10151742398738861,
-0.06232088804244995,
0.07926823198795319,
0.013031032867729664,
-0.01753448322415352,
0.04001766815781593,
-0.05498543381690979,
-0.12324109673500061,
0.07074815779924393,
-0.0361306332051754,
0.018340719863772392,
0.021991301327943802,
0.04314306005835533,
-0.01007685624063015,
-0.07675093412399292,
0.14585520327091217,
-0.09697632491588593,
-0.000037815188989043236,
-0.003719855798408389,
0.01421508938074112,
0.07801302522420883,
0.02370697818696499,
0.0023933774791657925,
-0.05956283211708069,
-0.027013864368200302,
-0.06518429517745972,
-0.029903586953878403,
-0.05825287103652954,
-0.13698223233222961,
0.005771799944341183,
-0.02625136263668537,
-0.013561810366809368,
-0.09954819083213806,
-0.1806136518716812,
-0.016153858974575996,
0.06839737296104431,
-0.012282155454158783,
0.011540764942765236,
-0.002286800416186452,
0.008960571140050888,
-0.006838127970695496,
0.03278039023280144,
-0.03921373188495636,
0.010070931166410446,
-0.011792468838393688,
-0.06711632758378983,
0.04105190932750702,
-0.12034786492586136,
0.04084312543272972,
-0.0530916228890419,
0.012647271156311035,
-0.19514203071594238,
0.10781057178974152,
-0.032677628099918365,
-0.03869698941707611,
-0.04495429992675781,
-0.06343016773462296,
0.019821520894765854,
0.04660554975271225,
-0.015136847272515297,
0.10093183070421219,
-0.12629014253616333,
-0.051752038300037384,
0.16537682712078094,
-0.15805163979530334,
0.017994793131947517,
0.10423769056797028,
-0.07046855986118317,
0.039571840316057205,
0.14584945142269135,
0.07448411732912064,
0.07797656208276749,
-0.03783940523862839,
0.011180144734680653,
0.05703708529472351,
-0.04325982928276062,
0.0772610753774643,
0.10870202630758286,
-0.010528734885156155,
-0.12248368561267853,
0.02929152175784111,
-0.06414995342493057,
-0.03465062007308006,
-0.021516256034374237,
-0.022597866132855415,
0.017066121101379395,
-0.051728345453739166,
0.06212028115987778,
-0.0069384267553687096,
0.002852189354598522,
-0.025165516883134842,
-0.07494671642780304,
0.08650775253772736,
0.07702021300792694,
-0.08545203506946564,
0.014873916283249855,
-0.09330863505601883,
0.028455305844545364,
-0.06427732855081558,
-0.005135479383170605,
-0.14523696899414062,
-0.04846223443746567,
0.029503513127565384,
-0.07541979849338531,
0.09789994359016418,
0.11943399906158447,
0.010845285840332508,
0.11412321031093597,
-0.044848352670669556,
0.026525283232331276,
-0.014062156900763512,
-0.009185368195176125,
-0.043502528220415115,
-0.13900651037693024,
-0.06980988383293152,
-0.06654194742441177,
0.082846999168396,
-0.05798874795436859,
0.024572642520070076,
-0.08266264200210571,
-0.04350887984037399,
-0.027353782206773758,
-0.044245898723602295,
0.008095395751297474,
0.005547569133341312,
-0.013248980976641178,
-0.029734767973423004,
0.041069626808166504,
0.021950576454401016,
-0.08899100124835968,
0.08572570979595184,
-0.12816698849201202,
-0.061350561678409576,
0.06908857077360153,
0.0006990991532802582,
-0.038965098559856415,
0.09297005832195282,
0.007685581687837839,
-0.010703167878091335,
-0.056839682161808014,
-0.04796212166547775,
0.21849173307418823,
-0.02043156325817108,
0.0788765698671341,
-0.113503597676754,
0.0033853575587272644,
0.037378326058387756,
-0.05558997765183449,
-0.055394064635038376,
0.07758373767137527,
0.04507329687476158,
-0.2123725414276123,
0.023055657744407654,
0.07173923403024673,
0.06502792984247208,
0.13971617817878723,
0.023273631930351257,
-0.030799586325883865,
-0.05742131918668747,
-0.015644047409296036,
-0.012433681637048721,
0.054753609001636505,
-0.046330973505973816,
0.004016744904220104,
0.049082428216934204,
0.05302976816892624,
0.017067572101950645,
-0.07222959399223328,
0.020989246666431427,
0.0498461090028286,
-0.015093978494405746,
-0.07603923976421356,
-0.053567104041576385,
-0.03572174906730652,
0.07328830659389496,
0.03891729190945625,
0.04625142738223076,
0.05232953280210495,
-0.021615372970700264,
-0.13528157770633698,
0.1683637499809265,
-0.13473650813102722,
-0.22169896960258484,
-0.13134244084358215,
-0.08651984483003616,
-0.07559767365455627,
0.03892600163817406,
0.03978919982910156,
-0.03951500356197357,
-0.04757409170269966,
-0.10245981812477112,
0.0681682899594307,
-0.11248736828565598,
-0.05802922695875168,
0.014499031007289886,
-0.05542413890361786,
-0.009256197139620781,
-0.12808659672737122,
-0.00845751166343689,
-0.027597766369581223,
-0.08050617575645447,
0.0024525411427021027,
-0.04570968076586723,
0.006196841597557068,
0.1356482356786728,
0.005180676467716694,
-0.013690643012523651,
-0.015777479857206345,
0.19944459199905396,
0.029675070196390152,
0.045475803315639496,
0.12838295102119446,
-0.06740205734968185,
0.05916473641991615,
0.026339150965213776,
0.039045169949531555,
-0.04799952358007431,
-0.0008488455787301064,
-0.023173101246356964,
-0.11520227789878845,
-0.21938203275203705,
-0.06266772747039795,
0.00888021569699049,
0.017542287707328796,
0.02165827713906765,
0.01663557067513466,
0.021092891693115234,
0.056217532604932785,
-0.03421500697731972,
0.03669985383749008,
0.03556424379348755,
0.05402783676981926,
0.06596623361110687,
-0.061441995203495026,
0.09460964053869247,
-0.07099103927612305,
0.023281428962945938,
0.10618042945861816,
-0.06779050827026367,
0.1587909758090973,
0.04121946170926094,
0.04695335775613785,
0.0935242772102356,
-0.005752649158239365,
0.05777452886104584,
0.1098257452249527,
-0.06184500455856323,
0.018643124029040337,
-0.07396511733531952,
-0.056390151381492615,
-0.044290199875831604,
0.06251072883605957,
0.032139189541339874,
-0.0024805106222629547,
-0.1007157415151596,
0.03538393974304199,
-0.0400061309337616,
0.0727739930152893,
0.06956823170185089,
-0.10615572333335876,
-0.10057903826236725,
0.04373795539140701,
-0.04640129953622818,
-0.0876702144742012,
0.03838161379098892,
0.08689047396183014,
-0.13817459344863892,
0.06122318282723427,
0.01640530675649643,
0.07339787483215332,
-0.09579629451036453,
0.011838222853839397,
-0.06632322818040848,
0.013218684121966362,
0.0041096885688602924,
0.09375877678394318,
-0.13143403828144073,
0.1067548394203186,
0.026749305427074432,
0.040798306465148926,
-0.08916807919740677,
0.018382981419563293,
-0.010325263254344463,
0.07579654455184937,
0.12016397714614868,
0.04175285995006561,
-0.0631924495100975,
-0.017947357147932053,
-0.06827618181705475,
0.03595934063196182,
0.0721399337053299,
-0.03590334951877594,
0.042704444378614426,
0.0034346352331340313,
0.01578882336616516,
-0.006289338227361441,
0.013196613639593124,
-0.12998980283737183,
-0.14407067000865936,
0.06997808068990707,
-0.06983843445777893,
-0.08381934463977814,
-0.037998706102371216,
-0.06976976990699768,
-0.0859680250287056,
0.1463114619255066,
-0.07452338188886642,
-0.11085853725671768,
-0.1028757095336914,
0.007764891721308231,
0.07608996331691742,
-0.0660645067691803,
0.050791963934898376,
-0.04904317855834961,
0.09096840023994446,
-0.03999754786491394,
-0.11028245091438293,
0.017766069620847702,
-0.08836869895458221,
-0.11028239876031876,
-0.03012755513191223,
0.09303882718086243,
0.14476189017295837,
0.054198116064071655,
0.020914288237690926,
0.01485266163945198,
0.004942260682582855,
-0.12889878451824188,
0.0048225149512290955,
0.13534483313560486,
0.01674896851181984,
0.09831763803958893,
-0.06346289813518524,
-0.028376569971442223,
-0.013930270448327065,
0.006124177947640419,
0.134983628988266,
0.15631413459777832,
-0.05826760083436966,
0.14645914733409882,
0.22750955820083618,
-0.09818042069673538,
-0.19317609071731567,
-0.07422985136508942,
0.003968247212469578,
0.04528796300292015,
0.04806823283433914,
-0.20189017057418823,
0.10200442373752594,
0.05250738933682442,
-0.010020321235060692,
-0.031236737966537476,
-0.19374589622020721,
-0.1012212485074997,
0.10292650759220123,
0.06423549354076385,
0.19292756915092468,
-0.06744009256362915,
-0.04252132773399353,
-0.04145404323935509,
-0.05867202579975128,
0.09393355250358582,
-0.013157576322555542,
0.0828295648097992,
0.017370175570249557,
0.020020809024572372,
-0.002012345939874649,
-0.010660296306014061,
0.10703966021537781,
0.048731617629528046,
0.0176217220723629,
-0.07424676418304443,
-0.04869524762034416,
0.10612460970878601,
-0.031228922307491302,
0.12280532717704773,
0.030048947781324387,
0.05890404433012009,
-0.07812477648258209,
-0.059975530952215195,
-0.08382591605186462,
0.007934982888400555,
-0.04062771052122116,
-0.05028552561998367,
-0.04986431449651718,
0.03805817663669586,
0.023766960948705673,
0.011656643822789192,
-0.01421149168163538,
-0.0553346648812294,
0.017681628465652466,
0.05749289318919182,
0.1613888442516327,
-0.00721336156129837,
-0.06556721031665802,
-0.07156527042388916,
-0.056347399950027466,
0.04505949839949608,
-0.11012184619903564,
0.028852296993136406,
0.024698607623577118,
-0.0018088540527969599,
0.11375920474529266,
0.03492492064833641,
-0.11360204964876175,
0.014475229196250439,
0.007735179737210274,
-0.09987248480319977,
-0.1558641940355301,
-0.016054857522249222,
0.055962078273296356,
-0.06478385627269745,
0.042085468769073486,
0.1561860591173172,
-0.027261115610599518,
-0.03485403209924698,
-0.05470173805952072,
0.03259479999542236,
-0.0323096327483654,
0.039349380880594254,
0.08224442601203918,
0.01815938577055931,
-0.08161778748035431,
0.0546162910759449,
0.04445967823266983,
-0.023254571482539177,
0.06427949666976929,
0.02343866042792797,
-0.06867088377475739,
-0.08457058668136597,
-0.06893521547317505,
0.10598471760749817,
-0.03891976177692413,
-0.06525657325983047,
0.04502495378255844,
-0.10719344019889832,
0.06823207437992096,
0.1025504320859909,
0.03502136841416359,
0.04594748467206955,
-0.08386921882629395,
0.00606686994433403,
-0.036657921969890594,
0.03558368608355522,
-0.04693257063627243,
-0.030846312642097473,
-0.045003876090049744,
0.02814735472202301,
0.05906367301940918,
0.09704816341400146,
-0.034549131989479065,
-0.07584147155284882,
-0.08547940105199814,
-0.013937639072537422,
-0.11253281682729721,
-0.009985873475670815,
-0.07215093821287155,
0.0018161074258387089,
0.004822578746825457,
-0.025982800871133804,
0.02787337638437748,
0.029202498495578766,
-0.052560314536094666,
-0.00815582275390625,
-0.02865549363195896,
0.05757633596658707,
-0.07400767505168915,
0.014782587066292763,
0.01618710532784462,
-0.01908489130437374,
0.08994076400995255,
0.04514254629611969,
-0.031771305948495865,
0.055016905069351196,
-0.03303184732794762,
0.03299252688884735,
-0.04478389024734497,
0.006934380624443293,
-0.016751639544963837,
-0.10818147659301758,
-0.018888140097260475,
0.007638579234480858,
-0.022033801302313805,
0.01481806579977274,
0.06299164891242981,
-0.050574928522109985,
0.08267150074243546,
0.06387624889612198,
-0.05487746745347977,
-0.05755738914012909,
0.038691356778144836,
0.0033855028450489044,
0.016338594257831573,
0.07823201268911362,
0.004520612768828869,
0.059035491198301315,
-0.0862293392419815,
-0.0011815610341727734,
0.0029991483315825462,
-0.018567580729722977,
-0.011155426502227783,
-0.07045818120241165,
-0.0024461019784212112,
0.009398379363119602,
0.17601367831230164,
-0.006573137827217579,
-0.016359899193048477,
0.004040316678583622,
0.06970430165529251,
0.03345930948853493,
0.005371056962758303,
0.08029299974441528,
-0.01836235634982586,
-0.020246431231498718,
-0.01925482787191868,
0.027336567640304565,
-0.014071283861994743,
0.014161407947540283,
0.13201722502708435,
0.04543549194931984,
0.11750602722167969,
0.07287193834781647,
0.06458289176225662,
0.05567695572972298,
-0.10834947228431702,
-0.07455151528120041,
0.03752055764198303,
0.05240117758512497,
-0.03664058446884155,
0.031485915184020996,
0.060513243079185486,
-0.09125907719135284,
0.08416150510311127,
0.04518228396773338,
-0.05875755846500397,
-0.1273535043001175,
-0.20743581652641296,
-0.03836909681558609,
-0.009898630902171135,
-0.01872146688401699,
-0.11143167316913605,
0.02304854243993759,
0.09664605557918549,
0.04108839109539986,
-0.021236034110188484,
0.06921733915805817,
-0.1461244374513626,
-0.03468935564160347,
0.03655143454670906,
-0.012943707406520844,
0.025781922042369843,
0.04985165223479271,
0.020216068252921104,
0.01934286206960678,
0.07217022031545639,
0.05033135786652565,
0.03895779699087143,
0.053952887654304504,
0.029857276007533073,
-0.05144820734858513,
-0.08574973046779633,
-0.0456819050014019,
0.004570793826133013,
0.060153279453516006,
0.12756699323654175,
0.008769867941737175,
-0.06809420883655548,
0.001017894595861435,
0.05585593730211258,
-0.02096010372042656,
-0.0842839777469635,
-0.1163996085524559,
0.21635594964027405,
-0.02378327026963234,
0.012699004262685776,
-0.0009817928075790405,
-0.038266826421022415,
0.02045495994389057,
0.20393548905849457,
0.26020199060440063,
-0.017235442996025085,
-0.016966821625828743,
0.015300161205232143,
0.000905950553715229,
0.03482843190431595,
0.12372073531150818,
-0.026066705584526062,
0.2276231348514557,
-0.04596491903066635,
0.06209411472082138,
-0.054453954100608826,
-0.015417429618537426,
-0.06639726459980011,
0.05656271427869797,
0.0030717484187334776,
-0.01639646664261818,
-0.011406883597373962,
0.07656142860651016,
-0.035236503928899765,
-0.12697261571884155,
-0.03257734328508377,
0.0055762771517038345,
-0.059022750705480576,
0.010405400767922401,
0.00533531978726387,
0.0367257297039032,
0.08054997026920319,
-0.015469112433493137,
-0.0026264488697052,
0.12589812278747559,
0.01184790302067995,
-0.11458972841501236,
-0.038970042020082474,
0.12929704785346985,
0.024684488773345947,
0.12956559658050537,
0.047350283712148666,
0.08929160982370377,
0.07228842377662659,
0.014005620032548904,
-0.06603705137968063,
0.03335050120949745,
-0.024548035115003586,
0.02105775475502014,
0.00013791769742965698,
0.10616163909435272,
-0.0108746737241745,
0.07197113335132599,
0.01628703437745571,
-0.08326075971126556,
0.05673699826002121,
0.008596647530794144,
-0.07444585859775543,
-0.03824077174067497,
0.11053987592458725,
-0.10984323918819427,
0.14360180497169495,
0.14649048447608948,
-0.004172551445662975,
-0.07043804228305817,
-0.011352022178471088,
0.021397501230239868,
-0.010250968858599663,
0.045831307768821716,
-0.03023163601756096,
-0.13056901097297668,
-0.0020573041401803493,
-0.09734687209129333,
0.04769524559378624,
-0.2491331398487091,
-0.06355756521224976,
0.017964307218790054,
-0.0030668051913380623,
-0.03026498481631279,
0.0300237275660038,
0.06240268424153328,
-0.004413241054862738,
-0.03305334225296974,
0.029322447255253792,
-0.028631014749407768,
0.03603507578372955,
-0.07465341687202454,
-0.06815727800130844
] |
null | null | transformers | # MultiBERTs Seed 13 (uncased)
Seed 13 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# Load the WordPiece tokenizer and the pretrained encoder for this seed.
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-13')
model = BertModel.from_pretrained("multiberts-seed-13")

# Tokenize the input and run a forward pass to obtain the hidden states.
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
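With a recent version of `transformers`, `output.last_hidden_state` holds the token-level features. A short, hedged continuation of the snippet above (the hidden size of 768 is the standard BERT-base value and is assumed here):
```python
# One hidden vector per input token; shape is (batch_size, sequence_length, hidden_size).
features = output.last_hidden_state
print(features.shape)  # e.g. torch.Size([1, sequence_length, 768])
```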
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
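For convenience, that snippet looks roughly like the following when pointed at this checkpoint instead of bert-base-uncased; this assumes the hosted weights include the masked-LM head (which the pretraining objective implies), and the example prompts are the ones used in that card:
```python
from transformers import pipeline

# Compare the top completions for the two prompts to probe gender bias.
unmasker = pipeline('fill-mask', model='multiberts-seed-13')
print(unmasker("The man worked as a [MASK]."))
print(unmasker("The woman worked as a [MASK]."))
```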
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus, and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text, usually longer than a single sentence. The only constraint is that the two
"sentences" have a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-13 | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 13 (uncased)
Seed 13 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 13 (uncased)\n\nSeed 13 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\nThe model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 13 (uncased)\n\nSeed 13 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\nThe model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
63,
111,
335,
134,
25,
95,
48,
3,
222,
110,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 13 (uncased)\n\nSeed 13 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.06638345867395401,
0.02728714421391487,
-0.0021763681434094906,
0.0966494008898735,
0.0759570375084877,
0.028158728033304214,
0.159017875790596,
0.030556220561265945,
-0.03438849002122879,
0.022006971761584282,
0.10296498239040375,
0.03887429088354111,
0.03328445553779602,
0.031605109572410583,
0.06899212300777435,
-0.2623547315597534,
0.07574969530105591,
-0.05951453745365143,
0.04429580643773079,
0.06098799407482147,
0.10277409851551056,
-0.0706164538860321,
0.07762886583805084,
0.043342627584934235,
-0.0711202472448349,
-0.027698198333382607,
-0.004946790635585785,
-0.03434666618704796,
0.07420045882463455,
0.09441202878952026,
0.06251534819602966,
-0.010935642756521702,
0.06243380904197693,
-0.0876908004283905,
0.019362544640898705,
0.02118239551782608,
-0.010518688708543777,
0.03718513995409012,
0.025389768183231354,
-0.002202700823545456,
0.1071573942899704,
0.023991547524929047,
0.08880440145730972,
0.038855306804180145,
-0.08998594433069229,
-0.09447755664587021,
-0.07254825532436371,
0.09276443719863892,
0.03475416824221611,
0.044577427208423615,
-0.00671505369246006,
0.07437436282634735,
-0.009702249430119991,
0.05939985439181328,
0.08533819019794464,
-0.2326313853263855,
-0.024635091423988342,
0.057706817984580994,
0.05202949792146683,
0.04412560164928436,
0.01345540676265955,
0.02534755878150463,
0.005833331495523453,
0.0495479553937912,
-0.00047831516712903976,
-0.02951451763510704,
0.14005404710769653,
-0.05221099406480789,
-0.13878637552261353,
-0.031777434051036835,
0.1529819369316101,
0.02620573341846466,
-0.11096126586198807,
-0.11357571184635162,
-0.002669629640877247,
0.16399747133255005,
-0.005316600669175386,
-0.009504202753305435,
-0.009570966474711895,
-0.006768549792468548,
0.025434669107198715,
-0.11914673447608948,
-0.08233388513326645,
-0.0245172381401062,
-0.0643342137336731,
0.14936000108718872,
0.048015154898166656,
0.06851417571306229,
-0.059928346425294876,
0.04014700651168823,
-0.15275123715400696,
-0.03257955610752106,
-0.04722947999835014,
-0.09799689054489136,
0.01511695608496666,
0.02579520456492901,
-0.04463572800159454,
-0.1168338805437088,
-0.03953876346349716,
0.0798962414264679,
0.033616118133068085,
0.039108119904994965,
-0.004322154447436333,
0.029538452625274658,
0.10495410859584808,
0.10437040030956268,
-0.06223917007446289,
0.08191139250993729,
0.015769293531775475,
-0.01990313082933426,
0.042253218591213226,
-0.05643996596336365,
-0.12210816144943237,
0.07072354108095169,
-0.03441322594881058,
0.01715533807873726,
0.022157620638608932,
0.04388108476996422,
-0.010744175873696804,
-0.07811766862869263,
0.14941710233688354,
-0.097283735871315,
-0.001324724406003952,
-0.004769779741764069,
0.016660436987876892,
0.07543015480041504,
0.023131107911467552,
0.0039109340868890285,
-0.05790410190820694,
-0.029453439638018608,
-0.06438255310058594,
-0.03131437674164772,
-0.05847056210041046,
-0.13456033170223236,
0.005329126492142677,
-0.0190211720764637,
-0.014148655347526073,
-0.09997527301311493,
-0.18394985795021057,
-0.01520373485982418,
0.06947846710681915,
-0.012556168250739574,
0.011436804197728634,
-0.0006299770902842283,
0.010299498215317726,
-0.006038544699549675,
0.03440600633621216,
-0.03974781185388565,
0.010668156668543816,
-0.013422388583421707,
-0.06457271426916122,
0.03944723680615425,
-0.11877617985010147,
0.04056178033351898,
-0.054084815084934235,
0.01330108754336834,
-0.2007368505001068,
0.10672391206026077,
-0.032383520156145096,
-0.03896553814411163,
-0.04413633048534393,
-0.06232057511806488,
0.017682421952486038,
0.04553648456931114,
-0.016045603901147842,
0.10061296075582504,
-0.12018997967243195,
-0.05147363245487213,
0.15918242931365967,
-0.15723156929016113,
0.018939517438411713,
0.10509879142045975,
-0.06987495720386505,
0.03825951740145683,
0.1438327431678772,
0.0788281112909317,
0.07816900312900543,
-0.03747182339429855,
0.018383875489234924,
0.059401389211416245,
-0.04469474032521248,
0.07942195981740952,
0.10919779539108276,
-0.01131460815668106,
-0.12470394372940063,
0.029951348900794983,
-0.06817103922367096,
-0.0355583131313324,
-0.021463092416524887,
-0.024613291025161743,
0.017033888027071953,
-0.0495639368891716,
0.06061621010303497,
-0.007590029388666153,
0.004757079295814037,
-0.024700215086340904,
-0.07319407165050507,
0.07871273159980774,
0.07601326704025269,
-0.08527901768684387,
0.01606801152229309,
-0.09104254096746445,
0.029914747923612595,
-0.06233987212181091,
-0.005976385436952114,
-0.14207111299037933,
-0.04687923938035965,
0.0308381300419569,
-0.08131949603557587,
0.09575607627630234,
0.11751355230808258,
0.010370166040956974,
0.11312158405780792,
-0.04525437206029892,
0.027890706434845924,
-0.01446213573217392,
-0.008696875534951687,
-0.04358617961406708,
-0.13857002556324005,
-0.06980882585048676,
-0.06579913198947906,
0.08204058557748795,
-0.05738024413585663,
0.02432306855916977,
-0.08444911241531372,
-0.045081645250320435,
-0.028474193066358566,
-0.04674384370446205,
0.008697851561009884,
0.004897166043519974,
-0.012208622880280018,
-0.029331739991903305,
0.04053111374378204,
0.023616956546902657,
-0.08835983276367188,
0.08590652048587799,
-0.12939131259918213,
-0.05907811224460602,
0.06709592789411545,
-0.0021004769951105118,
-0.04197347164154053,
0.09166522324085236,
0.008017715997993946,
-0.011354752816259861,
-0.056592606008052826,
-0.04956065118312836,
0.22211800515651703,
-0.02147071808576584,
0.08045785129070282,
-0.1140071228146553,
0.00118966621812433,
0.038032714277505875,
-0.05718732252717018,
-0.05646184831857681,
0.07711214572191238,
0.04017972573637962,
-0.21292580664157867,
0.02502840757369995,
0.07124759256839752,
0.06388149410486221,
0.1392778754234314,
0.023831849917769432,
-0.02852001041173935,
-0.058386195451021194,
-0.013457993976771832,
-0.01303020864725113,
0.05457112938165665,
-0.047288842499256134,
0.0044435844756662846,
0.04873054474592209,
0.053630806505680084,
0.01716260425746441,
-0.06961197406053543,
0.021027766168117523,
0.05302960053086281,
-0.01522020436823368,
-0.07147695124149323,
-0.05469672009348869,
-0.03734782338142395,
0.07167680561542511,
0.041584521532058716,
0.04771871119737625,
0.05342274159193039,
-0.021921321749687195,
-0.13751260936260223,
0.16851073503494263,
-0.133520245552063,
-0.2232067584991455,
-0.132586270570755,
-0.08701477944850922,
-0.07524639368057251,
0.039263173937797546,
0.038357071578502655,
-0.036434631794691086,
-0.04797276854515076,
-0.10344661772251129,
0.06794515997171402,
-0.11667289584875107,
-0.05991870537400246,
0.017756864428520203,
-0.057358987629413605,
-0.007562451530247927,
-0.12756457924842834,
-0.008996495977044106,
-0.028180355206131935,
-0.0811019241809845,
0.0023055579513311386,
-0.04550797492265701,
0.007705638650804758,
0.13445121049880981,
0.005610671825706959,
-0.013476580381393433,
-0.01609201543033123,
0.20203502476215363,
0.030189406126737595,
0.04586773365736008,
0.12526848912239075,
-0.06563802808523178,
0.058969780802726746,
0.026148155331611633,
0.03945505991578102,
-0.04687731713056564,
-0.002215026877820492,
-0.025318562984466553,
-0.11568525433540344,
-0.2176482379436493,
-0.06411528587341309,
0.00794246792793274,
0.016724511981010437,
0.019117211923003197,
0.01676534302532673,
0.027754221111536026,
0.05577472597360611,
-0.03372354805469513,
0.03619538992643356,
0.03710121288895607,
0.05573730170726776,
0.07168687880039215,
-0.06151723116636276,
0.09421837329864502,
-0.06881332397460938,
0.02393059991300106,
0.10623104125261307,
-0.06668853759765625,
0.15560051798820496,
0.04433521255850792,
0.05315718799829483,
0.09231479465961456,
-0.0032763052731752396,
0.05823058634996414,
0.10665401816368103,
-0.06049208343029022,
0.01757051609456539,
-0.07517945021390915,
-0.05665573477745056,
-0.044502731412649155,
0.06014461815357208,
0.030590776354074478,
-0.0004816390573978424,
-0.09991200268268585,
0.03288547694683075,
-0.03872605040669441,
0.07204438745975494,
0.06515121459960938,
-0.10723716765642166,
-0.10148732364177704,
0.04246586561203003,
-0.04589540883898735,
-0.08595307171344757,
0.03829260170459747,
0.08114518225193024,
-0.14104780554771423,
0.06164255365729332,
0.015720147639513016,
0.07255436480045319,
-0.0960703045129776,
0.012916898354887962,
-0.06678963452577591,
0.014335567131638527,
0.004130640532821417,
0.09444106370210648,
-0.13314776122570038,
0.11011195182800293,
0.027171224355697632,
0.03955516591668129,
-0.08829405903816223,
0.019168373197317123,
-0.011915826238691807,
0.07536830008029938,
0.12222784757614136,
0.04293527826666832,
-0.05997626483440399,
-0.0210016667842865,
-0.06770405918359756,
0.03473038971424103,
0.07253843545913696,
-0.03694438189268112,
0.03962308168411255,
0.0023471200838685036,
0.016023259609937668,
-0.006857654545456171,
0.019054125994443893,
-0.13101494312286377,
-0.14308708906173706,
0.0715007483959198,
-0.06990103423595428,
-0.08933570235967636,
-0.03662561625242233,
-0.06852567940950394,
-0.09495764970779419,
0.15250852704048157,
-0.07503931224346161,
-0.10991518944501877,
-0.10409273207187653,
0.008100946433842182,
0.0739583969116211,
-0.06678664684295654,
0.05131808668375015,
-0.051392100751399994,
0.09152805805206299,
-0.04055074602365494,
-0.11190567910671234,
0.01864585652947426,
-0.08941154181957245,
-0.11119197309017181,
-0.03063623607158661,
0.09227486699819565,
0.14521412551403046,
0.05372917652130127,
0.02091342955827713,
0.016398826614022255,
0.0033132825046777725,
-0.12923815846443176,
0.005410835146903992,
0.137796550989151,
0.012456953525543213,
0.09538521617650986,
-0.0618605837225914,
-0.02638201043009758,
-0.014246759936213493,
0.005070516839623451,
0.13582777976989746,
0.15803390741348267,
-0.058856625109910965,
0.14871135354042053,
0.22843755781650543,
-0.09822161495685577,
-0.19247615337371826,
-0.07533949613571167,
0.0031375279650092125,
0.044750701636075974,
0.05038521811366081,
-0.19985195994377136,
0.10155971348285675,
0.05717325583100319,
-0.012265690602362156,
-0.03799609839916229,
-0.19193807244300842,
-0.10145661234855652,
0.10211484134197235,
0.06548899412155151,
0.19386067986488342,
-0.06613081693649292,
-0.04372983053326607,
-0.03745365887880325,
-0.061043448746204376,
0.08776602894067764,
-0.009147167205810547,
0.08261959254741669,
0.016933128237724304,
0.018384665250778198,
-0.002466849982738495,
-0.010748812928795815,
0.10800470411777496,
0.04725781828165054,
0.018286168575286865,
-0.0750834047794342,
-0.04871641471982002,
0.11062724888324738,
-0.03183289244771004,
0.12149648368358612,
0.034737154841423035,
0.06070481240749359,
-0.07551790773868561,
-0.059984542429447174,
-0.08426516503095627,
0.008946008048951626,
-0.040876202285289764,
-0.05122947692871094,
-0.053401537239551544,
0.037852153182029724,
0.025541409850120544,
0.011514456942677498,
-0.009293727576732635,
-0.05458269268274307,
0.017728088423609734,
0.061521437019109726,
0.15891604125499725,
-0.01029951125383377,
-0.06360335648059845,
-0.07163411378860474,
-0.05823115259408951,
0.0444701723754406,
-0.10480029881000519,
0.028981409966945648,
0.023856688290834427,
-0.0017445993144065142,
0.11250516772270203,
0.034485429525375366,
-0.1140204519033432,
0.012842555530369282,
0.006336009129881859,
-0.09969158470630646,
-0.15452295541763306,
-0.0181131511926651,
0.05441122502088547,
-0.0643831193447113,
0.04254692792892456,
0.15555250644683838,
-0.027205556631088257,
-0.034543998539447784,
-0.05400297790765762,
0.03252842277288437,
-0.031877174973487854,
0.03834881633520126,
0.08078435063362122,
0.01768607459962368,
-0.08237691223621368,
0.05474640429019928,
0.0454079732298851,
-0.019515814259648323,
0.06326711922883987,
0.025379929691553116,
-0.06890537589788437,
-0.08453314006328583,
-0.06902983039617538,
0.1057143285870552,
-0.04153328388929367,
-0.06206083297729492,
0.045997679233551025,
-0.106190524995327,
0.06708580255508423,
0.10213626176118851,
0.03644164279103279,
0.04539306089282036,
-0.0828234925866127,
0.006970670074224472,
-0.03579976409673691,
0.033688291907310486,
-0.048284441232681274,
-0.030195731669664383,
-0.04457738995552063,
0.02593810111284256,
0.05924981087446213,
0.09564786404371262,
-0.03531838580965996,
-0.07770056277513504,
-0.08616991341114044,
-0.013439860194921494,
-0.09940358251333237,
-0.010217822156846523,
-0.07422920316457748,
0.0017519672401249409,
0.0041829803958535194,
-0.025433752685785294,
0.02687723934650421,
0.03108511120080948,
-0.05071985721588135,
-0.00902938935905695,
-0.029932662844657898,
0.05631176382303238,
-0.07225074619054794,
0.015143590047955513,
0.017014920711517334,
-0.01779058203101158,
0.09002348780632019,
0.04472311586141586,
-0.030958591029047966,
0.05371502786874771,
-0.04435543343424797,
0.03523671627044678,
-0.0450766459107399,
0.006684375461190939,
-0.019237808883190155,
-0.1106981635093689,
-0.021019039675593376,
0.008839843794703484,
-0.02411678247153759,
0.015160455368459225,
0.06377453356981277,
-0.05276929587125778,
0.08302482217550278,
0.06309355795383453,
-0.055693332105875015,
-0.057665206491947174,
0.03763436898589134,
0.0024383366107940674,
0.014766602776944637,
0.07508324086666107,
0.0036032083444297314,
0.05822993814945221,
-0.08760219067335129,
-0.001292794942855835,
0.003968692384660244,
-0.01815766841173172,
-0.01836886815726757,
-0.0699869841337204,
-0.0022890493273735046,
0.01020593848079443,
0.18233495950698853,
-0.006987069733440876,
-0.019126348197460175,
0.00534985214471817,
0.06878313422203064,
0.03755149990320206,
0.003641282208263874,
0.0837191641330719,
-0.0177499707788229,
-0.022389687597751617,
-0.015812499448657036,
0.02859477326273918,
-0.014323722571134567,
0.010904699563980103,
0.1282053291797638,
0.048518333584070206,
0.11437826603651047,
0.0725211501121521,
0.06226373836398125,
0.05605439469218254,
-0.10495489090681076,
-0.07259294390678406,
0.036402903497219086,
0.052148155868053436,
-0.035062551498413086,
0.024870704859495163,
0.06339682638645172,
-0.09411516785621643,
0.08513927459716797,
0.044868260622024536,
-0.05864996463060379,
-0.12785233557224274,
-0.213514506816864,
-0.03974895551800728,
-0.010955093428492546,
-0.019482048228383064,
-0.11144768446683884,
0.023800738155841827,
0.09617234766483307,
0.040366411209106445,
-0.021345067769289017,
0.0693531408905983,
-0.14458076655864716,
-0.03483567386865616,
0.035382162779569626,
-0.012789111584424973,
0.02682969719171524,
0.04769153147935867,
0.017544252797961235,
0.01934804953634739,
0.0732746422290802,
0.05104600265622139,
0.03807621821761131,
0.0572967529296875,
0.029476823285222054,
-0.05167002975940704,
-0.08803418278694153,
-0.04504147544503212,
0.0030854023061692715,
0.057657621800899506,
0.12483088672161102,
0.007654100656509399,
-0.06846332550048828,
0.001012787688523531,
0.05672292783856392,
-0.019050423055887222,
-0.07989197969436646,
-0.11570101976394653,
0.21807681024074554,
-0.0219290591776371,
0.012360149063169956,
-0.0012425249442458153,
-0.03812753036618233,
0.022287733852863312,
0.20562103390693665,
0.26334428787231445,
-0.016720201820135117,
-0.01564868912100792,
0.015301940962672234,
0.0008615516126155853,
0.037115201354026794,
0.12290626764297485,
-0.02805851399898529,
0.2229333072900772,
-0.04585333913564682,
0.06264916807413101,
-0.05383043736219406,
-0.016855793073773384,
-0.06912437826395035,
0.05563485622406006,
0.003696165978908539,
-0.016951188445091248,
-0.012996206060051918,
0.07532624900341034,
-0.03415388986468315,
-0.12248822301626205,
-0.033331166952848434,
0.0027616180013865232,
-0.057155806571245193,
0.010458203963935375,
0.0035606566816568375,
0.036498114466667175,
0.07923054695129395,
-0.013424769975244999,
-0.0008628880605101585,
0.1224694475531578,
0.012864239513874054,
-0.1136905774474144,
-0.039158061146736145,
0.1287086009979248,
0.02920478582382202,
0.13385888934135437,
0.049242619425058365,
0.08832418918609619,
0.07183811813592911,
0.014183586463332176,
-0.0659109503030777,
0.03553805500268936,
-0.02415645122528076,
0.022175932303071022,
0.000017335638403892517,
0.10610820353031158,
-0.008592945523560047,
0.07413781434297562,
0.016452057287096977,
-0.08456847071647644,
0.05801019072532654,
0.008779950439929962,
-0.07290992140769958,
-0.03614259883761406,
0.1091088056564331,
-0.10999304056167603,
0.1433717906475067,
0.14528417587280273,
-0.0031176889315247536,
-0.07123532891273499,
-0.01155296340584755,
0.020603958517313004,
-0.009608741849660873,
0.045762501657009125,
-0.03116057813167572,
-0.13157998025417328,
-0.0012021074071526527,
-0.10022583603858948,
0.04691220447421074,
-0.24617081880569458,
-0.06641700118780136,
0.016961487010121346,
-0.0031775194220244884,
-0.028285454958677292,
0.029353920370340347,
0.061237405985593796,
-0.0057419706135988235,
-0.031847499310970306,
0.034668054431676865,
-0.027978986501693726,
0.0366094633936882,
-0.07448138296604156,
-0.06672360002994537
] |
null | null | transformers | # MultiBERTs Seed 14 (uncased)
Seed 14 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they
were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of
publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, they
were pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
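As a small, non-authoritative sketch of the MLM objective described above (not part of the original card), the snippet below predicts the token behind `[MASK]`; the short model identifier mirrors the usage example in the "How to use" section and may need to be replaced by the full repository name.
```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

# Sketch of the MLM objective: predict the token hidden behind [MASK].
# The identifier follows the usage example in this card; adjust it to the actual repository name if needed.
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-14')
model = BertForMaskedLM.from_pretrained('multiberts-seed-14')

inputs = tokenizer("The capital of France is [MASK].", return_tensors='pt')
with torch.no_grad():
    logits = model(**inputs).logits

# Locate the [MASK] position and take the highest-scoring vocabulary entry for it.
mask_positions = (inputs['input_ids'] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_ids = logits[0, mask_positions].argmax(dim=-1)
print(tokenizer.decode(predicted_ids))
```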
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
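As a hedged illustration of such fine-tuning (not part of the original card), the sketch below only initialises a sequence-classification head on top of this checkpoint; the label count, dataset and training loop are placeholders you would supply.
```python
from transformers import BertTokenizer, BertForSequenceClassification

# Hypothetical fine-tuning starting point: a randomly initialised classification head on top of
# this checkpoint. Training data, optimiser and training loop are intentionally left out.
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-14')
model = BertForSequenceClassification.from_pretrained('multiberts-seed-14', num_labels=2)

inputs = tokenizer("Replace me by a sentence to classify.", return_tensors='pt')
outputs = model(**inputs)
print(outputs.logits.shape)  # (1, num_labels); only meaningful after fine-tuning
```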
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-14')
model = BertModel.from_pretrained("multiberts-seed-14")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
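For convenience, a sketch adapted from that snippet is shown below; it assumes the checkpoint can be loaded through the `fill-mask` pipeline and reuses the short identifier from the usage example above.
```python
from transformers import pipeline

# Adapted from the bert-base-uncased bias snippet; the outputs depend on this particular checkpoint.
unmasker = pipeline('fill-mask', model='multiberts-seed-14')
print(unmasker("The man worked as a [MASK]."))
print(unmasker("The woman worked as a [MASK]."))
```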
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
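If you want to inspect comparable corpora, a rough sketch with the `datasets` library is given below; the Hub versions are not necessarily the exact snapshots used for MultiBERTs pretraining, and these loaders may behave differently across `datasets` versions.
```python
from datasets import load_dataset

# Rough sketch only: public Hub versions of the two corpora, not the exact MultiBERTs snapshots.
bookcorpus = load_dataset("bookcorpus", split="train")
wikipedia = load_dataset("wikipedia", "20220301.en", split="train")
print(bookcorpus[0])
print(wikipedia[0]["text"][:200])
```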
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
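A small sketch of what this preprocessing looks like with the tokenizer from the usage example (the exact wordpieces depend on the checkpoint's vocabulary):
```python
from transformers import BertTokenizer

# Illustrates lowercasing, WordPiece tokenization and the [CLS] ... [SEP] ... [SEP] layout.
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-14')
encoded = tokenizer("Sentence A", "Sentence B")
print(tokenizer.convert_ids_to_tokens(encoded['input_ids']))
# e.g. ['[CLS]', 'sentence', 'a', '[SEP]', 'sentence', 'b', '[SEP]']
```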
### Pretraining
The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
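As a rough, non-authoritative sketch, that schedule could be mirrored in PyTorch as follows; this is not the original TPU training code, and `AdamW` stands in for Adam with weight decay.
```python
import torch
from transformers import BertConfig, BertForPreTraining, get_linear_schedule_with_warmup

# Placeholder model: a randomly initialised BERT-base, not the actual MultiBERTs training setup.
model = BertForPreTraining(BertConfig())

# Learning rate 1e-4, betas (0.9, 0.999), weight decay 0.01, as described above.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01)

# 10,000 warmup steps, then linear decay over the two million training steps.
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=2_000_000
)
```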
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-14 | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 14 (uncased)
Seed 14 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 14 (uncased)\n\nSeed 14 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\nThe model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 14 (uncased)\n\nSeed 14 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\nThe model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
63,
111,
335,
134,
25,
95,
48,
3,
222,
110,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 14 (uncased)\n\nSeed 14 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.06766946613788605,
0.02731514163315296,
-0.0021855011582374573,
0.09478341788053513,
0.07619083672761917,
0.029569245874881744,
0.15774047374725342,
0.029730524867773056,
-0.035119183361530304,
0.021461980417370796,
0.10329186916351318,
0.03639039769768715,
0.0325341559946537,
0.03190042823553085,
0.06790007650852203,
-0.26353588700294495,
0.07391156256198883,
-0.058196790516376495,
0.03716546297073364,
0.05950615182518959,
0.10431577265262604,
-0.07016551494598389,
0.07773608714342117,
0.04549496993422508,
-0.07352327555418015,
-0.026990439742803574,
-0.0055951327085494995,
-0.03396652638912201,
0.07491572946310043,
0.09500536322593689,
0.06390117853879929,
-0.011668150313198566,
0.06223881617188454,
-0.08786001801490784,
0.01960654929280281,
0.023579448461532593,
-0.010075008496642113,
0.03788205608725548,
0.026582423597574234,
-0.005047016777098179,
0.11479678750038147,
0.02352176606655121,
0.08739182353019714,
0.03921633958816528,
-0.0880572572350502,
-0.10634486377239227,
-0.07033896446228027,
0.0915587991476059,
0.030908208340406418,
0.043716296553611755,
-0.007172415964305401,
0.07033561170101166,
-0.010298105888068676,
0.05907463654875755,
0.07886949181556702,
-0.2406746745109558,
-0.02351308986544609,
0.04919841140508652,
0.05111972242593765,
0.04045398533344269,
0.010472264140844345,
0.027872221544384956,
0.006171759217977524,
0.05033058673143387,
0.0036492422223091125,
-0.02950258180499077,
0.1383475959300995,
-0.05182906985282898,
-0.13793274760246277,
-0.028609057888388634,
0.15531456470489502,
0.026188500225543976,
-0.11198054999113083,
-0.11261247098445892,
-0.0008798008784651756,
0.1729404330253601,
-0.004340111277997494,
-0.008414332754909992,
-0.010256428271532059,
-0.00571511872112751,
0.027224041521549225,
-0.12217274308204651,
-0.0831153392791748,
-0.021864144131541252,
-0.06431017071008682,
0.15118561685085297,
0.04829484224319458,
0.06908814609050751,
-0.05943244695663452,
0.03952404484152794,
-0.14977923035621643,
-0.03463399410247803,
-0.04593084007501602,
-0.09882956743240356,
0.012553919106721878,
0.02704455703496933,
-0.04427703842520714,
-0.11146750301122665,
-0.03998658061027527,
0.08030413836240768,
0.03363752365112305,
0.037372175604104996,
-0.004650081507861614,
0.02801883965730667,
0.10422374308109283,
0.10283657908439636,
-0.061565808951854706,
0.07586828619241714,
0.017323751002550125,
-0.02239442989230156,
0.04005982726812363,
-0.05627792328596115,
-0.12366890907287598,
0.07119592279195786,
-0.0366915799677372,
0.016818638890981674,
0.021978072822093964,
0.04282035678625107,
-0.009383490309119225,
-0.07618119567632675,
0.14548282325267792,
-0.09708424657583237,
0.0008015679195523262,
-0.0032342036720365286,
0.01589391753077507,
0.078890360891819,
0.02446996420621872,
0.0023539988324046135,
-0.05848828703165054,
-0.02792181819677353,
-0.06448373198509216,
-0.030241776257753372,
-0.05889216810464859,
-0.1364240199327469,
0.0042903730645775795,
-0.01987258344888687,
-0.014492517337203026,
-0.10150716453790665,
-0.18148578703403473,
-0.015117350965738297,
0.070323646068573,
-0.014004070311784744,
0.012684079818427563,
-0.0002816873602569103,
0.010439356788992882,
-0.004437346011400223,
0.033933788537979126,
-0.038542646914720535,
0.009861835278570652,
-0.013427039608359337,
-0.06541910022497177,
0.04091043025255203,
-0.12037913501262665,
0.0404842309653759,
-0.05272625386714935,
0.012494264170527458,
-0.19804775714874268,
0.10500617325305939,
-0.03069423884153366,
-0.04033099114894867,
-0.04524469003081322,
-0.06259612739086151,
0.01811150833964348,
0.04743724316358566,
-0.016146710142493248,
0.10077622532844543,
-0.12207567691802979,
-0.05314979702234268,
0.15822894871234894,
-0.15758515894412994,
0.01797865331172943,
0.10705272853374481,
-0.0689762532711029,
0.04168753698468208,
0.1446746587753296,
0.07791472971439362,
0.07611735910177231,
-0.04324847459793091,
0.017833363264799118,
0.061543770134449005,
-0.046532366424798965,
0.08096633851528168,
0.10875580459833145,
-0.010873902589082718,
-0.1224522516131401,
0.029418788850307465,
-0.062211774289608,
-0.03419434279203415,
-0.02203977108001709,
-0.024299126118421555,
0.01714053750038147,
-0.050978899002075195,
0.05955641344189644,
-0.007441886700689793,
0.005787069909274578,
-0.02414218708872795,
-0.0751892477273941,
0.08294621109962463,
0.0774180144071579,
-0.08612719923257828,
0.01608937233686447,
-0.09276246279478073,
0.031396206468343735,
-0.061286766082048416,
-0.004405166022479534,
-0.14396435022354126,
-0.04711022228002548,
0.02895243838429451,
-0.08123695850372314,
0.09818161278963089,
0.11848630011081696,
0.011102736927568913,
0.1146407201886177,
-0.047716379165649414,
0.028788872063159943,
-0.011576836928725243,
-0.008009640499949455,
-0.0431646928191185,
-0.14280225336551666,
-0.06987109780311584,
-0.06472884863615036,
0.08954387158155441,
-0.06394489109516144,
0.02301800064742565,
-0.0827401876449585,
-0.040710531175136566,
-0.02669406309723854,
-0.04269111528992653,
0.005972941406071186,
0.006092781201004982,
-0.013195378705859184,
-0.029223749414086342,
0.04057086259126663,
0.02462911047041416,
-0.08886304497718811,
0.08782953023910522,
-0.12506315112113953,
-0.05637478828430176,
0.06876380741596222,
-0.008199108764529228,
-0.041702091693878174,
0.09269075840711594,
0.009645702317357063,
-0.011722119525074959,
-0.05527545511722565,
-0.050762444734573364,
0.21893352270126343,
-0.022078894078731537,
0.08345727622509003,
-0.11268948763608932,
0.002722969278693199,
0.0367102175951004,
-0.05709508806467056,
-0.056488003581762314,
0.07784229516983032,
0.03670097887516022,
-0.21480822563171387,
0.02449956163764,
0.07168415188789368,
0.06210062652826309,
0.14245577156543732,
0.023240849375724792,
-0.02982471138238907,
-0.059385333210229874,
-0.01703040488064289,
-0.012957673519849777,
0.05725977569818497,
-0.04473709315061569,
0.0036559049040079117,
0.05038342624902725,
0.052705422043800354,
0.017748024314641953,
-0.06927566230297089,
0.02277461253106594,
0.051683321595191956,
-0.015004405751824379,
-0.0726122260093689,
-0.05378188565373421,
-0.037412166595458984,
0.07355780899524689,
0.041394881904125214,
0.048187315464019775,
0.05301379784941673,
-0.020929770544171333,
-0.13634231686592102,
0.16784916818141937,
-0.1367616206407547,
-0.2239953875541687,
-0.1317204087972641,
-0.08487944304943085,
-0.07388140261173248,
0.03940589725971222,
0.03780309110879898,
-0.037555187940597534,
-0.049302853643894196,
-0.1017952710390091,
0.06672024726867676,
-0.11491113156080246,
-0.0593409463763237,
0.014052487909793854,
-0.05500585585832596,
-0.007403112947940826,
-0.1271076798439026,
-0.008287111297249794,
-0.027314724400639534,
-0.07599375396966934,
0.0018204925581812859,
-0.045089974999427795,
0.009628859348595142,
0.13465985655784607,
0.006469959393143654,
-0.012986892834305763,
-0.015160482376813889,
0.20110967755317688,
0.02768871933221817,
0.04645247384905815,
0.12807434797286987,
-0.06614327430725098,
0.05811594799160957,
0.02274625562131405,
0.04005030542612076,
-0.04730333760380745,
-0.0032204333692789078,
-0.02524426020681858,
-0.11447940766811371,
-0.2156418263912201,
-0.06590994447469711,
0.006104875821620226,
0.012239237315952778,
0.0199905838817358,
0.015676748007535934,
0.024374302476644516,
0.05233842879533768,
-0.033790405839681625,
0.03695805370807648,
0.03704727441072464,
0.05488647148013115,
0.07413426041603088,
-0.06184924393892288,
0.09518586099147797,
-0.0709371566772461,
0.02288520336151123,
0.1076565682888031,
-0.07140183448791504,
0.1572062224149704,
0.04300173372030258,
0.053847961127758026,
0.09234704077243805,
-0.0031164344400167465,
0.05902310088276863,
0.10593241453170776,
-0.06307390332221985,
0.01845410093665123,
-0.07471823692321777,
-0.057673998177051544,
-0.04293661564588547,
0.06252777576446533,
0.03264351934194565,
-0.003834884613752365,
-0.09967564046382904,
0.030646199360489845,
-0.038403019309043884,
0.07354985922574997,
0.06768912076950073,
-0.10779494047164917,
-0.10227653384208679,
0.04316078871488571,
-0.043404631316661835,
-0.08849561959505081,
0.03507654368877411,
0.08509797602891922,
-0.1399349570274353,
0.06309448182582855,
0.01731414906680584,
0.07292210310697556,
-0.09520888328552246,
0.01286297757178545,
-0.06589796394109726,
0.015989316627383232,
0.003743501380085945,
0.09399658441543579,
-0.13342297077178955,
0.10799651592969894,
0.02880691923201084,
0.042528897523880005,
-0.08893612772226334,
0.017872696742415428,
-0.010509268380701542,
0.07868829369544983,
0.11953351646661758,
0.0421220138669014,
-0.061160650104284286,
-0.019840937107801437,
-0.06388434767723083,
0.034818727523088455,
0.07396607846021652,
-0.0351717546582222,
0.0397912859916687,
0.0020478907972574234,
0.01386282779276371,
-0.006473688408732414,
0.014653611928224564,
-0.13119173049926758,
-0.14489296078681946,
0.0701291561126709,
-0.06865350902080536,
-0.08403056859970093,
-0.03615190461277962,
-0.06928471475839615,
-0.09141957014799118,
0.15159191191196442,
-0.07299637794494629,
-0.10934647172689438,
-0.10513818264007568,
0.006983393803238869,
0.07317645847797394,
-0.0660683885216713,
0.05208584666252136,
-0.04937973618507385,
0.09318243712186813,
-0.040210966020822525,
-0.11086004972457886,
0.017496874555945396,
-0.08916182816028595,
-0.11018897593021393,
-0.031223611906170845,
0.09241682291030884,
0.14547424018383026,
0.053133539855480194,
0.023348156362771988,
0.016142383217811584,
0.0008548982441425323,
-0.12861648201942444,
0.004640500992536545,
0.1332186758518219,
0.01660417765378952,
0.09791790693998337,
-0.06542991101741791,
-0.026450632140040398,
-0.01169360987842083,
0.004722783342003822,
0.13379095494747162,
0.15828141570091248,
-0.05964745581150055,
0.1476401686668396,
0.23165401816368103,
-0.09919722378253937,
-0.19339948892593384,
-0.07515265047550201,
0.0029361844062805176,
0.046092838048934937,
0.04648027569055557,
-0.20203262567520142,
0.10303087532520294,
0.05312700197100639,
-0.012728792615234852,
-0.032387055456638336,
-0.18419797718524933,
-0.1020827367901802,
0.10398154705762863,
0.06201335787773132,
0.19287478923797607,
-0.06734521687030792,
-0.04249885678291321,
-0.04288552328944206,
-0.057880252599716187,
0.09149609506130219,
-0.01561470702290535,
0.08193358778953552,
0.017437350004911423,
0.01622229442000389,
-0.0029785502701997757,
-0.008873693645000458,
0.10994504392147064,
0.04586626589298248,
0.018655892461538315,
-0.07426917552947998,
-0.042403172701597214,
0.11175309121608734,
-0.030515192076563835,
0.12099982053041458,
0.03352910652756691,
0.05861639603972435,
-0.07796065509319305,
-0.060492273420095444,
-0.08296753466129303,
0.011799084022641182,
-0.04042521119117737,
-0.05114543437957764,
-0.052784040570259094,
0.03852245956659317,
0.022129857912659645,
0.011688066646456718,
-0.011172803118824959,
-0.055610354989767075,
0.0168718583881855,
0.060669008642435074,
0.15946181118488312,
-0.011506874114274979,
-0.06411807239055634,
-0.0693092793226242,
-0.05850083753466606,
0.046104203909635544,
-0.10668445378541946,
0.029956866055727005,
0.02388724312186241,
-0.0012280847877264023,
0.11444895714521408,
0.0332658514380455,
-0.11416211724281311,
0.011978331953287125,
0.007770804688334465,
-0.10080818831920624,
-0.15131017565727234,
-0.017620744183659554,
0.04645773023366928,
-0.06109645217657089,
0.038722164928913116,
0.15637806057929993,
-0.02592221274971962,
-0.03338773921132088,
-0.05422316491603851,
0.030935566872358322,
-0.033103279769420624,
0.03807048127055168,
0.0827135294675827,
0.01768505945801735,
-0.0816802978515625,
0.05275781452655792,
0.046749845147132874,
-0.020099161192774773,
0.06375347077846527,
0.0242153350263834,
-0.07035170495510101,
-0.08551361411809921,
-0.07019324600696564,
0.11016947776079178,
-0.03909768909215927,
-0.06679566204547882,
0.04703543335199356,
-0.1088150292634964,
0.06653556227684021,
0.09906776249408722,
0.03712255880236626,
0.04565567523241043,
-0.08375034481287003,
0.006316004320979118,
-0.03692083805799484,
0.03507871553301811,
-0.046005673706531525,
-0.032720696181058884,
-0.04406069964170456,
0.0291410144418478,
0.060424163937568665,
0.09709569811820984,
-0.03644650802016258,
-0.07752025872468948,
-0.08820058405399323,
-0.014072306454181671,
-0.10712552815675735,
-0.009779246523976326,
-0.07095660269260406,
0.001844536978751421,
0.0045545101165771484,
-0.027265876531600952,
0.027255631983280182,
0.030832966789603233,
-0.05090298503637314,
-0.008336999453604221,
-0.025980953127145767,
0.058071911334991455,
-0.07299445569515228,
0.013640252873301506,
0.017763571813702583,
-0.019526617601513863,
0.0902002826333046,
0.04620978236198425,
-0.03042544238269329,
0.05454457551240921,
-0.03737683966755867,
0.03339257463812828,
-0.04596049338579178,
0.008232238702476025,
-0.017611853778362274,
-0.11184388399124146,
-0.019806386902928352,
0.008860502392053604,
-0.022567737847566605,
0.01577582210302353,
0.06607969105243683,
-0.05055700242519379,
0.0839255303144455,
0.06264376640319824,
-0.05452407896518707,
-0.05708509683609009,
0.039143938571214676,
0.002080075442790985,
0.014498335309326649,
0.0786600336432457,
0.003753883531317115,
0.05643680691719055,
-0.0873836874961853,
-0.001076564658433199,
0.0028994670137763023,
-0.019170675426721573,
-0.01186356134712696,
-0.07126124948263168,
-0.0015296591445803642,
0.008707555942237377,
0.1792505979537964,
-0.006764346733689308,
-0.017003536224365234,
0.00532746035605669,
0.06787678599357605,
0.030935857445001602,
0.00486611295491457,
0.08230359852313995,
-0.016546623781323433,
-0.020363163203001022,
-0.018134048208594322,
0.028403528034687042,
-0.012624973431229591,
0.019515864551067352,
0.12704694271087646,
0.047322530299425125,
0.11568060517311096,
0.07239916920661926,
0.06201119348406792,
0.05330893024802208,
-0.10806730389595032,
-0.0694211944937706,
0.03641718626022339,
0.05317719280719757,
-0.03551262244582176,
0.026361729949712753,
0.06340143829584122,
-0.09394796192646027,
0.0844784751534462,
0.04417870566248894,
-0.058827586472034454,
-0.1284792721271515,
-0.2110665738582611,
-0.03981013223528862,
-0.00923197902739048,
-0.019486986100673676,
-0.11010229587554932,
0.02504676580429077,
0.0952359065413475,
0.04047426953911781,
-0.0207569170743227,
0.06699022650718689,
-0.14884594082832336,
-0.03597768023610115,
0.036139145493507385,
-0.01265285536646843,
0.025545788928866386,
0.05014768987894058,
0.01845659501850605,
0.017234299331903458,
0.07157443463802338,
0.04946060851216316,
0.03901378810405731,
0.05577249079942703,
0.030035510659217834,
-0.05117397755384445,
-0.0867978036403656,
-0.04526633769273758,
0.005285541992634535,
0.05958951264619827,
0.12607939541339874,
0.00823173113167286,
-0.06907659769058228,
0.0007175779901444912,
0.05920973792672157,
-0.01839451491832733,
-0.0831751674413681,
-0.11548297107219696,
0.2210521697998047,
-0.022981133311986923,
0.012067683041095734,
-0.00168510340154171,
-0.03665286675095558,
0.021775098517537117,
0.20751722157001495,
0.26473942399024963,
-0.019172001630067825,
-0.01647035777568817,
0.014791121706366539,
0.0005954978987574577,
0.03670255094766617,
0.12298695743083954,
-0.026063021272420883,
0.22661170363426208,
-0.04521241784095764,
0.06299334764480591,
-0.056498970836400986,
-0.015632137656211853,
-0.07214058190584183,
0.060557156801223755,
0.0024920578580349684,
-0.017716199159622192,
-0.010416852310299873,
0.07421593368053436,
-0.0338614359498024,
-0.13059218227863312,
-0.03103606030344963,
0.0036436968948692083,
-0.05853278189897537,
0.01047335285693407,
0.002334199845790863,
0.035175394266843796,
0.07827845960855484,
-0.01569151133298874,
-0.003541228361427784,
0.12702102959156036,
0.011279155500233173,
-0.11088810116052628,
-0.03945548087358475,
0.1298334300518036,
0.01840192824602127,
0.13123148679733276,
0.04775505140423775,
0.09001757204532623,
0.07257198542356491,
0.015386895276606083,
-0.06431170552968979,
0.03343425318598747,
-0.02548624388873577,
0.02006126195192337,
0.001947704702615738,
0.10658165067434311,
-0.010183079168200493,
0.07231469452381134,
0.01709069311618805,
-0.08430492132902145,
0.05750080198049545,
0.007590595632791519,
-0.07685711979866028,
-0.03374028578400612,
0.10720410943031311,
-0.11072144657373428,
0.14309580624103546,
0.14419864118099213,
-0.00262909266166389,
-0.07073894888162613,
-0.011710069142282009,
0.02244889736175537,
-0.007352170068770647,
0.04419001564383507,
-0.03134181350469589,
-0.12999621033668518,
-0.0013597700744867325,
-0.09723836183547974,
0.046466972678899765,
-0.24518901109695435,
-0.06632101535797119,
0.01693587377667427,
-0.0022931010462343693,
-0.026999425143003464,
0.02931380271911621,
0.06293109804391861,
-0.004629327915608883,
-0.032916124910116196,
0.028419790789484978,
-0.027909299358725548,
0.03512569144368172,
-0.07444282621145248,
-0.06763224303722382
] |
null | null | transformers | # MultiBERTs Seed 15 (uncased)
Seed 15 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT-2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-15')
model = BertModel.from_pretrained("multiberts-seed-15")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
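Continuing the snippet above, the token-level and pooled features can be read directly off the returned `output` (these are the standard `BertModel` outputs; the shapes shown assume a single input sentence):

```python
# One 768-dimensional vector per (sub)word token.
token_features = output.last_hidden_state   # shape: (1, sequence_length, 768)

# Pooled representation of the [CLS] token, often used as a sentence-level feature.
sentence_features = output.pooler_output    # shape: (1, 768)
```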
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
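That snippet is the usual fill-mask probe; pointed at this checkpoint it would look roughly as follows (the prompts are illustrative only, and this assumes the masked-language-modeling head stored in this repository loads through the `fill-mask` pipeline):

```python
from transformers import pipeline

unmasker = pipeline('fill-mask', model='multiberts-seed-15')
# Compare the completions suggested for two otherwise identical prompts.
print(unmasker("The man worked as a [MASK]."))
print(unmasker("The woman worked as a [MASK]."))
```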
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
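As a small illustration, passing a sentence pair to the tokenizer reproduces this `[CLS] A [SEP] B [SEP]` layout (the example sentences are arbitrary):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-15')
pair = tokenizer("The cat sat on the mat.", "It quickly fell asleep.")
print(tokenizer.decode(pair["input_ids"]))
# -> [CLS] the cat sat on the mat. [SEP] it quickly fell asleep. [SEP]
```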
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a code sketch is given after the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the remaining 10% of cases, the masked tokens are left as is.
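A minimal sketch of this 80/10/10 rule over token IDs (illustrative only; it is not the original pretraining code, and `mask_id`/`vocab_size` are placeholders for the tokenizer's actual values):

```python
import random

def mask_tokens(token_ids, mask_id, vocab_size, mlm_prob=0.15):
    """Toy whole-token masking following the 15% / 80-10-10 rule described above."""
    inputs, labels = list(token_ids), [-100] * len(token_ids)  # -100 marks positions that are not predicted
    for i, tok in enumerate(token_ids):
        if random.random() < mlm_prob:
            labels[i] = tok                                # the model must predict the original token
            r = random.random()
            if r < 0.8:
                inputs[i] = mask_id                        # 80%: replace with [MASK]
            elif r < 0.9:
                inputs[i] = random.randrange(vocab_size)   # 10%: a random token (the real procedure excludes `tok`)
            # remaining 10%: keep the original token unchanged
    return inputs, labels
```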
### Pretraining
The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
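Expressed with PyTorch and `transformers` utilities, those hyperparameters correspond roughly to the configuration below (a sketch only, using `AdamW` for Adam with weight decay; the actual pretraining ran on TPUs and did not use this code):

```python
import torch
from transformers import BertModel, get_linear_schedule_with_warmup

model = BertModel.from_pretrained("multiberts-seed-15")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4,
                              betas=(0.9, 0.999), weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10_000,       # linear warmup over the first 10,000 steps
    num_training_steps=2_000_000,  # two million steps in total
)
```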
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-15 | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 15 (uncased)
Seed 15 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT-2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitations and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 15 (uncased)\n\nSeed 15 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\nThe model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 15 (uncased)\n\nSeed 15 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\nThe model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
63,
111,
335,
134,
25,
95,
48,
3,
222,
110,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 15 (uncased)\n\nSeed 15 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.06801767647266388,
0.02733975648880005,
-0.002209444297477603,
0.09310776740312576,
0.07715272158384323,
0.028425183147192,
0.15710321068763733,
0.029685121029615402,
-0.03321170434355736,
0.022377969697117805,
0.1042739599943161,
0.03647865355014801,
0.033235177397727966,
0.03198809549212456,
0.06617504358291626,
-0.260547935962677,
0.07511068880558014,
-0.0576411634683609,
0.03446527197957039,
0.05990825966000557,
0.104975625872612,
-0.07242971658706665,
0.0776146650314331,
0.04649311676621437,
-0.07479828596115112,
-0.02714776061475277,
-0.005483473651111126,
-0.03347540646791458,
0.07357267290353775,
0.09719152003526688,
0.06273015588521957,
-0.010663914494216442,
0.06085451692342758,
-0.08866770565509796,
0.019983958452939987,
0.023797810077667236,
-0.009427778422832489,
0.03907348960638046,
0.027318840846419334,
-0.0056680915877223015,
0.11165137588977814,
0.024721384048461914,
0.08679705113172531,
0.04053296893835068,
-0.087434783577919,
-0.09900318086147308,
-0.06848901510238647,
0.0934833362698555,
0.031099719926714897,
0.0419573076069355,
-0.006621633656322956,
0.07031432539224625,
-0.011579405516386032,
0.059393011033535004,
0.08380421996116638,
-0.23657268285751343,
-0.02344275452196598,
0.05146945267915726,
0.0485815592110157,
0.04022820293903351,
0.011457355692982674,
0.027914127334952354,
0.0067403800785541534,
0.04769430682063103,
0.005573576781898737,
-0.02846847102046013,
0.14060640335083008,
-0.052626948803663254,
-0.13869792222976685,
-0.02910303696990013,
0.1530783474445343,
0.026964344084262848,
-0.11088331788778305,
-0.11503688991069794,
-0.001546250656247139,
0.17378470301628113,
-0.005439067259430885,
-0.009337161667644978,
-0.009520197287201881,
-0.006116175092756748,
0.02882409282028675,
-0.1217413991689682,
-0.0828448161482811,
-0.023782268166542053,
-0.06368658691644669,
0.15362580120563507,
0.04857438802719116,
0.06872177124023438,
-0.060524359345436096,
0.03894634544849396,
-0.15343791246414185,
-0.03485897555947304,
-0.047317977994680405,
-0.09829787164926529,
0.013819713145494461,
0.025640839710831642,
-0.04416763409972191,
-0.11960469186306,
-0.038510024547576904,
0.07691235840320587,
0.03597232326865196,
0.0369081124663353,
-0.00390957947820425,
0.027975469827651978,
0.10613285005092621,
0.10240961611270905,
-0.06174901872873306,
0.07586996257305145,
0.01631307229399681,
-0.019993970170617104,
0.038780875504016876,
-0.05530089884996414,
-0.12169940769672394,
0.07036411762237549,
-0.03615232929587364,
0.017410999163985252,
0.022053077816963196,
0.040971189737319946,
-0.010875926353037357,
-0.07719536870718002,
0.14695584774017334,
-0.095918670296669,
-0.0011171000078320503,
-0.004585434682667255,
0.01583152264356613,
0.0792350172996521,
0.02379557117819786,
0.003042919095605612,
-0.05818124860525131,
-0.02569660171866417,
-0.06551646441221237,
-0.02955315262079239,
-0.0585169792175293,
-0.13587772846221924,
0.0030291760340332985,
-0.022747047245502472,
-0.013490446843206882,
-0.1012154147028923,
-0.18122431635856628,
-0.014003386721014977,
0.06962514668703079,
-0.014079304412007332,
0.012576723471283913,
-0.0004900181666016579,
0.011439817026257515,
-0.004164945334196091,
0.03428799286484718,
-0.03628023713827133,
0.010652001015841961,
-0.013201356865465641,
-0.06419925391674042,
0.04065530374646187,
-0.11699288338422775,
0.04243439808487892,
-0.052161805331707,
0.012226104736328125,
-0.20032449066638947,
0.10719196498394012,
-0.0315968357026577,
-0.03993551805615425,
-0.046622443944215775,
-0.061792097985744476,
0.017530366778373718,
0.04617448151111603,
-0.015755081549286842,
0.10170736908912659,
-0.12004236876964569,
-0.053852830082178116,
0.160623699426651,
-0.1578417718410492,
0.018423089757561684,
0.10613048821687698,
-0.06938951462507248,
0.03918829560279846,
0.14473532140254974,
0.0755314975976944,
0.07649475336074829,
-0.04356652498245239,
0.01844022050499916,
0.06226080283522606,
-0.04691909998655319,
0.07859832048416138,
0.10932324826717377,
-0.011215089820325375,
-0.12438296526670456,
0.029182959347963333,
-0.06681220978498459,
-0.031758517026901245,
-0.023395782336592674,
-0.02369620092213154,
0.01604521833360195,
-0.052970219403505325,
0.057291969656944275,
-0.00733808521181345,
0.006719639524817467,
-0.02410559356212616,
-0.07261348515748978,
0.08562964200973511,
0.07802748680114746,
-0.08524532616138458,
0.014558888971805573,
-0.09143311530351639,
0.03081211820244789,
-0.06487783789634705,
-0.004033094737678766,
-0.14568191766738892,
-0.04053668677806854,
0.02961302548646927,
-0.0840173065662384,
0.09775905311107635,
0.11832436919212341,
0.010980810970067978,
0.11409062147140503,
-0.04667139798402786,
0.028843455016613007,
-0.01184885948896408,
-0.00857438612729311,
-0.0450272336602211,
-0.14231592416763306,
-0.06750474870204926,
-0.06483080983161926,
0.08295422792434692,
-0.060238346457481384,
0.02294081822037697,
-0.08447779715061188,
-0.04054137319326401,
-0.025496624410152435,
-0.04433853179216385,
0.007376694120466709,
0.004466930404305458,
-0.01329808495938778,
-0.028648288920521736,
0.039711855351924896,
0.02418231964111328,
-0.0876108855009079,
0.08731984347105026,
-0.12315891683101654,
-0.05444458872079849,
0.06866420805454254,
-0.004993150010704994,
-0.04139220342040062,
0.09135152399539948,
0.010732410475611687,
-0.010296973399817944,
-0.05819740146398544,
-0.0496283583343029,
0.21596935391426086,
-0.020870275795459747,
0.08455020189285278,
-0.11470566689968109,
0.002035739365965128,
0.03813740238547325,
-0.057185471057891846,
-0.057711243629455566,
0.07839672267436981,
0.03940662369132042,
-0.2191781997680664,
0.02497648447751999,
0.07315922528505325,
0.0646178349852562,
0.14201465249061584,
0.023915698751807213,
-0.030853338539600372,
-0.05990632250905037,
-0.015538377687335014,
-0.013207999989390373,
0.05615654215216637,
-0.04790620505809784,
0.0028495388105511665,
0.04966789484024048,
0.052249930799007416,
0.017270954325795174,
-0.06883886456489563,
0.02322293445467949,
0.0539543516933918,
-0.015048954635858536,
-0.07015694677829742,
-0.05322239547967911,
-0.03854386508464813,
0.07355396449565887,
0.04084135591983795,
0.04967004805803299,
0.05082561448216438,
-0.02137567475438118,
-0.1351478099822998,
0.16800454258918762,
-0.13428407907485962,
-0.22642964124679565,
-0.13034987449645996,
-0.0834873765707016,
-0.07514756172895432,
0.038653664290905,
0.03748911991715431,
-0.03849627822637558,
-0.05117334425449371,
-0.10154999792575836,
0.07041426002979279,
-0.11514695733785629,
-0.05825003981590271,
0.016683444380760193,
-0.05414537340402603,
-0.006974825635552406,
-0.12810936570167542,
-0.008695574477314949,
-0.028031999245285988,
-0.07343796640634537,
0.0008774511516094208,
-0.04334644973278046,
0.009644703939557076,
0.13330407440662384,
0.006820104084908962,
-0.012382647022604942,
-0.014232487417757511,
0.20314693450927734,
0.029056157916784286,
0.04339108243584633,
0.12984341382980347,
-0.06484474986791611,
0.058053694665431976,
0.021349022164940834,
0.039990052580833435,
-0.047479912638664246,
-0.0025984840467572212,
-0.024405166506767273,
-0.11368069052696228,
-0.21333807706832886,
-0.06476668268442154,
0.006510309875011444,
0.010073037818074226,
0.021385259926319122,
0.014929888769984245,
0.02487098053097725,
0.053664010018110275,
-0.03448716923594475,
0.035610876977443695,
0.03619324415922165,
0.05448261275887489,
0.07034975290298462,
-0.06218820810317993,
0.09620094299316406,
-0.0714545026421547,
0.02299565076828003,
0.10762062668800354,
-0.07260213792324066,
0.15743374824523926,
0.044204823672771454,
0.05529268831014633,
0.09246848523616791,
-0.004088059067726135,
0.0594383105635643,
0.10784648358821869,
-0.06378577649593353,
0.016635218635201454,
-0.0744529739022255,
-0.05656294524669647,
-0.043349552899599075,
0.060511909425258636,
0.0341322124004364,
-0.0023532509803771973,
-0.10014849901199341,
0.034354642033576965,
-0.03766059875488281,
0.07740636169910431,
0.06503450125455856,
-0.10335945338010788,
-0.10214915871620178,
0.042632006108760834,
-0.04694148153066635,
-0.08840319514274597,
0.03489279747009277,
0.0850222110748291,
-0.13922712206840515,
0.061803657561540604,
0.016698189079761505,
0.07428406178951263,
-0.09408943355083466,
0.012409533374011517,
-0.06928297132253647,
0.01818820647895336,
0.0031976427417248487,
0.09329688549041748,
-0.1397344172000885,
0.10894399881362915,
0.02885807491838932,
0.04146571084856987,
-0.08975687623023987,
0.017900265753269196,
-0.011110913939774036,
0.07570172846317291,
0.12120048701763153,
0.041649650782346725,
-0.0547238327562809,
-0.02164139226078987,
-0.06480379402637482,
0.03235588222742081,
0.07400237023830414,
-0.03624141961336136,
0.038660187274217606,
0.0016328408382833004,
0.01354074664413929,
-0.005621560849249363,
0.0157255120575428,
-0.13312718272209167,
-0.14476823806762695,
0.06968577206134796,
-0.07011398673057556,
-0.08792044967412949,
-0.03682331740856171,
-0.06891969591379166,
-0.09319676458835602,
0.14760389924049377,
-0.07669373601675034,
-0.10937386751174927,
-0.10544929653406143,
0.010677093639969826,
0.07214120030403137,
-0.06696148216724396,
0.05274827033281326,
-0.05225956439971924,
0.09467761218547821,
-0.041258372366428375,
-0.11037816107273102,
0.016589393839240074,
-0.09083355218172073,
-0.11044374108314514,
-0.030244119465351105,
0.09151653200387955,
0.14588743448257446,
0.053093090653419495,
0.02184678055346012,
0.015374935232102871,
0.0017464645206928253,
-0.12843817472457886,
0.004014251753687859,
0.13359816372394562,
0.014643833041191101,
0.09938591718673706,
-0.06128048151731491,
-0.030102945864200592,
-0.01294098049402237,
0.003327980637550354,
0.13547757267951965,
0.15759779512882233,
-0.0591837614774704,
0.14799973368644714,
0.23068568110466003,
-0.09812887012958527,
-0.19259056448936462,
-0.07364214956760406,
0.0046167271211743355,
0.04631028324365616,
0.04667575657367706,
-0.20151686668395996,
0.10366877913475037,
0.05346614122390747,
-0.012432323768734932,
-0.036350443959236145,
-0.18718966841697693,
-0.10257309675216675,
0.10405688732862473,
0.06441257894039154,
0.19554096460342407,
-0.06565789878368378,
-0.04225609451532364,
-0.042174339294433594,
-0.06048931926488876,
0.09481550753116608,
-0.009487133473157883,
0.08166129887104034,
0.0163823701441288,
0.01172625832259655,
-0.001633642241358757,
-0.010271107777953148,
0.10874724388122559,
0.04445628076791763,
0.02037077397108078,
-0.0734962522983551,
-0.0452994778752327,
0.10901910066604614,
-0.030129889026284218,
0.12292073667049408,
0.029180306941270828,
0.05933797359466553,
-0.07303783297538757,
-0.061105355620384216,
-0.08265805244445801,
0.010399931110441685,
-0.039858438074588776,
-0.050454072654247284,
-0.05273435264825821,
0.0386592298746109,
0.02251191809773445,
0.01201761793345213,
-0.00764189288020134,
-0.055286116898059845,
0.014526491053402424,
0.06033218279480934,
0.15950794517993927,
-0.014508269727230072,
-0.06521156430244446,
-0.07142211496829987,
-0.05699213594198227,
0.046845972537994385,
-0.1087140142917633,
0.03125614672899246,
0.023229297250509262,
-0.0003989854594692588,
0.11577535420656204,
0.03178020194172859,
-0.11419936269521713,
0.011141208000481129,
0.007353989407420158,
-0.09857958555221558,
-0.15668001770973206,
-0.018353383988142014,
0.0509166419506073,
-0.06195593625307083,
0.037710078060626984,
0.15506112575531006,
-0.026496337726712227,
-0.033750150352716446,
-0.05534093827009201,
0.03115592896938324,
-0.03166767582297325,
0.03994719684123993,
0.08175107836723328,
0.01805812306702137,
-0.08280611038208008,
0.055081650614738464,
0.046484824270009995,
-0.01856733299791813,
0.06232617050409317,
0.02132498472929001,
-0.06931425631046295,
-0.08423230051994324,
-0.0680202841758728,
0.11207474023103714,
-0.04100019484758377,
-0.0642738789319992,
0.04613994061946869,
-0.10828666388988495,
0.06800872087478638,
0.10306714475154877,
0.03812727332115173,
0.04432307183742523,
-0.08378028124570847,
0.006476093083620071,
-0.035556841641664505,
0.03597958758473396,
-0.04556360840797424,
-0.03312242776155472,
-0.0443594753742218,
0.027720045298337936,
0.06094682216644287,
0.09536390006542206,
-0.03588946908712387,
-0.07717827707529068,
-0.08690186589956284,
-0.014754571951925755,
-0.10696345567703247,
-0.009988170117139816,
-0.07237505167722702,
0.0017653671093285084,
0.004266973584890366,
-0.028220050036907196,
0.028286445885896683,
0.031473346054553986,
-0.05106564611196518,
-0.007949243299663067,
-0.027562422677874565,
0.0584523007273674,
-0.07306187599897385,
0.014012714847922325,
0.017616314813494682,
-0.018323276191949844,
0.09015798568725586,
0.04492589086294174,
-0.030408797785639763,
0.05392798036336899,
-0.03903347998857498,
0.033473849296569824,
-0.04550665616989136,
0.00879489816725254,
-0.01838056370615959,
-0.11230182647705078,
-0.01826014555990696,
0.009446879848837852,
-0.022253699600696564,
0.015001055784523487,
0.06519225239753723,
-0.05139213055372238,
0.08359935134649277,
0.06178460642695427,
-0.05224957689642906,
-0.057877231389284134,
0.03925667703151703,
0.004592999815940857,
0.015222488902509212,
0.07894237339496613,
0.005467044189572334,
0.05648723244667053,
-0.08829940855503082,
0.00037457793951034546,
0.0030847308225929737,
-0.019949059933423996,
-0.01820327900350094,
-0.07040055096149445,
-0.0019356366246938705,
0.008444065228104591,
0.17887844145298004,
-0.006161019206047058,
-0.016867000609636307,
0.006322900764644146,
0.07194780558347702,
0.032347749918699265,
0.003843066282570362,
0.0819084495306015,
-0.01607987843453884,
-0.02004610374569893,
-0.01780887320637703,
0.029267989099025726,
-0.010423021391034126,
0.014492195099592209,
0.12887540459632874,
0.05028816685080528,
0.11271486431360245,
0.07129580527544022,
0.061018723994493484,
0.052487827837467194,
-0.10601049661636353,
-0.06842045485973358,
0.03441891446709633,
0.051991235464811325,
-0.03431263938546181,
0.027526352554559708,
0.06250009685754776,
-0.09502074122428894,
0.08581481873989105,
0.04547467082738876,
-0.05718832090497017,
-0.12840262055397034,
-0.2131512463092804,
-0.039935484528541565,
-0.009873585775494576,
-0.01922614686191082,
-0.11109761893749237,
0.02520843967795372,
0.09162327647209167,
0.04036521911621094,
-0.021350940689444542,
0.06720556318759918,
-0.14774993062019348,
-0.03634478524327278,
0.038315027952194214,
-0.013328861445188522,
0.02445928379893303,
0.04746094346046448,
0.020126227289438248,
0.018880968913435936,
0.07177616655826569,
0.04965335875749588,
0.0387987419962883,
0.05352300405502319,
0.029465684667229652,
-0.050740912556648254,
-0.08779212087392807,
-0.04417913779616356,
0.003312519285827875,
0.05838843807578087,
0.12860743701457977,
0.007893456146121025,
-0.06672640144824982,
-0.0005606655031442642,
0.059257470071315765,
-0.01745421439409256,
-0.08183063566684723,
-0.11530982702970505,
0.21539592742919922,
-0.02154221013188362,
0.014750671572983265,
-0.0003958027809858322,
-0.03619948774576187,
0.020963653922080994,
0.20801842212677002,
0.25981637835502625,
-0.018817130476236343,
-0.015564962290227413,
0.013678934425115585,
0.0002549942582845688,
0.037528492510318756,
0.12045406550168991,
-0.026460489258170128,
0.22437149286270142,
-0.044869765639305115,
0.06578925251960754,
-0.05546065792441368,
-0.016523469239473343,
-0.06788409501314163,
0.05811745673418045,
0.003950690384954214,
-0.017310939729213715,
-0.010099239647388458,
0.07400206476449966,
-0.03327129781246185,
-0.12773984670639038,
-0.034078795462846756,
0.002761659910902381,
-0.05764060094952583,
0.008759692311286926,
0.0006819805130362511,
0.03641011565923691,
0.07635286450386047,
-0.013971428386867046,
-0.0023505548015236855,
0.12515690922737122,
0.012124733068048954,
-0.11220923066139221,
-0.04333432763814926,
0.12959881126880646,
0.02082996442914009,
0.12898869812488556,
0.04897955805063248,
0.08701746165752411,
0.07241803407669067,
0.014450157061219215,
-0.06532974541187286,
0.0327463373541832,
-0.026475954800844193,
0.02178187109529972,
0.0017081638798117638,
0.10803282260894775,
-0.008470045402646065,
0.07358233630657196,
0.01634775660932064,
-0.08492640405893326,
0.05699995160102844,
0.006185747683048248,
-0.07682749629020691,
-0.03534218668937683,
0.10947079956531525,
-0.10952538996934891,
0.14282333850860596,
0.14426636695861816,
-0.0029219123534858227,
-0.07180128991603851,
-0.01199413649737835,
0.023716744035482407,
-0.0077284397557377815,
0.05005871132016182,
-0.0319749191403389,
-0.13164390623569489,
0.00016561010852456093,
-0.0997379869222641,
0.04688645154237747,
-0.24333512783050537,
-0.06707587093114853,
0.016557250171899796,
-0.0030432194471359253,
-0.026371199637651443,
0.028031691908836365,
0.06605608016252518,
-0.0028129168786108494,
-0.03218532353639603,
0.026879331097006798,
-0.02822859212756157,
0.03592715412378311,
-0.07301615923643112,
-0.06638869643211365
] |
null | null | transformers | # MultiBERTs Seed 16 (uncased)
Seed 16 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT-2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-16')
model = BertModel.from_pretrained("multiberts-seed-16")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
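Continuing the snippet above, the token-level and pooled features can be read directly off the returned `output` (these are the standard `BertModel` outputs; the shapes shown assume a single input sentence):

```python
# One 768-dimensional vector per (sub)word token.
token_features = output.last_hidden_state   # shape: (1, sequence_length, 768)

# Pooled representation of the [CLS] token, often used as a sentence-level feature.
sentence_features = output.pooler_output    # shape: (1, 768)
```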
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
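That snippet is the usual fill-mask probe; pointed at this checkpoint it would look roughly as follows (the prompts are illustrative only, and this assumes the masked-language-modeling head stored in this repository loads through the `fill-mask` pipeline):

```python
from transformers import pipeline

unmasker = pipeline('fill-mask', model='multiberts-seed-16')
# Compare the completions suggested for two otherwise identical prompts.
print(unmasker("This man works as a [MASK]."))
print(unmasker("This woman works as a [MASK]."))
```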
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
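As a small illustration, passing a sentence pair to the tokenizer reproduces this `[CLS] A [SEP] B [SEP]` layout (the example sentences are arbitrary):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-16')
pair = tokenizer("The train was late.", "Everyone missed the meeting.")
print(tokenizer.decode(pair["input_ids"]))
# -> [CLS] the train was late. [SEP] everyone missed the meeting. [SEP]
```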
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a code sketch is given after the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the remaining 10% of cases, the masked tokens are left as is.
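A minimal sketch of this 80/10/10 rule over token IDs (illustrative only; it is not the original pretraining code, and `mask_id`/`vocab_size` are placeholders for the tokenizer's actual values):

```python
import random

def mask_tokens(token_ids, mask_id, vocab_size, mlm_prob=0.15):
    """Toy whole-token masking following the 15% / 80-10-10 rule described above."""
    inputs, labels = list(token_ids), [-100] * len(token_ids)  # -100 marks positions that are not predicted
    for i, tok in enumerate(token_ids):
        if random.random() < mlm_prob:
            labels[i] = tok                                # the model must predict the original token
            r = random.random()
            if r < 0.8:
                inputs[i] = mask_id                        # 80%: replace with [MASK]
            elif r < 0.9:
                inputs[i] = random.randrange(vocab_size)   # 10%: a random token (the real procedure excludes `tok`)
            # remaining 10%: keep the original token unchanged
    return inputs, labels
```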
### Pretraining
The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
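Expressed with PyTorch and `transformers` utilities, those hyperparameters correspond roughly to the configuration below (a sketch only, using `AdamW` for Adam with weight decay; the actual pretraining ran on TPUs and did not use this code):

```python
import torch
from transformers import BertModel, get_linear_schedule_with_warmup

model = BertModel.from_pretrained("multiberts-seed-16")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4,
                              betas=(0.9, 0.999), weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10_000,       # linear warmup over the first 10,000 steps
    num_training_steps=2_000_000,  # two million steps in total
)
```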
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-16 | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 16 (uncased)
Seed 16 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT-2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitations and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 16 (uncased)\n\nSeed 16 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\nThe model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 16 (uncased)\n\nSeed 16 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\nThe model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
63,
111,
335,
134,
25,
95,
48,
3,
222,
110,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 16 (uncased)\n\nSeed 16 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.06843218952417374,
0.025287076830863953,
-0.0022592272143810987,
0.09511170536279678,
0.07741425186395645,
0.02927837148308754,
0.15796053409576416,
0.02954433113336563,
-0.031132323667407036,
0.022100141271948814,
0.10254935920238495,
0.03634270280599594,
0.03305242955684662,
0.027655206620693207,
0.06880556792020798,
-0.25895607471466064,
0.07430186122655869,
-0.05862624570727348,
0.033812690526247025,
0.05812797695398331,
0.10398262739181519,
-0.07215708494186401,
0.07899860292673111,
0.04592788964509964,
-0.07427357882261276,
-0.026444606482982635,
-0.006177939474582672,
-0.03230050951242447,
0.07470116764307022,
0.09692502021789551,
0.06324969977140427,
-0.010736004449427128,
0.06296363472938538,
-0.08652788400650024,
0.019211582839488983,
0.023851163685321808,
-0.009312909096479416,
0.03755151852965355,
0.028310313820838928,
-0.00544182863086462,
0.11275266110897064,
0.02510746195912361,
0.08677197992801666,
0.04026104509830475,
-0.08700194209814072,
-0.09899833798408508,
-0.07008801400661469,
0.08712252974510193,
0.02997969463467598,
0.041872624307870865,
-0.006461290642619133,
0.06944404542446136,
-0.010838656686246395,
0.06084596365690231,
0.07904205471277237,
-0.23782867193222046,
-0.02275531180202961,
0.051842521876096725,
0.049730561673641205,
0.03841950744390488,
0.0127843227237463,
0.026974458247423172,
0.006957970559597015,
0.0502922460436821,
0.0020029728766530752,
-0.02928788587450981,
0.1365339607000351,
-0.0513334646821022,
-0.13704761862754822,
-0.0286533385515213,
0.15127448737621307,
0.023598775267601013,
-0.11218978464603424,
-0.11420539021492004,
-0.0003606090322136879,
0.17222830653190613,
-0.003641463816165924,
-0.007817285135388374,
-0.009387664496898651,
-0.005114029161632061,
0.02720736712217331,
-0.12248240411281586,
-0.08204281330108643,
-0.023987796157598495,
-0.06365484744310379,
0.1530895084142685,
0.048840247094631195,
0.0711565613746643,
-0.05877905339002609,
0.038320768624544144,
-0.15159854292869568,
-0.03464438021183014,
-0.046208325773477554,
-0.09589928388595581,
0.014482397586107254,
0.02909674681723118,
-0.0445268452167511,
-0.11820746958255768,
-0.03856411203742027,
0.07504775375127792,
0.033007457852363586,
0.037159718573093414,
-0.006932328455150127,
0.028611328452825546,
0.1050284206867218,
0.10571414232254028,
-0.06137782335281372,
0.08017876744270325,
0.01564777083694935,
-0.021092358976602554,
0.03850284218788147,
-0.056595489382743835,
-0.1231531947851181,
0.071718230843544,
-0.03780825436115265,
0.018202027305960655,
0.02081025019288063,
0.04243728891015053,
-0.010702695697546005,
-0.076555997133255,
0.1480722278356552,
-0.0976346880197525,
0.0009745159186422825,
-0.0049132537096738815,
0.015567433089017868,
0.07798659801483154,
0.024852216243743896,
0.002882356056943536,
-0.06003108620643616,
-0.02696504443883896,
-0.06379655003547668,
-0.029974471777677536,
-0.05961591377854347,
-0.1363622546195984,
0.0029291026294231415,
-0.024539997801184654,
-0.014150647446513176,
-0.10078701376914978,
-0.18281902372837067,
-0.014731919392943382,
0.0683603584766388,
-0.015134110115468502,
0.01366317830979824,
-0.0013619344681501389,
0.010072315111756325,
-0.00408889539539814,
0.03444147855043411,
-0.038140520453453064,
0.010550212115049362,
-0.01314820721745491,
-0.06286466121673584,
0.04152464121580124,
-0.11862343549728394,
0.042207349091768265,
-0.05298926681280136,
0.012495040893554688,
-0.1947425901889801,
0.10844023525714874,
-0.03142029047012329,
-0.04131345450878143,
-0.04436784237623215,
-0.061707522720098495,
0.019620675593614578,
0.047837503254413605,
-0.014169573783874512,
0.10009358078241348,
-0.1226838082075119,
-0.052783362567424774,
0.15807005763053894,
-0.15840108692646027,
0.020073948428034782,
0.10860441625118256,
-0.06904544681310654,
0.03724672272801399,
0.14464205503463745,
0.07651244103908539,
0.07619728893041611,
-0.04151236265897751,
0.017746035009622574,
0.06072764843702316,
-0.04582522064447403,
0.08020269870758057,
0.10728684067726135,
-0.010244236327707767,
-0.12418796122074127,
0.030841846019029617,
-0.06376972794532776,
-0.03403341770172119,
-0.023336365818977356,
-0.023115187883377075,
0.015269989147782326,
-0.052789490669965744,
0.06010546162724495,
-0.007568000815808773,
0.0038364622741937637,
-0.026125462725758553,
-0.07483150064945221,
0.08059364557266235,
0.07748159766197205,
-0.08571702986955643,
0.015011888928711414,
-0.09432441741228104,
0.030475903302431107,
-0.0625603124499321,
-0.005165082402527332,
-0.14370955526828766,
-0.04294664412736893,
0.030875110998749733,
-0.07359716296195984,
0.09884636104106903,
0.12116780132055283,
0.010604621842503548,
0.11534187197685242,
-0.04587939381599426,
0.028041861951351166,
-0.012389305979013443,
-0.00984781701117754,
-0.04346788302063942,
-0.1407451182603836,
-0.06907361000776291,
-0.06546932458877563,
0.07845871150493622,
-0.061238355934619904,
0.02233702689409256,
-0.08152081072330475,
-0.042361244559288025,
-0.026355460286140442,
-0.04330513998866081,
0.0057023996487259865,
0.006480911746621132,
-0.013369996100664139,
-0.02895865961909294,
0.03931175917387009,
0.024304477497935295,
-0.08754031360149384,
0.09030650556087494,
-0.1241527795791626,
-0.05601765587925911,
0.06856853514909744,
-0.00758463516831398,
-0.039609335362911224,
0.09585240483283997,
0.010626145638525486,
-0.009849770925939083,
-0.0549684539437294,
-0.05067330598831177,
0.22230729460716248,
-0.021995939314365387,
0.08496858179569244,
-0.11490160971879959,
0.0024838957469910383,
0.03649298846721649,
-0.0571625642478466,
-0.05744781345129013,
0.07816706597805023,
0.03646382689476013,
-0.21041984856128693,
0.024065770208835602,
0.0714656338095665,
0.06443584710359573,
0.14197789132595062,
0.023299260064959526,
-0.02936018630862236,
-0.0594838447868824,
-0.014607426710426807,
-0.012405280023813248,
0.05492544546723366,
-0.046533890068531036,
0.0027081237640231848,
0.04999750107526779,
0.05316811427474022,
0.018620966002345085,
-0.07081347703933716,
0.024096351116895676,
0.05304664745926857,
-0.01648266240954399,
-0.06788153946399689,
-0.05491888150572777,
-0.03682999312877655,
0.0742255300283432,
0.03907259926199913,
0.049526646733284,
0.05051831156015396,
-0.02150079607963562,
-0.1358242928981781,
0.16775643825531006,
-0.13729044795036316,
-0.22638177871704102,
-0.13373532891273499,
-0.08342380821704865,
-0.07489001005887985,
0.04059598594903946,
0.03581943362951279,
-0.04090673476457596,
-0.051278796046972275,
-0.10286277532577515,
0.07265210896730423,
-0.11626759171485901,
-0.059949927031993866,
0.017288412898778915,
-0.05390576273202896,
-0.007869705557823181,
-0.1295398473739624,
-0.00932765007019043,
-0.0265757255256176,
-0.07998194545507431,
0.0029893210157752037,
-0.04145938530564308,
0.008531760424375534,
0.13254579901695251,
0.0044862376525998116,
-0.013708788901567459,
-0.014109635725617409,
0.20101144909858704,
0.030030477792024612,
0.045231521129608154,
0.13102996349334717,
-0.06657536327838898,
0.058047015219926834,
0.0198298878967762,
0.039458028972148895,
-0.04884949326515198,
-0.0028225649148225784,
-0.02485082671046257,
-0.11581293493509293,
-0.21590988337993622,
-0.06616590917110443,
0.006469169165939093,
0.012707974761724472,
0.019331319257616997,
0.01527206040918827,
0.022518852725625038,
0.05326266959309578,
-0.033788543194532394,
0.03796406090259552,
0.034059617668390274,
0.05260729417204857,
0.07201109826564789,
-0.06331834942102432,
0.09613782912492752,
-0.07110511511564255,
0.02227705717086792,
0.10855486243963242,
-0.06988578289747238,
0.15599501132965088,
0.04180825874209404,
0.057573102414608,
0.09266480058431625,
-0.00430075079202652,
0.058503977954387665,
0.10882087051868439,
-0.06477393954992294,
0.017933260649442673,
-0.07434545457363129,
-0.056851424276828766,
-0.04631132632493973,
0.06028926372528076,
0.03359292075037956,
-0.001545928418636322,
-0.1002250462770462,
0.036167193204164505,
-0.03874512016773224,
0.07421615719795227,
0.06583540141582489,
-0.10744106769561768,
-0.10170676559209824,
0.04310186207294464,
-0.04397530108690262,
-0.08840035647153854,
0.03601754829287529,
0.08888539671897888,
-0.13855531811714172,
0.061642203480005264,
0.017798641696572304,
0.07325367629528046,
-0.09705677628517151,
0.01214396022260189,
-0.06767335534095764,
0.01726139709353447,
0.004644750617444515,
0.09410297870635986,
-0.1371058076620102,
0.1067095622420311,
0.029207266867160797,
0.04118534177541733,
-0.09092235565185547,
0.019327819347381592,
-0.013126074336469173,
0.07495761662721634,
0.11927482485771179,
0.042137254029512405,
-0.060716014355421066,
-0.02437814325094223,
-0.06444524973630905,
0.034064747393131256,
0.07453392446041107,
-0.034016311168670654,
0.04018285498023033,
0.0011199815198779106,
0.014828482642769814,
-0.006915959995239973,
0.013246433809399605,
-0.13036322593688965,
-0.14629486203193665,
0.06856675446033478,
-0.06790950894355774,
-0.09029125422239304,
-0.03636135160923004,
-0.06729575246572495,
-0.09072379767894745,
0.15162211656570435,
-0.07569988071918488,
-0.11056430637836456,
-0.10563124716281891,
0.008474806323647499,
0.07385830581188202,
-0.06611298769712448,
0.05049094930291176,
-0.050736524164676666,
0.09355761110782623,
-0.04112013429403305,
-0.11157645285129547,
0.015143784694373608,
-0.09135884791612625,
-0.11292403191328049,
-0.029822107404470444,
0.0943172350525856,
0.14608962833881378,
0.053448066115379333,
0.022517407312989235,
0.016610169783234596,
0.0021873358637094498,
-0.12775442004203796,
0.006356213241815567,
0.13190250098705292,
0.013882819563150406,
0.10014393925666809,
-0.0625646784901619,
-0.024082666262984276,
-0.01147008128464222,
0.004572942852973938,
0.13466280698776245,
0.1555166393518448,
-0.057882413268089294,
0.14906638860702515,
0.22982877492904663,
-0.09964586794376373,
-0.19492043554782867,
-0.07505162060260773,
0.0034511154517531395,
0.047491077333688736,
0.04624705761671066,
-0.20247882604599,
0.10418955981731415,
0.05108506977558136,
-0.01082523912191391,
-0.031313780695199966,
-0.18538504838943481,
-0.10249387472867966,
0.1025627851486206,
0.06277959793806076,
0.19384807348251343,
-0.06736693531274796,
-0.04259756952524185,
-0.042661331593990326,
-0.05892954766750336,
0.09326490759849548,
-0.008108682930469513,
0.08119462430477142,
0.01608370617032051,
0.0181170292198658,
-0.0008646436035633087,
-0.008520836010575294,
0.10870785266160965,
0.04506970942020416,
0.02023235335946083,
-0.07534687221050262,
-0.04455399513244629,
0.11103551834821701,
-0.02874402515590191,
0.12335251271724701,
0.02780544012784958,
0.059650667011737823,
-0.07774119079113007,
-0.06082526221871376,
-0.08174323290586472,
0.00901712290942669,
-0.0399981290102005,
-0.05018986016511917,
-0.05324225872755051,
0.03748824819922447,
0.021323909983038902,
0.01190401241183281,
-0.012886673212051392,
-0.05380711704492569,
0.013829125091433525,
0.06474905461072922,
0.16118569672107697,
-0.015604503452777863,
-0.06283029913902283,
-0.06881941854953766,
-0.055999837815761566,
0.04644515737891197,
-0.10590644180774689,
0.031383004039525986,
0.023874636739492416,
-0.001869529252871871,
0.11355061829090118,
0.032861191779375076,
-0.11503543704748154,
0.011791788972914219,
0.007265083491802216,
-0.09865899384021759,
-0.1550726294517517,
-0.018373358994722366,
0.05127527564764023,
-0.05986408516764641,
0.03849392756819725,
0.1568911075592041,
-0.026129167526960373,
-0.03358836844563484,
-0.05612359941005707,
0.03231396526098251,
-0.03179622441530228,
0.03999538719654083,
0.08149461448192596,
0.018351873382925987,
-0.0824541449546814,
0.05474882572889328,
0.04637539014220238,
-0.01667497120797634,
0.0646241307258606,
0.022199274972081184,
-0.06781093776226044,
-0.08506011962890625,
-0.06819535046815872,
0.1115688607096672,
-0.03682529181241989,
-0.06625242531299591,
0.046368785202503204,
-0.10926491767168045,
0.06776081025600433,
0.09714997559785843,
0.037095047533512115,
0.04540341719985008,
-0.08357270061969757,
0.00625385157763958,
-0.036888591945171356,
0.036075323820114136,
-0.046973392367362976,
-0.032524287700653076,
-0.04302437603473663,
0.02956405095756054,
0.05901613086462021,
0.0982743427157402,
-0.03563253581523895,
-0.0766075849533081,
-0.08616559207439423,
-0.01378970593214035,
-0.10498888790607452,
-0.007675850763916969,
-0.06972382217645645,
0.0014680675230920315,
0.005510678049176931,
-0.02831805869936943,
0.028242800384759903,
0.03204817324876785,
-0.05025727301836014,
-0.008712989278137684,
-0.027972551062703133,
0.05670490860939026,
-0.07334809750318527,
0.012948542833328247,
0.017900604754686356,
-0.018556972965598106,
0.08993001282215118,
0.04300327226519585,
-0.03173309564590454,
0.05522962287068367,
-0.039204519242048264,
0.03232274949550629,
-0.04508315771818161,
0.009440774098038673,
-0.01881864108145237,
-0.1097336858510971,
-0.018572159111499786,
0.009410511702299118,
-0.022281166166067123,
0.014959749765694141,
0.06806876510381699,
-0.05186561495065689,
0.08472907543182373,
0.06164681166410446,
-0.05417931452393532,
-0.056685999035835266,
0.04097885265946388,
0.0032008178532123566,
0.015440150164067745,
0.07822390645742416,
0.003670924110338092,
0.05713411420583725,
-0.08868926018476486,
-0.0006725629791617393,
0.003923787269741297,
-0.0178929902613163,
-0.016827082261443138,
-0.07126379013061523,
-0.0010176943615078926,
0.008942862041294575,
0.1790584772825241,
-0.002214021049439907,
-0.01795724406838417,
0.004425108432769775,
0.07148699462413788,
0.031524501740932465,
0.004761195741593838,
0.07894758880138397,
-0.01859964430332184,
-0.020024698227643967,
-0.02084130421280861,
0.025794662535190582,
-0.011737353168427944,
0.014730148017406464,
0.12812401354312897,
0.04840648174285889,
0.11431404948234558,
0.07277534902095795,
0.06401850283145905,
0.052029822021722794,
-0.10645008832216263,
-0.07103537768125534,
0.03394562005996704,
0.052054259926080704,
-0.03481914848089218,
0.02922024205327034,
0.05957597494125366,
-0.09427031874656677,
0.08820599317550659,
0.04505525156855583,
-0.05801251530647278,
-0.12835821509361267,
-0.21311891078948975,
-0.040287941694259644,
-0.008982634171843529,
-0.019421067088842392,
-0.11184443533420563,
0.025418493896722794,
0.09368474781513214,
0.04030037671327591,
-0.022877907380461693,
0.06656907498836517,
-0.15012314915657043,
-0.033797986805438995,
0.037022098898887634,
-0.01275310106575489,
0.026537999510765076,
0.05105805769562721,
0.01895391382277012,
0.018173761665821075,
0.07546921074390411,
0.051050856709480286,
0.04042421653866768,
0.05426982790231705,
0.030231868848204613,
-0.05020613223314285,
-0.08704809844493866,
-0.04464396461844444,
0.0026918542571365833,
0.05976665019989014,
0.1260172575712204,
0.007803276181221008,
-0.06736253201961517,
-0.0003091171383857727,
0.05642944946885109,
-0.017087236046791077,
-0.08140391856431961,
-0.11413678526878357,
0.21202906966209412,
-0.02308196946978569,
0.01483325008302927,
-0.0007451213896274567,
-0.03686142712831497,
0.018893010914325714,
0.20761719346046448,
0.26449158787727356,
-0.0203158650547266,
-0.016440223902463913,
0.012935437262058258,
0.0009617256000638008,
0.03815459460020065,
0.12419574707746506,
-0.02648315392434597,
0.22583773732185364,
-0.045242972671985626,
0.06093816086649895,
-0.05628277361392975,
-0.016860678791999817,
-0.06985531747341156,
0.05847644805908203,
0.0013619564706459641,
-0.01734623685479164,
-0.010555289685726166,
0.07174161821603775,
-0.033096909523010254,
-0.12843875586986542,
-0.02967214584350586,
0.0008202830795198679,
-0.05728404223918915,
0.009121461771428585,
0.000685487873852253,
0.03514479100704193,
0.07837921380996704,
-0.015739168971776962,
-0.002372847869992256,
0.12553077936172485,
0.010176708921790123,
-0.11317405849695206,
-0.036456480622291565,
0.12728241086006165,
0.020453669130802155,
0.13144969940185547,
0.04902791231870651,
0.08981616795063019,
0.07159259915351868,
0.015089846216142178,
-0.0647144541144371,
0.03328430652618408,
-0.02648051083087921,
0.020729245617985725,
0.0016118930652737617,
0.10728815197944641,
-0.009041296318173409,
0.07139217853546143,
0.017462903633713722,
-0.08548102527856827,
0.05723573639988899,
0.008413426578044891,
-0.07645109295845032,
-0.03431102633476257,
0.10798947513103485,
-0.11219412088394165,
0.14167040586471558,
0.14559316635131836,
-0.0023363810032606125,
-0.07180281728506088,
-0.011847575195133686,
0.023115824908018112,
-0.009780367836356163,
0.04881574958562851,
-0.03137633204460144,
-0.13055697083473206,
-0.001430378295481205,
-0.09618175774812698,
0.04600363224744797,
-0.24773389101028442,
-0.0659949779510498,
0.017147667706012726,
-0.0025130524300038815,
-0.027346782386302948,
0.03073454275727272,
0.0638020932674408,
-0.00336946127936244,
-0.0320308655500412,
0.028497567400336266,
-0.02766585163772106,
0.03575187921524048,
-0.07263164967298508,
-0.06620178371667862
] |
null | null | transformers | # MultiBERTs Seed 17 (uncased)
Seed 17 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
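To make that last point concrete, here is a minimal, hedged sketch of training a standard classifier on the pooled features; scikit-learn, the example sentences, and the toy labels are assumptions made only for illustration, and the checkpoint name matches the usage shown later in this card:

```python
import torch
from sklearn.linear_model import LogisticRegression
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-17')
model = BertModel.from_pretrained('multiberts-seed-17')

sentences = ["I loved this movie.", "This was a waste of time."]
labels = [1, 0]  # toy sentiment labels, purely illustrative

with torch.no_grad():
    encoded = tokenizer(sentences, padding=True, return_tensors='pt')
    # pooler_output is the [CLS] representation after the pooling layer
    features = model(**encoded).pooler_output

# Any standard classifier can consume these fixed features
classifier = LogisticRegression().fit(features.numpy(), labels)
print(classifier.predict(features.numpy()))
```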
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# Load the tokenizer and the seed-17 encoder weights from the Hub
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-17')
model = BertModel.from_pretrained("multiberts-seed-17")

text = "Replace me by any text you'd like."
# Tokenize and return PyTorch tensors
encoded_input = tokenizer(text, return_tensors='pt')
# output.last_hidden_state holds the contextual token representations
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
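As a hedged illustration only (not an authoritative bias evaluation), a probe similar to the one in the bert-base-uncased card can be run with the fill-mask pipeline; this assumes the masked-language-modeling head stored with this pretraining checkpoint loads under that pipeline, and the template sentences are arbitrary examples:

```python
from transformers import pipeline

# Loading the MLM head from a pretraining checkpoint may print a warning
# about unused next-sentence-prediction weights; that is expected here.
unmasker = pipeline('fill-mask', model='multiberts-seed-17')

print(unmasker("The man worked as a [MASK]."))
print(unmasker("The woman worked as a [MASK]."))
```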
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
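As a quick, hedged check of this layout (the checkpoint name matches the usage above and the two sentences are arbitrary examples), passing a sentence pair to the tokenizer reproduces it:

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-17')

# Passing two texts yields the [CLS] Sentence A [SEP] Sentence B [SEP] layout
pair = tokenizer("The cat sat on the mat.", "It looked comfortable.")
print(tokenizer.convert_ids_to_tokens(pair["input_ids"]))
# Expected shape of the output: ['[CLS]', ..., '[SEP]', ..., '[SEP]']
# token_type_ids mark segment A tokens with 0 and segment B tokens with 1
print(pair["token_type_ids"])
```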
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a sketch implementing this rule appears after the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
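The sketch below is a minimal, hedged reimplementation of this rule for a batch of token ids in PyTorch. It mirrors the logic of common MLM data collators rather than the original training code; the function name and the `-100` ignore-index are conventions assumed only for the example.

```python
import torch


def mask_tokens(input_ids, tokenizer, mlm_probability=0.15):
    """Apply the 15% / 80-10-10 masking rule described above to a batch of ids."""
    input_ids = input_ids.clone()
    labels = input_ids.clone()

    # Sample 15% of the non-special tokens for masking
    probability_matrix = torch.full(labels.shape, mlm_probability)
    special_tokens_mask = torch.tensor(
        [tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True)
         for ids in labels.tolist()],
        dtype=torch.bool,
    )
    probability_matrix.masked_fill_(special_tokens_mask, value=0.0)
    masked_indices = torch.bernoulli(probability_matrix).bool()
    labels[~masked_indices] = -100  # only compute the loss on masked tokens

    # 80% of the masked tokens are replaced by [MASK]
    indices_replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices
    input_ids[indices_replaced] = tokenizer.convert_tokens_to_ids(tokenizer.mask_token)

    # 10% are replaced by a random token from the vocabulary
    indices_random = (torch.bernoulli(torch.full(labels.shape, 0.5)).bool()
                      & masked_indices & ~indices_replaced)
    random_words = torch.randint(len(tokenizer), labels.shape, dtype=torch.long)
    input_ids[indices_random] = random_words[indices_random]

    # The remaining 10% of the masked tokens are left unchanged
    return input_ids, labels
```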
### Pretraining
The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
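For reference, a hedged sketch of an equivalent optimizer and schedule set up with PyTorch and the `transformers` scheduler helper is shown below; the original runs used TPU training code, so AdamW appears here only as the closest decoupled-weight-decay analogue of that Adam configuration:

```python
import torch
from transformers import BertForPreTraining, get_linear_schedule_with_warmup

model = BertForPreTraining.from_pretrained("multiberts-seed-17")

# Adam with the reported hyperparameters and decoupled weight decay
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1e-4,
    betas=(0.9, 0.999),
    weight_decay=0.01,
)
# 10,000 warmup steps, then linear decay over the two million training steps
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10_000,
    num_training_steps=2_000_000,
)
```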
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-17 | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 17 (uncased)
Seed 17 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitations and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 17 (uncased)\n\nSeed 17 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\nThe model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 17 (uncased)\n\nSeed 17 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\nThe model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
63,
111,
335,
134,
25,
95,
48,
3,
222,
110,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 17 (uncased)\n\nSeed 17 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.067503422498703,
0.02141827531158924,
-0.002213094849139452,
0.09443269670009613,
0.07498180121183395,
0.02794327586889267,
0.1589856594800949,
0.03151562437415123,
-0.03138815984129906,
0.02209601365029812,
0.10488973557949066,
0.04015721008181572,
0.033101536333560944,
0.035693008452653885,
0.06712768971920013,
-0.2625011205673218,
0.074310302734375,
-0.05935688316822052,
0.04167960584163666,
0.0589548721909523,
0.10305428504943848,
-0.07140012085437775,
0.07930392026901245,
0.04417954757809639,
-0.07308058440685272,
-0.024151884019374847,
-0.004493884742259979,
-0.03313235193490982,
0.07436682283878326,
0.09802259504795074,
0.062386997044086456,
-0.013016490265727043,
0.06283821165561676,
-0.08921302855014801,
0.018835190683603287,
0.02536207251250744,
-0.010619208216667175,
0.03682352602481842,
0.026613706722855568,
-0.005120881367474794,
0.11852019280195236,
0.020612400025129318,
0.0870826467871666,
0.03933516889810562,
-0.08725343644618988,
-0.10597118735313416,
-0.07209661602973938,
0.09124213457107544,
0.03294751048088074,
0.040306929498910904,
-0.007077813148498535,
0.07321753352880478,
-0.009184807538986206,
0.06126035377383232,
0.08286573737859726,
-0.24251675605773926,
-0.025555439293384552,
0.05140338093042374,
0.052647124975919724,
0.04042696952819824,
0.015468820929527283,
0.030522199347615242,
0.007747247815132141,
0.047927480190992355,
0.002252653008326888,
-0.029520832002162933,
0.14521628618240356,
-0.05400200933218002,
-0.13809239864349365,
-0.029488779604434967,
0.1560949683189392,
0.02547401562333107,
-0.11175455152988434,
-0.11677978187799454,
-0.0002225516363978386,
0.17267867922782898,
-0.004859128035604954,
-0.00924413651227951,
-0.009941681288182735,
-0.005261940881609917,
0.027175895869731903,
-0.1216244027018547,
-0.08126193284988403,
-0.023371979594230652,
-0.061202045530080795,
0.15034732222557068,
0.04845278710126877,
0.06999073922634125,
-0.060573138296604156,
0.036477312445640564,
-0.15518179535865784,
-0.03364581614732742,
-0.045634396374225616,
-0.0987742692232132,
0.013412047177553177,
0.028252221643924713,
-0.042955346405506134,
-0.12039010226726532,
-0.03998266160488129,
0.07959824800491333,
0.03759582340717316,
0.03745982050895691,
-0.012524289079010487,
0.029625318944454193,
0.10340240597724915,
0.10585451126098633,
-0.06060585379600525,
0.07662170380353928,
0.01744288019835949,
-0.02049172669649124,
0.04216887801885605,
-0.056520894169807434,
-0.12189650535583496,
0.07190161198377609,
-0.039366453886032104,
0.02137494832277298,
0.023431027308106422,
0.04117822274565697,
-0.009909354150295258,
-0.07427668571472168,
0.1455388069152832,
-0.09630714356899261,
0.004087073728442192,
-0.005773664452135563,
0.015504710376262665,
0.07741385698318481,
0.02562493458390236,
0.0013745161704719067,
-0.05967354401946068,
-0.030071046203374863,
-0.06434935331344604,
-0.03000233694911003,
-0.06066522002220154,
-0.13820073008537292,
0.0049070920795202255,
-0.01895635575056076,
-0.01457932684570551,
-0.10125041007995605,
-0.1841278225183487,
-0.014935160055756569,
0.06791167706251144,
-0.015478812158107758,
0.013864785432815552,
-0.001206583809107542,
0.011331502348184586,
-0.0037879925221204758,
0.03419054299592972,
-0.038819219917058945,
0.011123644188046455,
-0.012539529241621494,
-0.06527964025735855,
0.04229439049959183,
-0.11756671220064163,
0.040899112820625305,
-0.05337512120604515,
0.013022040948271751,
-0.1931426227092743,
0.10825660079717636,
-0.031552888453006744,
-0.041315674781799316,
-0.04551618546247482,
-0.06196480616927147,
0.019769523292779922,
0.048981763422489166,
-0.012790950015187263,
0.09944277256727219,
-0.12667511403560638,
-0.05434766039252281,
0.1610117107629776,
-0.15968742966651917,
0.01913679949939251,
0.1078292652964592,
-0.06762883812189102,
0.038885895162820816,
0.14609116315841675,
0.07574692368507385,
0.07819432020187378,
-0.04281788691878319,
0.018150296062231064,
0.059476785361766815,
-0.04718872904777527,
0.07856391370296478,
0.10646391659975052,
-0.0069376807659864426,
-0.1290738582611084,
0.029941685497760773,
-0.06147834658622742,
-0.03347835689783096,
-0.024478213861584663,
-0.024416834115982056,
0.014820527285337448,
-0.051450252532958984,
0.057673025876283646,
-0.006560139358043671,
0.004061744548380375,
-0.028105400502681732,
-0.07615075260400772,
0.08123332262039185,
0.07656075805425644,
-0.08567029237747192,
0.016325797885656357,
-0.09366440773010254,
0.0312948077917099,
-0.05913647264242172,
-0.002952295821160078,
-0.1418735831975937,
-0.04217644780874252,
0.03051096946001053,
-0.07293712347745895,
0.09928697347640991,
0.12035776674747467,
0.011028279550373554,
0.11364845931529999,
-0.04596477001905441,
0.026589790359139442,
-0.011518249288201332,
-0.010173756629228592,
-0.04515719413757324,
-0.14233312010765076,
-0.06994679570198059,
-0.06599919497966766,
0.07900018244981766,
-0.06448281556367874,
0.0229032039642334,
-0.08773092925548553,
-0.039849720895290375,
-0.025930006057024002,
-0.043879613280296326,
0.006284976843744516,
0.00806949008256197,
-0.010781772434711456,
-0.02849634736776352,
0.04022718966007233,
0.02123725786805153,
-0.08828228712081909,
0.08740955591201782,
-0.12467222660779953,
-0.054013848304748535,
0.06874380260705948,
-0.005008850246667862,
-0.04059109091758728,
0.0947439968585968,
0.009239242412149906,
-0.010031310841441154,
-0.057216912508010864,
-0.0502161867916584,
0.21564146876335144,
-0.021588880568742752,
0.08292403817176819,
-0.11456509679555893,
0.0031130181159824133,
0.036968477070331573,
-0.05698203295469284,
-0.05947216972708702,
0.07984435558319092,
0.03535696119070053,
-0.21676373481750488,
0.02300751954317093,
0.07287760078907013,
0.06491895765066147,
0.14269426465034485,
0.025700856000185013,
-0.029781587421894073,
-0.0607006810605526,
-0.015155422501266003,
-0.010772956535220146,
0.05401176959276199,
-0.043780066072940826,
0.0028945067897439003,
0.04951466619968414,
0.04982699453830719,
0.017360109835863113,
-0.07044899463653564,
0.02180682122707367,
0.05131754279136658,
-0.014938944950699806,
-0.07058285176753998,
-0.04936802014708519,
-0.03577536344528198,
0.0755748301744461,
0.03803553432226181,
0.04722747206687927,
0.05094682052731514,
-0.02096567302942276,
-0.13479425013065338,
0.16814611852169037,
-0.13815060257911682,
-0.23125094175338745,
-0.13185185194015503,
-0.07839476317167282,
-0.07475918531417847,
0.038208529353141785,
0.038519486784935,
-0.04060559719800949,
-0.05148753523826599,
-0.10281785577535629,
0.06959342956542969,
-0.11766980588436127,
-0.05805288255214691,
0.015107423067092896,
-0.05371858924627304,
-0.007882177829742432,
-0.12991729378700256,
-0.00877443514764309,
-0.027343720197677612,
-0.07961961627006531,
0.0012052832171320915,
-0.04312358796596527,
0.006292011588811874,
0.13016557693481445,
0.004672368988394737,
-0.012118306942284107,
-0.015974221751093864,
0.1993105411529541,
0.029931005090475082,
0.044144660234451294,
0.13098827004432678,
-0.06282629072666168,
0.05826491117477417,
0.02052929438650608,
0.03916199877858162,
-0.05114586651325226,
-0.0018025171011686325,
-0.023682894185185432,
-0.11746722459793091,
-0.21589866280555725,
-0.0666518583893776,
0.006650662515312433,
0.01628301665186882,
0.02217475138604641,
0.015277508646249771,
0.02081390842795372,
0.051301296800374985,
-0.0355960913002491,
0.03951897844672203,
0.03677316755056381,
0.05328281596302986,
0.0731048583984375,
-0.0658518522977829,
0.09661577641963959,
-0.07107055932283401,
0.02400789223611355,
0.10798504203557968,
-0.0741153135895729,
0.15289130806922913,
0.04342855513095856,
0.047046929597854614,
0.09251734614372253,
-0.004751894623041153,
0.057470571249723434,
0.10936199128627777,
-0.06411474198102951,
0.017850331962108612,
-0.07435709238052368,
-0.05782575532793999,
-0.04829084128141403,
0.06388919800519943,
0.03681172430515289,
-0.001907235011458397,
-0.10099312663078308,
0.03528663516044617,
-0.037185944616794586,
0.07051455974578857,
0.06716687232255936,
-0.10626362264156342,
-0.0997396856546402,
0.0442410409450531,
-0.04627106338739395,
-0.08915888518095016,
0.03725045174360275,
0.08547109365463257,
-0.13676562905311584,
0.06457704305648804,
0.016649775207042694,
0.07350663840770721,
-0.0961364358663559,
0.011907198466360569,
-0.07224711030721664,
0.018793202936649323,
0.005051975138485432,
0.09295819699764252,
-0.13570162653923035,
0.10983680188655853,
0.029794758185744286,
0.04195024073123932,
-0.09169404208660126,
0.01944573223590851,
-0.01386859081685543,
0.07468520849943161,
0.12098798155784607,
0.04280918836593628,
-0.06398085504770279,
-0.02522755227982998,
-0.06511221826076508,
0.034704796969890594,
0.07385267317295074,
-0.03172238916158676,
0.03745513781905174,
0.00134366936981678,
0.01349165290594101,
-0.007283596787601709,
0.015707697719335556,
-0.1350136250257492,
-0.1429181694984436,
0.06915324926376343,
-0.06680463254451752,
-0.08743371069431305,
-0.03870067372918129,
-0.06869514286518097,
-0.08647220581769943,
0.15388384461402893,
-0.07233770191669464,
-0.10827820003032684,
-0.10574344545602798,
0.013387682847678661,
0.07566803693771362,
-0.06596789509057999,
0.04961797595024109,
-0.049172818660736084,
0.09206011891365051,
-0.043975070118904114,
-0.11293056607246399,
0.016281455755233765,
-0.09006573259830475,
-0.1115713119506836,
-0.029936863109469414,
0.09619066119194031,
0.1458248347043991,
0.053490981459617615,
0.023565130308270454,
0.015034642070531845,
0.004627179354429245,
-0.12786658108234406,
0.007315848022699356,
0.13626894354820251,
0.007641211152076721,
0.09903685003519058,
-0.06094330549240112,
-0.029205814003944397,
-0.009469430893659592,
0.004978178068995476,
0.1332392394542694,
0.15876346826553345,
-0.05872802808880806,
0.1486915946006775,
0.2274894416332245,
-0.10027536749839783,
-0.19248563051223755,
-0.07687722891569138,
0.0025214171037077904,
0.046950601041316986,
0.04553172364830971,
-0.20036330819129944,
0.10243304073810577,
0.05380861833691597,
-0.011203031055629253,
-0.036051195114851,
-0.18925544619560242,
-0.10416156053543091,
0.10562381148338318,
0.06240030378103256,
0.19833791255950928,
-0.06527277082204819,
-0.042975395917892456,
-0.04263509437441826,
-0.04855514317750931,
0.09764155745506287,
-0.01466243714094162,
0.08187155425548553,
0.015913192182779312,
0.014415886253118515,
-0.0007923692464828491,
-0.008005289360880852,
0.10811933875083923,
0.04132936894893646,
0.02134402096271515,
-0.07274927198886871,
-0.04304597154259682,
0.10632550716400146,
-0.031147923320531845,
0.12489520013332367,
0.030099939554929733,
0.05899905413389206,
-0.0799257755279541,
-0.059758782386779785,
-0.08172933757305145,
0.012345526367425919,
-0.0400831401348114,
-0.050473012030124664,
-0.054094307124614716,
0.03906061500310898,
0.021136829629540443,
0.01010778360068798,
-0.006704917177557945,
-0.05423025041818619,
0.01821175031363964,
0.06288240849971771,
0.1612115353345871,
-0.01605052500963211,
-0.05762109160423279,
-0.0687941312789917,
-0.057562947273254395,
0.04900934547185898,
-0.10687297582626343,
0.030551454052329063,
0.02306654118001461,
-0.0020597907714545727,
0.11247046291828156,
0.03143870085477829,
-0.11681148409843445,
0.011990468949079514,
0.009135245345532894,
-0.09880369901657104,
-0.1542382389307022,
-0.018749404698610306,
0.05294045805931091,
-0.0573420450091362,
0.039020854979753494,
0.15868106484413147,
-0.028001312166452408,
-0.03423719108104706,
-0.05535004287958145,
0.03202114626765251,
-0.033627986907958984,
0.040396057069301605,
0.0798807293176651,
0.017295729368925095,
-0.0819128155708313,
0.05237917602062225,
0.04442689195275307,
-0.01853027008473873,
0.06537657976150513,
0.026244070380926132,
-0.06517962366342545,
-0.08448711782693863,
-0.06887836754322052,
0.11027295887470245,
-0.040851056575775146,
-0.06649477779865265,
0.04841364920139313,
-0.10724157094955444,
0.06856030225753784,
0.10451589524745941,
0.03535475209355354,
0.04480015113949776,
-0.08387209475040436,
0.006057461723685265,
-0.03515074402093887,
0.03572076931595802,
-0.046036265790462494,
-0.03358675166964531,
-0.04358662664890289,
0.03038591891527176,
0.06021590530872345,
0.09518350660800934,
-0.03655112534761429,
-0.07737191021442413,
-0.09068034589290619,
-0.012220900505781174,
-0.10173771530389786,
-0.010829816572368145,
-0.07342659682035446,
0.0026572300121188164,
0.005163986701518297,
-0.027640394866466522,
0.026836145669221878,
0.03099406324326992,
-0.050114087760448456,
-0.008449096232652664,
-0.02805035002529621,
0.06007365137338638,
-0.07650666683912277,
0.011929929256439209,
0.01620594970881939,
-0.017624594271183014,
0.08797363191843033,
0.04264284670352936,
-0.030058767646551132,
0.05507596582174301,
-0.043783038854599,
0.034025274217128754,
-0.04445680230855942,
0.009458459913730621,
-0.016511978581547737,
-0.11147849261760712,
-0.017631439492106438,
0.010574983432888985,
-0.024921918287873268,
0.01679258793592453,
0.06670372188091278,
-0.05246088653802872,
0.08074261993169785,
0.06278723478317261,
-0.055770836770534515,
-0.05624014511704445,
0.041365839540958405,
0.0033511705696582794,
0.0156472809612751,
0.07696404308080673,
0.0017405569087713957,
0.05722879618406296,
-0.08953151106834412,
-0.0005984180606901646,
0.0028204452246427536,
-0.020984813570976257,
-0.007719637826085091,
-0.07031358778476715,
-0.0002526422031223774,
0.009153841063380241,
0.18003042042255402,
-0.00021494831889867783,
-0.014800000935792923,
0.00547481793910265,
0.07006221264600754,
0.02950267307460308,
0.004044370725750923,
0.07737697660923004,
-0.017750490456819534,
-0.017599398270249367,
-0.019422873854637146,
0.026383161544799805,
-0.010430379770696163,
0.01920991763472557,
0.12799149751663208,
0.04205135628581047,
0.11879729479551315,
0.0732041522860527,
0.06476697325706482,
0.051812849938869476,
-0.1084054708480835,
-0.07441218197345734,
0.03477209061384201,
0.053076911717653275,
-0.03600676357746124,
0.022481273859739304,
0.06272435933351517,
-0.0950550064444542,
0.08570514619350433,
0.0471232533454895,
-0.05765385925769806,
-0.1297813355922699,
-0.21611487865447998,
-0.04065151512622833,
-0.011131135746836662,
-0.018148649483919144,
-0.11196097731590271,
0.026196889579296112,
0.08438848704099655,
0.040272291749715805,
-0.019279908388853073,
0.06302466988563538,
-0.14885680377483368,
-0.032058265060186386,
0.03663601726293564,
-0.012843383476138115,
0.02545205131173134,
0.04915058612823486,
0.01730850525200367,
0.016652526333928108,
0.07044520229101181,
0.0480404756963253,
0.039096176624298096,
0.05495758354663849,
0.03163345903158188,
-0.04995986074209213,
-0.0868929997086525,
-0.045548684895038605,
0.002196814399212599,
0.0610840767621994,
0.1271200031042099,
0.009635759517550468,
-0.06989189237356186,
-0.00023869192227721214,
0.055881671607494354,
-0.018020283430814743,
-0.07885146141052246,
-0.11505493521690369,
0.22028140723705292,
-0.023634135723114014,
0.014236431568861008,
-0.0020600585266947746,
-0.03452759236097336,
0.019220424816012383,
0.20690511167049408,
0.2648163437843323,
-0.013090662658214569,
-0.015031861141324043,
0.012198489159345627,
0.0007161875255405903,
0.0336185023188591,
0.12238378822803497,
-0.025254245847463608,
0.22609320282936096,
-0.04574279487133026,
0.06457420438528061,
-0.056207701563835144,
-0.018297266215085983,
-0.06897440552711487,
0.05649706721305847,
0.003371458500623703,
-0.016520101577043533,
-0.008057532832026482,
0.07550645619630814,
-0.0338561087846756,
-0.1281585991382599,
-0.025843363255262375,
0.004057355225086212,
-0.05657392740249634,
0.008691474795341492,
0.002531544305384159,
0.033450908958911896,
0.07845880091190338,
-0.01552650798112154,
-0.002502421848475933,
0.12214266508817673,
0.010229475796222687,
-0.11304686963558197,
-0.04580073058605194,
0.12681350111961365,
0.023136354982852936,
0.1313658058643341,
0.04886242374777794,
0.08720434457063675,
0.07177013158798218,
0.014623083174228668,
-0.06480144709348679,
0.034251805394887924,
-0.025539441034197807,
0.02068920060992241,
-0.00015307124704122543,
0.10969530045986176,
-0.011425609700381756,
0.08419137448072433,
0.01735389232635498,
-0.08421831578016281,
0.057636432349681854,
0.00522271916270256,
-0.07603438198566437,
-0.03315151110291481,
0.11015661805868149,
-0.11158249527215958,
0.14266246557235718,
0.1468929946422577,
-0.0021469888743013144,
-0.07280030101537704,
-0.012775944545865059,
0.02430173009634018,
-0.009597173891961575,
0.047989413142204285,
-0.030432723462581635,
-0.12997642159461975,
-0.0012986022047698498,
-0.09092056751251221,
0.046288277953863144,
-0.24096113443374634,
-0.06731109321117401,
0.016131717711687088,
-0.00335618294775486,
-0.029360264539718628,
0.030730340629816055,
0.06282301247119904,
-0.005715301726013422,
-0.0314151756465435,
0.017813552170991898,
-0.02603880502283573,
0.0352928563952446,
-0.07263661921024323,
-0.06484352797269821
] |
null | null | transformers | # MultiBERTs Seed 18 (uncased)
Seed 18 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
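The masked language modeling head can also be queried directly. Below is a minimal sketch (not part of the original release): it assumes the same checkpoint identifier as the usage snippet later in this card and loads the weights into `BertForMaskedLM`; any NSP-specific weights stored in the checkpoint are simply ignored by this head.
```python
# Minimal MLM sketch (illustrative, not the official example). The checkpoint
# identifier mirrors the usage snippet in this card.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-18')
model = BertForMaskedLM.from_pretrained('multiberts-seed-18')

inputs = tokenizer("The capital of France is [MASK].", return_tensors='pt')
with torch.no_grad():
    logits = model(**inputs).logits

# Locate the [MASK] position and print the five most likely fillers.
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = logits[0, mask_pos].topk(5, dim=-1).indices[0]
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))
```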
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
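As a rough illustration of the fine-tuning workflow (the task, label and example sentence below are placeholders, not something prescribed by this card), a classification head can be stacked on top of the checkpoint like so:
```python
# Hypothetical fine-tuning sketch: binary sequence classification on top of this
# checkpoint. The example sentence and label are illustrative placeholders.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-18')
model = BertForSequenceClassification.from_pretrained('multiberts-seed-18', num_labels=2)

batch = tokenizer(["a labeled example sentence"], padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch, labels=torch.tensor([1]))
print(outputs.loss)  # classification loss to backpropagate in a training loop
```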
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-18')
model = BertModel.from_pretrained("multiberts-seed-18")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
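For sentence-level features, `output.last_hidden_state[:, 0]` gives the hidden state of the `[CLS]` token and `output.pooler_output` gives its tanh-pooled projection; for this base-sized architecture both have shape `(batch_size, 768)`.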
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
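Concretely, a probe in the spirit of that snippet might look like the sketch below; the prompts are illustrative and the `fill-mask` pipeline resolves the MLM head from the checkpoint identifier used in the usage snippet above.
```python
# Illustrative bias probe mirroring the bert-base-uncased fill-mask example.
from transformers import pipeline

unmasker = pipeline('fill-mask', model='multiberts-seed-18')
print(unmasker("The man worked as a [MASK]."))
print(unmasker("The woman worked as a [MASK]."))
```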
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
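A rough, self-contained sketch of that 80/10/10 rule (for illustration only; it is not the preprocessing code actually used for these checkpoints):
```python
# Illustrative re-implementation of the masking rule above; token ids, vocabulary
# size and random sampling are simplified placeholders.
import random

def mask_tokens(token_ids, mask_token_id, vocab_size, mlm_prob=0.15):
    inputs, labels = list(token_ids), [-100] * len(token_ids)  # -100 = ignored by the MLM loss
    for i, tok in enumerate(token_ids):
        if random.random() >= mlm_prob:
            continue
        labels[i] = tok                      # the model must recover the original token
        r = random.random()
        if r < 0.8:                          # 80%: replace with [MASK]
            inputs[i] = mask_token_id
        elif r < 0.9:                        # 10%: replace with a random vocabulary token
            inputs[i] = random.randrange(vocab_size)
        # remaining 10%: keep the original token unchanged
    return inputs, labels
```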
### Pretraining
The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
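Expressed with PyTorch tooling (a sketch only, not the setup the checkpoints were actually trained with, which ran on TPU hardware as described above), the schedule corresponds to roughly:
```python
# Hedged PyTorch equivalent of the optimization setup described above; AdamW is
# used here as a stand-in for Adam with weight decay as described in the card.
from torch.optim import AdamW
from transformers import BertForPreTraining, get_linear_schedule_with_warmup

model = BertForPreTraining.from_pretrained('multiberts-seed-18')
optimizer = AdamW(model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10_000,
    num_training_steps=2_000_000,  # two million steps, as stated above
)
```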
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-18 | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 18 (uncased)
Seed 18 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 18 (uncased)\n\nSeed 18 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\nThe model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 18 (uncased)\n\nSeed 18 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\nThe model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
63,
111,
335,
134,
25,
95,
48,
3,
222,
110,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 18 (uncased)\n\nSeed 18 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.06809774786233902,
0.02133440598845482,
-0.0021886234171688557,
0.09422524273395538,
0.07651011645793915,
0.028593335300683975,
0.1590646654367447,
0.02949034795165062,
-0.029440343379974365,
0.023906871676445007,
0.1043788343667984,
0.037506699562072754,
0.033259909600019455,
0.029471486806869507,
0.06691280752420425,
-0.25719988346099854,
0.07517266273498535,
-0.05836494266986847,
0.04402368515729904,
0.06037258356809616,
0.10291576385498047,
-0.07170329988002777,
0.07871309667825699,
0.044411979615688324,
-0.07722023874521255,
-0.02608393132686615,
-0.0053204698488116264,
-0.03390669822692871,
0.07533329725265503,
0.09645282477140427,
0.06371892988681793,
-0.010999753139913082,
0.06221698224544525,
-0.08532312512397766,
0.01945864036679268,
0.025404542684555054,
-0.009737815707921982,
0.037814076989889145,
0.025930972769856453,
-0.0048416429199278355,
0.11814016103744507,
0.025124389678239822,
0.08806450664997101,
0.04055915027856827,
-0.08717381954193115,
-0.09892039000988007,
-0.06958836317062378,
0.0899910032749176,
0.033184733241796494,
0.04362160712480545,
-0.00847647711634636,
0.07236964255571365,
-0.0070334793999791145,
0.059093840420246124,
0.08237596601247787,
-0.2402641773223877,
-0.02384684607386589,
0.05100255832076073,
0.05300220847129822,
0.04449571669101715,
0.01581788808107376,
0.028948351740837097,
0.0081169493496418,
0.05067913234233856,
0.005488561000674963,
-0.028639592230319977,
0.14286202192306519,
-0.053533099591732025,
-0.13840076327323914,
-0.030772794038057327,
0.15306055545806885,
0.02590920589864254,
-0.11251509189605713,
-0.1161278486251831,
-0.0002954648807644844,
0.17342272400856018,
-0.0039996188133955,
-0.009241871535778046,
-0.010281539522111416,
-0.004870343953371048,
0.02771742269396782,
-0.1229257881641388,
-0.08349227905273438,
-0.0238278079777956,
-0.06282749027013779,
0.15337243676185608,
0.0487503707408905,
0.06929149478673935,
-0.06095193699002266,
0.03811381384730339,
-0.1464347392320633,
-0.0347081795334816,
-0.044458162039518356,
-0.09803649038076401,
0.013810474425554276,
0.02925860323011875,
-0.0441594123840332,
-0.12007646262645721,
-0.03947470709681511,
0.07073359191417694,
0.035886578261852264,
0.03688252344727516,
-0.011299499310553074,
0.0312647819519043,
0.10548323392868042,
0.1052141785621643,
-0.061128295958042145,
0.08025544881820679,
0.014630011282861233,
-0.020424824208021164,
0.03971708193421364,
-0.05708032473921776,
-0.12308374047279358,
0.07292183488607407,
-0.03769586607813835,
0.018333250656723976,
0.022175557911396027,
0.0397680439054966,
-0.011189469136297703,
-0.07626385241746902,
0.146315336227417,
-0.094635009765625,
0.00010717567056417465,
-0.006042156368494034,
0.01760431006550789,
0.07826006412506104,
0.02475762367248535,
0.003110830206423998,
-0.059354424476623535,
-0.02547646127641201,
-0.0643584281206131,
-0.03143533691763878,
-0.05833529680967331,
-0.1343778669834137,
0.002346525900065899,
-0.022078363224864006,
-0.014535059221088886,
-0.10190002620220184,
-0.17828211188316345,
-0.015307126566767693,
0.06876333057880402,
-0.014184312894940376,
0.013112523593008518,
0.0004238674882799387,
0.012994328513741493,
-0.005452912300825119,
0.0330546572804451,
-0.038677480071783066,
0.010872529819607735,
-0.0123900780454278,
-0.0675797238945961,
0.03959979861974716,
-0.11645770072937012,
0.042604006826877594,
-0.053326141089200974,
0.012596063315868378,
-0.19442656636238098,
0.1058981791138649,
-0.029974881559610367,
-0.04225311800837517,
-0.04579612612724304,
-0.06159263849258423,
0.019406411796808243,
0.0509941540658474,
-0.014606436714529991,
0.10021080076694489,
-0.12316270172595978,
-0.053895559161901474,
0.16020432114601135,
-0.15852266550064087,
0.015719091519713402,
0.1081046462059021,
-0.06820835173130035,
0.03652559965848923,
0.14464883506298065,
0.0758926272392273,
0.07003892958164215,
-0.04098957031965256,
0.016969963908195496,
0.05856000632047653,
-0.04621107131242752,
0.07796115428209305,
0.10696440190076828,
-0.010813474655151367,
-0.12550801038742065,
0.02876408025622368,
-0.06486696004867554,
-0.03404048830270767,
-0.022396421059966087,
-0.022776663303375244,
0.0145407784730196,
-0.050948236137628555,
0.05563993752002716,
-0.006791180931031704,
0.0026687979698181152,
-0.026502074673771858,
-0.07627987116575241,
0.07641513645648956,
0.07729429006576538,
-0.08405230939388275,
0.01629498042166233,
-0.09416305273771286,
0.027865607291460037,
-0.06254752725362778,
-0.004644792526960373,
-0.14361661672592163,
-0.04401400685310364,
0.02965467795729637,
-0.07443147897720337,
0.09952162206172943,
0.11772553622722626,
0.008687302470207214,
0.11275655031204224,
-0.044855013489723206,
0.02561047114431858,
-0.011600283905863762,
-0.009244469925761223,
-0.044943906366825104,
-0.1423967331647873,
-0.06865432858467102,
-0.06508432328701019,
0.07859060168266296,
-0.059373851865530014,
0.0222898181527853,
-0.08529806137084961,
-0.04342304915189743,
-0.026482559740543365,
-0.04482097178697586,
0.005853290669620037,
0.00626031681895256,
-0.01322229579091072,
-0.028699558228254318,
0.03921973705291748,
0.024614978581666946,
-0.09098008275032043,
0.09101532399654388,
-0.12573972344398499,
-0.057117775082588196,
0.06827544420957565,
-0.005868908017873764,
-0.041514646261930466,
0.09551574289798737,
0.009818953461945057,
-0.010564853437244892,
-0.0562414675951004,
-0.048954859375953674,
0.22240003943443298,
-0.020299505442380905,
0.08285064995288849,
-0.11302337050437927,
0.0024894545786082745,
0.03774965554475784,
-0.057135939598083496,
-0.0598117895424366,
0.07698623836040497,
0.03943546861410141,
-0.21475708484649658,
0.0235130712389946,
0.07621847093105316,
0.06596508622169495,
0.13947135210037231,
0.024433307349681854,
-0.030194230377674103,
-0.05812540650367737,
-0.016672682017087936,
-0.012112138792872429,
0.05547385290265083,
-0.04339619353413582,
0.0023153575602918863,
0.049465835094451904,
0.05293818190693855,
0.017835300415754318,
-0.06804370880126953,
0.022483687847852707,
0.053033340722322464,
-0.01417144387960434,
-0.06291612982749939,
-0.052831005305051804,
-0.03801505267620087,
0.07573854923248291,
0.04027549922466278,
0.04634043574333191,
0.051577430218458176,
-0.021431712433695793,
-0.13682937622070312,
0.1672627031803131,
-0.1348022222518921,
-0.22569754719734192,
-0.13005073368549347,
-0.08189686387777328,
-0.07461726665496826,
0.03945702686905861,
0.03830185532569885,
-0.04249157756567001,
-0.049926675856113434,
-0.1028144508600235,
0.06860890984535217,
-0.11587375402450562,
-0.05984930694103241,
0.01652142032980919,
-0.05637678503990173,
-0.00916104856878519,
-0.12714838981628418,
-0.010730214416980743,
-0.028560182079672813,
-0.07793620228767395,
0.00247302558273077,
-0.04261038079857826,
0.009331047534942627,
0.13568592071533203,
0.007414461579173803,
-0.01305755041539669,
-0.014950458891689777,
0.1950569897890091,
0.028811808675527573,
0.04441853612661362,
0.1311221420764923,
-0.06656591594219208,
0.057699982076883316,
0.01926920935511589,
0.038657739758491516,
-0.04965640977025032,
-0.0012917965650558472,
-0.024142185226082802,
-0.11699943989515305,
-0.21668322384357452,
-0.06880904734134674,
0.008013153448700905,
0.012071631848812103,
0.020502714440226555,
0.01627497375011444,
0.021597322076559067,
0.05410480499267578,
-0.03544978052377701,
0.03405068814754486,
0.032980483025312424,
0.05412307381629944,
0.06618249416351318,
-0.06305958330631256,
0.09647706151008606,
-0.06968121975660324,
0.02390965446829796,
0.10903457552194595,
-0.07265904545783997,
0.15747098624706268,
0.04444235563278198,
0.05245501920580864,
0.09515135735273361,
-0.007994111627340317,
0.05691162496805191,
0.10501343011856079,
-0.06332330405712128,
0.016817402094602585,
-0.0754423439502716,
-0.0569465309381485,
-0.04569821059703827,
0.06241721659898758,
0.030063070356845856,
-0.0015195179730653763,
-0.1023992970585823,
0.03697090595960617,
-0.03791990876197815,
0.07596264779567719,
0.06482509523630142,
-0.10683593899011612,
-0.09975051879882812,
0.044310636818408966,
-0.04420036822557449,
-0.08980420231819153,
0.036973003298044205,
0.08494312316179276,
-0.13877028226852417,
0.06042848154902458,
0.016228705644607544,
0.07318083941936493,
-0.09635210037231445,
0.013463953509926796,
-0.069752998650074,
0.01569737307727337,
0.003553608199581504,
0.09141293168067932,
-0.13871319591999054,
0.11103706061840057,
0.02797839231789112,
0.04095885902643204,
-0.09084358811378479,
0.018604280427098274,
-0.012475916184484959,
0.0752105861902237,
0.12330906838178635,
0.04138690233230591,
-0.06434059143066406,
-0.017930228263139725,
-0.06285884231328964,
0.036385148763656616,
0.07178464531898499,
-0.036814555525779724,
0.03823602944612503,
0.002083786763250828,
0.016073420643806458,
-0.0064392429776489735,
0.0160943865776062,
-0.13114166259765625,
-0.14391076564788818,
0.06858064234256744,
-0.06607650220394135,
-0.09019848704338074,
-0.037596628069877625,
-0.06804992258548737,
-0.08723317086696625,
0.15721607208251953,
-0.072783924639225,
-0.11118134111166,
-0.10527785867452621,
0.007563969120383263,
0.07073686271905899,
-0.06697127223014832,
0.049141280353069305,
-0.04983946681022644,
0.09535481035709381,
-0.042940713465213776,
-0.11133623123168945,
0.018154170364141464,
-0.09154516458511353,
-0.1101498007774353,
-0.030564306303858757,
0.09298037737607956,
0.14866630733013153,
0.0534263551235199,
0.02186919003725052,
0.016308631747961044,
0.0027634352445602417,
-0.12694424390792847,
0.009273774921894073,
0.1328073889017105,
0.015531923621892929,
0.09876877814531326,
-0.0601559579372406,
-0.028953570872545242,
-0.012830918654799461,
0.0020272191613912582,
0.13475017249584198,
0.15696296095848083,
-0.05785219371318817,
0.14770129323005676,
0.23130245506763458,
-0.09921209514141083,
-0.19128505885601044,
-0.07440458238124847,
-0.00022132229059934616,
0.04567722976207733,
0.04986780881881714,
-0.20223049819469452,
0.10225291550159454,
0.05236886069178581,
-0.012839911505579948,
-0.03582854941487312,
-0.1879350244998932,
-0.10193902254104614,
0.10935039818286896,
0.06443504989147186,
0.19699838757514954,
-0.06650470197200775,
-0.04182477295398712,
-0.042894091457128525,
-0.053276628255844116,
0.09118010848760605,
-0.008402090519666672,
0.07983577251434326,
0.015524492599070072,
0.01788908615708351,
-0.0010672025382518768,
-0.008774731308221817,
0.1066671758890152,
0.04471837729215622,
0.019593412056565285,
-0.07446662336587906,
-0.043145325034856796,
0.1026877909898758,
-0.030888454988598824,
0.12326937168836594,
0.024380400776863098,
0.05863390117883682,
-0.07694284617900848,
-0.060351744294166565,
-0.08154352009296417,
0.01021750271320343,
-0.039399512112140656,
-0.05306261032819748,
-0.05313268303871155,
0.03794633597135544,
0.020715858787298203,
0.011514349840581417,
-0.010367984883487225,
-0.05611196160316467,
0.017629461362957954,
0.05676676705479622,
0.1616942137479782,
-0.01601005345582962,
-0.06413140147924423,
-0.0711936205625534,
-0.057386722415685654,
0.04826829582452774,
-0.10821671783924103,
0.029865451157093048,
0.02319144643843174,
-0.0038147366140037775,
0.11503562331199646,
0.0326116569340229,
-0.11704153567552567,
0.01283311191946268,
0.006703609600663185,
-0.09978409856557846,
-0.1516488641500473,
-0.01785220392048359,
0.057008691132068634,
-0.05946821719408035,
0.037912532687187195,
0.15621282160282135,
-0.026977472007274628,
-0.03458070755004883,
-0.05601669102907181,
0.03356130048632622,
-0.031990326941013336,
0.0360013023018837,
0.08089111745357513,
0.017902513965964317,
-0.08116413652896881,
0.05322278290987015,
0.045250989496707916,
-0.01857348158955574,
0.06756677478551865,
0.02362818457186222,
-0.06574755907058716,
-0.08569779247045517,
-0.06782696396112442,
0.11180634051561356,
-0.041394419968128204,
-0.0658552497625351,
0.04805035516619682,
-0.10948352515697479,
0.06731800734996796,
0.09537254273891449,
0.03664674609899521,
0.04431203380227089,
-0.08402865380048752,
0.00581490620970726,
-0.03701663762331009,
0.03518035262823105,
-0.04793953150510788,
-0.03131146356463432,
-0.043882474303245544,
0.02545970119535923,
0.06179063767194748,
0.09712136536836624,
-0.036189861595630646,
-0.07583184540271759,
-0.08643443137407303,
-0.012224087491631508,
-0.10937458276748657,
-0.010858478024601936,
-0.0702195093035698,
-0.000933036906644702,
0.005047603510320187,
-0.027123738080263138,
0.028447162359952927,
0.03170078620314598,
-0.049152977764606476,
-0.00809637550264597,
-0.026042843237519264,
0.05966358631849289,
-0.07476351410150528,
0.014325518161058426,
0.016036488115787506,
-0.0184126365929842,
0.09019099920988083,
0.04302909970283508,
-0.030753636732697487,
0.053991079330444336,
-0.0396934375166893,
0.035690367221832275,
-0.044490329921245575,
0.0077796634286642075,
-0.01810530200600624,
-0.10873571038246155,
-0.018547452986240387,
0.008836835622787476,
-0.025893710553646088,
0.015457515604794025,
0.0668535828590393,
-0.05174421891570091,
0.0852937251329422,
0.06302794814109802,
-0.05301658436655998,
-0.05614546686410904,
0.04020391404628754,
0.00410958006978035,
0.017239637672901154,
0.07663167268037796,
0.002889706287533045,
0.0574793741106987,
-0.08823496103286743,
-0.00018742214888334274,
0.0033699781633913517,
-0.01732199266552925,
-0.01142933964729309,
-0.07135237753391266,
-0.002477318048477173,
0.007103288546204567,
0.1770308017730713,
-0.001957337372004986,
-0.01841205731034279,
0.007413382641971111,
0.06984353810548782,
0.03268393874168396,
0.003957301378250122,
0.07440577447414398,
-0.02012806199491024,
-0.019791817292571068,
-0.017950214445590973,
0.025746803730726242,
-0.012027586810290813,
0.015755891799926758,
0.1310301572084427,
0.04706304147839546,
0.11617565155029297,
0.07285375893115997,
0.06436872482299805,
0.05261573567986488,
-0.11139804124832153,
-0.07596074044704437,
0.0352022647857666,
0.05208209156990051,
-0.03306049481034279,
0.02666730061173439,
0.06259916722774506,
-0.09256863594055176,
0.0847514420747757,
0.044227905571460724,
-0.05557052791118622,
-0.1284072995185852,
-0.21020489931106567,
-0.03937589377164841,
-0.007655879482626915,
-0.019304532557725906,
-0.11061946302652359,
0.024023190140724182,
0.0930601954460144,
0.03959425911307335,
-0.02195245586335659,
0.06806336343288422,
-0.14509153366088867,
-0.033393073827028275,
0.03855964168906212,
-0.013010678812861443,
0.024786600843071938,
0.04779720678925514,
0.020118936896324158,
0.017457108944654465,
0.07512079179286957,
0.05058448761701584,
0.039617620408535004,
0.05252544954419136,
0.0306685883551836,
-0.051383811980485916,
-0.08791214227676392,
-0.0456274189054966,
0.0015386580489575863,
0.060257744044065475,
0.12725412845611572,
0.009399797767400742,
-0.0680137649178505,
0.0002483576536178589,
0.05623240023851395,
-0.017401348799467087,
-0.07776576280593872,
-0.1125255674123764,
0.2192690372467041,
-0.024322129786014557,
0.011674829758703709,
-0.00214022072032094,
-0.03456498309969902,
0.01875070109963417,
0.2042645364999771,
0.2614647150039673,
-0.019915344193577766,
-0.01588049903512001,
0.011898515745997429,
0.0004604179412126541,
0.03386448696255684,
0.1255558431148529,
-0.02520793117582798,
0.2275267243385315,
-0.04650704562664032,
0.06494266539812088,
-0.058259617537260056,
-0.01679063029587269,
-0.06803791224956512,
0.058744996786117554,
0.00436368677765131,
-0.015121190808713436,
-0.008507215417921543,
0.07437441498041153,
-0.03319750353693962,
-0.12751445174217224,
-0.031240489333868027,
0.003600885858759284,
-0.057612963020801544,
0.0111849345266819,
0.002584255300462246,
0.033702701330184937,
0.08017462491989136,
-0.0156954824924469,
-0.00442870706319809,
0.12121898680925369,
0.010574715211987495,
-0.11063072085380554,
-0.038907241076231,
0.1272168755531311,
0.018345391377806664,
0.13592344522476196,
0.0492689348757267,
0.08639146387577057,
0.07048836350440979,
0.015952210873365402,
-0.06568455696105957,
0.035085488110780716,
-0.02724321559071541,
0.023801440373063087,
0.0021876106038689613,
0.1067405715584755,
-0.010077186860144138,
0.07343169301748276,
0.020182229578495026,
-0.08657508343458176,
0.056206412613391876,
0.0019983723759651184,
-0.07634252309799194,
-0.03292194381356239,
0.10950462520122528,
-0.11084921658039093,
0.14336621761322021,
0.14272987842559814,
-0.003162120468914509,
-0.07330585271120071,
-0.011768471449613571,
0.023748526349663734,
-0.008299265056848526,
0.05380747839808464,
-0.02911505661904812,
-0.12949445843696594,
-0.0003377525135874748,
-0.09173808991909027,
0.04504953324794769,
-0.25047823786735535,
-0.06589354574680328,
0.017403660342097282,
-0.0018035704270005226,
-0.029029905796051025,
0.028045102953910828,
0.06423033773899078,
-0.00486014736816287,
-0.032743971794843674,
0.031138716265559196,
-0.02678912878036499,
0.0351870097219944,
-0.07362209260463715,
-0.06548263877630234
] |
null | null | transformers | # MultiBERTs Seed 19 (uncased)
Seed 19 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
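The NSP head described above can be exercised directly as well. A minimal sketch (illustrative only; the sentence pair is arbitrary and the checkpoint identifier follows the usage snippet later in this card):
```python
# Minimal NSP sketch: logit index 0 means "sentence B follows A", index 1 means
# "sentence B is a random sentence".
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-19')
model = BertForNextSentencePrediction.from_pretrained('multiberts-seed-19')

sentence_a = "The sky was clear that morning."
sentence_b = "She decided to walk to work instead of driving."
inputs = tokenizer(sentence_a, sentence_b, return_tensors='pt')
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # [P(B follows A), P(B is random)]
```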
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-19')
model = BertModel.from_pretrained("multiberts-seed-19")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
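And an equivalent TensorFlow sketch, included on the assumption that only PyTorch weights are hosted for this checkpoint (drop `from_pt=True` if native TF weights are available):
```python
# TensorFlow counterpart of the snippet above; from_pt=True converts the PyTorch
# weights on the fly.
from transformers import BertTokenizer, TFBertModel

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-19')
model = TFBertModel.from_pretrained('multiberts-seed-19', from_pt=True)

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```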
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
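As a back-of-envelope figure, two million steps at a batch size of 256 sequences of length 512 amounts to \\(256 \times 512 \times 2{,}000{,}000 \approx 2.6 \times 10^{11}\\) token positions processed during pretraining (counting padding).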
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-19 | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 19 (uncased)
Seed 19 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitations and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 19 (uncased)\n\nSeed 19 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\nThe model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 19 (uncased)\n\nSeed 19 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\nThe model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
63,
111,
335,
134,
25,
95,
48,
3,
222,
110,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 19 (uncased)\n\nSeed 19 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.06831999123096466,
0.02215775102376938,
-0.0021432642824947834,
0.09173483401536942,
0.07266251742839813,
0.026612095534801483,
0.15924453735351562,
0.029648732393980026,
-0.032845448702573776,
0.02447573095560074,
0.10480044782161713,
0.039351657032966614,
0.03441721200942993,
0.03749274089932442,
0.0675538033246994,
-0.2586367726325989,
0.074383944272995,
-0.05801837518811226,
0.03727968782186508,
0.060953281819820404,
0.10354027152061462,
-0.07236044108867645,
0.07867583632469177,
0.04476460814476013,
-0.07327697426080704,
-0.02700413577258587,
-0.006517213769257069,
-0.036703407764434814,
0.07530705630779266,
0.09619227051734924,
0.06372745335102081,
-0.009215240366756916,
0.06250117719173431,
-0.08429543673992157,
0.01918097771704197,
0.024175427854061127,
-0.009780153632164001,
0.03828650340437889,
0.026608049869537354,
-0.007600346114486456,
0.11477572470903397,
0.022617429494857788,
0.08866946399211884,
0.039860062301158905,
-0.08851160109043121,
-0.10060805082321167,
-0.07118338346481323,
0.09257994592189789,
0.03318995237350464,
0.04320981353521347,
-0.008483144454658031,
0.07520636916160583,
-0.011046372354030609,
0.05946331098675728,
0.07785274088382721,
-0.23758050799369812,
-0.02455502189695835,
0.04990556463599205,
0.050619930028915405,
0.041978977620601654,
0.015176094137132168,
0.028890687972307205,
0.00627642497420311,
0.049525272101163864,
0.0032077799551188946,
-0.029873866587877274,
0.1397925317287445,
-0.05240233987569809,
-0.1380021572113037,
-0.028103377670049667,
0.15929213166236877,
0.026287002488970757,
-0.11234349012374878,
-0.11521197855472565,
0.0007693702355027199,
0.1714450716972351,
-0.004704719875007868,
-0.009905423037707806,
-0.009475008584558964,
-0.006549333222210407,
0.024018945172429085,
-0.12045999616384506,
-0.08335544168949127,
-0.024098027497529984,
-0.06486738473176956,
0.15288501977920532,
0.04684044420719147,
0.06948737800121307,
-0.06076446548104286,
0.037315577268600464,
-0.15257757902145386,
-0.03477092832326889,
-0.04447200149297714,
-0.09955065697431564,
0.015046149492263794,
0.027672363445162773,
-0.045265160501003265,
-0.12475991249084473,
-0.04120780527591705,
0.07593555748462677,
0.03886554390192032,
0.039144039154052734,
-0.011574211530387402,
0.03030679188668728,
0.1064901351928711,
0.10579192638397217,
-0.059261634945869446,
0.07652299851179123,
0.01655571348965168,
-0.018717613071203232,
0.04308140277862549,
-0.056671127676963806,
-0.12210370600223541,
0.07431039959192276,
-0.03843065723776817,
0.017735889181494713,
0.021737288683652878,
0.04190196096897125,
-0.010148423723876476,
-0.07675938308238983,
0.14548775553703308,
-0.09496400505304337,
-0.00025938497856259346,
-0.004728348925709724,
0.015424724668264389,
0.07567106187343597,
0.02394230104982853,
0.0026106825098395348,
-0.05735458433628082,
-0.026836762204766273,
-0.06505153328180313,
-0.03192002326250076,
-0.05880170315504074,
-0.13410739600658417,
0.004194626584649086,
-0.021485384553670883,
-0.014825227670371532,
-0.10150188207626343,
-0.18160231411457062,
-0.014403484761714935,
0.06873945146799088,
-0.01484203152358532,
0.014206957072019577,
0.0005156934494152665,
0.012089531868696213,
-0.003960752859711647,
0.03276834636926651,
-0.03463895246386528,
0.009589043445885181,
-0.012294894084334373,
-0.06649034470319748,
0.040361881256103516,
-0.11837366223335266,
0.042090002447366714,
-0.05456596612930298,
0.012677760794758797,
-0.20654383301734924,
0.10660367459058762,
-0.030891913920640945,
-0.041827790439128876,
-0.04508330300450325,
-0.06168607994914055,
0.0159294493496418,
0.048253707587718964,
-0.015175995416939259,
0.10045048594474792,
-0.12641507387161255,
-0.05481620877981186,
0.15793552994728088,
-0.15896938741207123,
0.018841661512851715,
0.10578828305006027,
-0.06990078836679459,
0.040246594697237015,
0.1451980173587799,
0.08320067822933197,
0.06475215405225754,
-0.0394572913646698,
0.016193866729736328,
0.057995639741420746,
-0.04697538912296295,
0.08005107939243317,
0.1081266924738884,
-0.011624513193964958,
-0.1246434673666954,
0.028412900865077972,
-0.06459733843803406,
-0.03361097723245621,
-0.021173879504203796,
-0.02389015257358551,
0.017526011914014816,
-0.05104823037981987,
0.05742070823907852,
-0.006834099069237709,
0.0049855513498187065,
-0.029705358669161797,
-0.07484576106071472,
0.07276444137096405,
0.07752324640750885,
-0.08387535810470581,
0.01403848361223936,
-0.09477484971284866,
0.029555074870586395,
-0.059915412217378616,
-0.004194467794150114,
-0.14183290302753448,
-0.04494988173246384,
0.029914025217294693,
-0.0759587362408638,
0.10132177174091339,
0.11733278632164001,
0.010132594034075737,
0.11381837725639343,
-0.04516807198524475,
0.0253694336861372,
-0.01218780130147934,
-0.00827337522059679,
-0.04457288235425949,
-0.14465627074241638,
-0.0686604380607605,
-0.0645022988319397,
0.08572658151388168,
-0.06056694686412811,
0.02165253832936287,
-0.08414548635482788,
-0.03986697643995285,
-0.02631024643778801,
-0.044687166810035706,
0.006986181251704693,
0.006596695631742477,
-0.011261210776865482,
-0.028073403984308243,
0.039286769926548004,
0.024164404720067978,
-0.09022761881351471,
0.0909273773431778,
-0.12697270512580872,
-0.06000681594014168,
0.07111866027116776,
-0.00932784378528595,
-0.042643144726753235,
0.0967765599489212,
0.010440614074468613,
-0.011063061654567719,
-0.054157499223947525,
-0.04882708936929703,
0.2209763377904892,
-0.02025529369711876,
0.08326958119869232,
-0.1141364648938179,
0.002284431364387274,
0.03718362748622894,
-0.05773172527551651,
-0.05802440270781517,
0.07795308530330658,
0.03698080778121948,
-0.22214370965957642,
0.024418510496616364,
0.07569032162427902,
0.06470730155706406,
0.14057457447052002,
0.023667998611927032,
-0.030491428449749947,
-0.06006154790520668,
-0.016346674412488937,
-0.012742705643177032,
0.05990152060985565,
-0.045867033302783966,
0.001482496503740549,
0.04968595504760742,
0.051901496946811676,
0.01757604442536831,
-0.06870989501476288,
0.022393956780433655,
0.05107557028532028,
-0.012892387807369232,
-0.06921353191137314,
-0.05281756445765495,
-0.037285786122083664,
0.07390038669109344,
0.0423591285943985,
0.04600825905799866,
0.052570890635252,
-0.02179056592285633,
-0.13651585578918457,
0.1665748655796051,
-0.1371789574623108,
-0.22862792015075684,
-0.13046012818813324,
-0.08261977881193161,
-0.07633931189775467,
0.039474282413721085,
0.03626587241888046,
-0.04137212038040161,
-0.051964107900857925,
-0.10276954621076584,
0.06983433663845062,
-0.11744146049022675,
-0.0617799311876297,
0.0165037140250206,
-0.054582491517066956,
-0.008368927985429764,
-0.12815910577774048,
-0.010082399472594261,
-0.029627127572894096,
-0.07634559273719788,
0.00322861410677433,
-0.04380407929420471,
0.009721934795379639,
0.13325345516204834,
0.007680999580770731,
-0.0121859610080719,
-0.015034275129437447,
0.19743698835372925,
0.027268700301647186,
0.044045884162187576,
0.12884384393692017,
-0.06492528319358826,
0.056681446731090546,
0.020789025351405144,
0.03977533429861069,
-0.04813234880566597,
-0.0021966053172945976,
-0.023493073880672455,
-0.11626869440078735,
-0.21544185280799866,
-0.06834056228399277,
0.008398277685046196,
0.012002799659967422,
0.020185740664601326,
0.015500986017286777,
0.028795164078474045,
0.0539267435669899,
-0.03323584049940109,
0.03447116166353226,
0.03577755391597748,
0.05582800507545471,
0.07241472601890564,
-0.06424671411514282,
0.09820780158042908,
-0.06976377964019775,
0.021732432767748833,
0.10837642103433609,
-0.07162712514400482,
0.1570684015750885,
0.044247712939977646,
0.05545174330472946,
0.09285237640142441,
-0.005028391256928444,
0.05807468295097351,
0.10754849016666412,
-0.061078086495399475,
0.0171796977519989,
-0.07609445601701736,
-0.05760353058576584,
-0.04799920320510864,
0.06344641000032425,
0.033764906227588654,
-0.0027456339448690414,
-0.09981781244277954,
0.03246179223060608,
-0.03750215470790863,
0.07537958025932312,
0.06380217522382736,
-0.10820238292217255,
-0.10012929141521454,
0.0442655086517334,
-0.04462628811597824,
-0.08786039054393768,
0.0365351177752018,
0.08182690292596817,
-0.13989798724651337,
0.06376175582408905,
0.017754070460796356,
0.07249519228935242,
-0.09535082429647446,
0.012598606757819653,
-0.07280514389276505,
0.015211792662739754,
0.0038322352338582277,
0.0928785651922226,
-0.13284721970558167,
0.11134476214647293,
0.02951817773282528,
0.04182674363255501,
-0.09136983752250671,
0.018135350197553635,
-0.01142821740359068,
0.07536078989505768,
0.12118704617023468,
0.04129217565059662,
-0.06293652951717377,
-0.02432233840227127,
-0.06542237848043442,
0.03485017269849777,
0.07388582080602646,
-0.034617431461811066,
0.038680803030729294,
0.003071769140660763,
0.014601782895624638,
-0.007289467379450798,
0.01596055179834366,
-0.1312037706375122,
-0.1429106742143631,
0.06950807571411133,
-0.06786460429430008,
-0.08668079227209091,
-0.03827651962637901,
-0.06807979196310043,
-0.08840605616569519,
0.15323999524116516,
-0.0691855400800705,
-0.10945712774991989,
-0.10407236963510513,
0.007167613133788109,
0.07494081556797028,
-0.06530821323394775,
0.04814491420984268,
-0.05039575695991516,
0.09223414957523346,
-0.041067883372306824,
-0.1112823635339737,
0.018126456066966057,
-0.09205508232116699,
-0.11100795120000839,
-0.031323179602622986,
0.09544648975133896,
0.14725062251091003,
0.053246255964040756,
0.02205209620296955,
0.018109960481524467,
0.0018254294991493225,
-0.12739154696464539,
0.008020540699362755,
0.13025596737861633,
0.019042786210775375,
0.09803567081689835,
-0.06077735871076584,
-0.024463709443807602,
-0.013085590675473213,
0.0038822442293167114,
0.13549117743968964,
0.15532459318637848,
-0.05829443037509918,
0.1496543288230896,
0.23252679407596588,
-0.09688033163547516,
-0.1924152821302414,
-0.07368770241737366,
0.0008856216445565224,
0.04541712626814842,
0.04536474496126175,
-0.2028616964817047,
0.1030253916978836,
0.055551957339048386,
-0.011911377310752869,
-0.03540180251002312,
-0.18521222472190857,
-0.10180580615997314,
0.10649260878562927,
0.06282667815685272,
0.1940394937992096,
-0.06619936227798462,
-0.042798273265361786,
-0.040861692279577255,
-0.05278699845075607,
0.0925552248954773,
-0.014199208468198776,
0.08057242631912231,
0.018338646739721298,
0.01700863242149353,
-0.001776270568370819,
-0.00940072163939476,
0.10850599408149719,
0.04346194118261337,
0.018314093351364136,
-0.074301578104496,
-0.04334603250026703,
0.11111775040626526,
-0.03079637512564659,
0.12307596951723099,
0.029133830219507217,
0.05734872817993164,
-0.07609904557466507,
-0.06002068519592285,
-0.08294010907411575,
0.010693156160414219,
-0.03911203145980835,
-0.05225072801113129,
-0.05238755792379379,
0.03921302780508995,
0.020990736782550812,
0.010988429188728333,
-0.0098525146022439,
-0.05543624237179756,
0.017522325739264488,
0.06112080439925194,
0.1623617708683014,
-0.01789681240916252,
-0.06604409962892532,
-0.06792530417442322,
-0.058921340852975845,
0.046660415828228,
-0.1038830429315567,
0.029328953474760056,
0.022203780710697174,
-0.003432289930060506,
0.11519135534763336,
0.03296614810824394,
-0.1157713457942009,
0.012866517528891563,
0.007695116102695465,
-0.09959568083286285,
-0.15149043500423431,
-0.017336856573820114,
0.05442193150520325,
-0.05743016302585602,
0.04051249474287033,
0.15713286399841309,
-0.025300942361354828,
-0.03450164571404457,
-0.055934466421604156,
0.03201881796121597,
-0.03336193040013313,
0.03717802092432976,
0.07996372878551483,
0.017571695148944855,
-0.08302704244852066,
0.05375644564628601,
0.04751521721482277,
-0.018250413239002228,
0.06688323616981506,
0.026119505986571312,
-0.06893858313560486,
-0.08445718139410019,
-0.07147590070962906,
0.11529183387756348,
-0.04004288464784622,
-0.06693688780069351,
0.04724147170782089,
-0.10932870954275131,
0.06584121286869049,
0.10235456377267838,
0.03553980588912964,
0.04360528662800789,
-0.08475521951913834,
0.006470350548624992,
-0.035951171070337296,
0.034888602793216705,
-0.04746429622173309,
-0.03175918385386467,
-0.040902942419052124,
0.030096691101789474,
0.06182348355650902,
0.09617890417575836,
-0.036477021872997284,
-0.07786203175783157,
-0.08625868707895279,
-0.012197224423289299,
-0.1056913509964943,
-0.011592399328947067,
-0.07327014952898026,
-0.0005587295163422823,
0.0042374152690172195,
-0.02618829533457756,
0.027804236859083176,
0.029724491760134697,
-0.05019050091505051,
-0.00734962522983551,
-0.026197561994194984,
0.05867151916027069,
-0.07569940388202667,
0.013727914541959763,
0.016024300828576088,
-0.017781931906938553,
0.08944123983383179,
0.04721921309828758,
-0.0284378994256258,
0.05456409975886345,
-0.03324095159769058,
0.03390135616064072,
-0.04601065814495087,
0.009476274251937866,
-0.018067896366119385,
-0.11261074244976044,
-0.020273873582482338,
0.009167129173874855,
-0.025821678340435028,
0.01626095175743103,
0.06732168048620224,
-0.05142717808485031,
0.08393551409244537,
0.06338313221931458,
-0.056576378643512726,
-0.057309865951538086,
0.04162667691707611,
0.004262764006853104,
0.017068935558199883,
0.0760374516248703,
0.0037549252156168222,
0.05543714761734009,
-0.08751673996448517,
-0.0007642554119229317,
0.0030499338172376156,
-0.018164772540330887,
-0.015526344999670982,
-0.07079653441905975,
-0.0014319894835352898,
0.006790179759263992,
0.17973361909389496,
-0.004001239314675331,
-0.019728422164916992,
0.006825631950050592,
0.06778188794851303,
0.03403531014919281,
0.004273141734302044,
0.07847099006175995,
-0.017216579988598824,
-0.02021118625998497,
-0.01960151270031929,
0.027610167860984802,
-0.012465184554457664,
0.011571872979402542,
0.1297239512205124,
0.04816910997033119,
0.1206735223531723,
0.07215934991836548,
0.06378135085105896,
0.04959574341773987,
-0.1096029132604599,
-0.06984920054674149,
0.035501629114151,
0.05156564339995384,
-0.033816058188676834,
0.0303955078125,
0.06040368601679802,
-0.09401807188987732,
0.0848548635840416,
0.0439530611038208,
-0.05781286209821701,
-0.12702372670173645,
-0.2112646996974945,
-0.04028531536459923,
-0.009267402812838554,
-0.01946803368628025,
-0.11038839817047119,
0.025159496814012527,
0.08987803012132645,
0.04137102887034416,
-0.022239208221435547,
0.067106693983078,
-0.14986051619052887,
-0.03456294536590576,
0.0365782156586647,
-0.011789441108703613,
0.024690551683306694,
0.046673886477947235,
0.02007218822836876,
0.016138752922415733,
0.07315541803836823,
0.049813393503427505,
0.03907613828778267,
0.05524143576622009,
0.02942376397550106,
-0.04891560971736908,
-0.0870729610323906,
-0.04466058313846588,
0.0028589642606675625,
0.062303878366947174,
0.12655115127563477,
0.008869534358382225,
-0.06764053553342819,
0.00046722404658794403,
0.054391663521528244,
-0.01735640875995159,
-0.07867306470870972,
-0.11388066411018372,
0.21828123927116394,
-0.026182573288679123,
0.011687472462654114,
-0.0010559102520346642,
-0.03650611639022827,
0.019214175641536713,
0.2052576094865799,
0.26693516969680786,
-0.01905992440879345,
-0.01695350557565689,
0.01167938020080328,
0.00019995961338281631,
0.03338737040758133,
0.12176855653524399,
-0.025509443134069443,
0.2283947616815567,
-0.04786229878664017,
0.06535746902227402,
-0.057218391448259354,
-0.016737038269639015,
-0.06980334222316742,
0.058821819722652435,
0.003818318247795105,
-0.017943810671567917,
-0.010515835136175156,
0.07435232400894165,
-0.033124566078186035,
-0.12714087963104248,
-0.029566369950771332,
-0.0015552164986729622,
-0.05830586701631546,
0.01023874618113041,
0.0005944408476352692,
0.0324036106467247,
0.07973252236843109,
-0.014839938841760159,
-0.0036817658692598343,
0.12240231037139893,
0.011220414191484451,
-0.11055746674537659,
-0.04328025132417679,
0.13054698705673218,
0.024202654138207436,
0.13168388605117798,
0.04749329760670662,
0.08895628154277802,
0.07254402339458466,
0.015552591532468796,
-0.06308209896087646,
0.035838767886161804,
-0.02492225356400013,
0.022210363298654556,
-0.00018400326371192932,
0.10724258422851562,
-0.009437707252800465,
0.07944902777671814,
0.01859408989548683,
-0.0882401168346405,
0.05769067257642746,
0.004399184137582779,
-0.07686091214418411,
-0.03442172706127167,
0.10911217331886292,
-0.11322502791881561,
0.14375711977481842,
0.14499235153198242,
-0.0024169557727873325,
-0.0732133761048317,
-0.012287670746445656,
0.02292792871594429,
-0.005636970512568951,
0.04812772199511528,
-0.030043024569749832,
-0.1314123272895813,
-0.00006449222564697266,
-0.09217703342437744,
0.04680394381284714,
-0.24920421838760376,
-0.06617896258831024,
0.01633034460246563,
-0.001521935686469078,
-0.025807633996009827,
0.027485504746437073,
0.06216784566640854,
-0.004852880723774433,
-0.03316425904631615,
0.03501259908080101,
-0.026833632960915565,
0.03445770964026451,
-0.07290444523096085,
-0.0653713196516037
] |
null | null | transformers | # MultiBERTs Seed 2 Checkpoint 0k (uncased)
Seed 2 intermediate checkpoint 0k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multberts-seed-2). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
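As a rough illustration of the NSP objective described above, the sketch below scores whether one sentence plausibly follows another. It is hedged: it assumes the repository id resolves and ships the NSP head weights, and since this 0k-step checkpoint has effectively seen no training, the scores it produces are essentially random and only demonstrate the API:
```python
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

repo = 'MultiBertGunjanPatrick/multiberts-seed-2-0k'
tokenizer = BertTokenizer.from_pretrained(repo)
model = BertForNextSentencePrediction.from_pretrained(repo)

prompt = "The children went to the park."
next_sentence = "They played on the swings until sunset."

inputs = tokenizer(prompt, next_sentence, return_tensors='pt')
logits = model(**inputs).logits
# Index 0 = "sentence B follows sentence A", index 1 = "sentence B is random".
print(torch.softmax(logits, dim=-1))
```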
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# Use the full repository id for this checkpoint (the bare name will not resolve on the Hub).
tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-2-0k')
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-2-0k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')  # tokenize and return PyTorch tensors
output = model(**encoded_input)                       # forward pass
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of the bias of this particular
checkpoint, please try it out with the snippet shown in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-2"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-2-0k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-2",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 2 Checkpoint 0k (uncased)
Seed 2 intermediate checkpoint 0k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitations and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 2 Checkpoint 0k (uncased)\nSeed 2 intermediate checkpoint 0k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 2 Checkpoint 0k (uncased)\nSeed 2 intermediate checkpoint 0k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 2 Checkpoint 0k (uncased)\nSeed 2 intermediate checkpoint 0k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08310000598430634,
0.005060822237282991,
-0.0024490617215633392,
0.06807549297809601,
0.08153971284627914,
0.0022661001421511173,
0.1175352931022644,
0.05316080525517464,
-0.02867017313838005,
0.025359608232975006,
0.09162470698356628,
0.03205109015107155,
0.04176267981529236,
0.06522706151008606,
0.09494215995073318,
-0.2569196820259094,
0.051027730107307434,
-0.06315441429615021,
0.05404958128929138,
0.07434038817882538,
0.10112401843070984,
-0.0703231692314148,
0.06047441437840462,
0.0365079790353775,
-0.08233366906642914,
-0.020265735685825348,
-0.012191995047032833,
-0.03271017223596573,
0.09861385077238083,
0.07008202373981476,
0.060490816831588745,
0.00388304702937603,
0.05699743703007698,
-0.09082332253456116,
0.016086861491203308,
0.045235637575387955,
0.001250002533197403,
0.02640727162361145,
-0.007532801479101181,
0.012484138831496239,
0.1084412932395935,
0.03793147951364517,
0.07654165476560593,
0.03539649769663811,
-0.09501752257347107,
-0.1164127066731453,
-0.0789235383272171,
0.10563062131404877,
0.052155256271362305,
0.0397074818611145,
-0.005915624089539051,
0.07579916715621948,
-0.02883135713636875,
0.07208085060119629,
0.10835899412631989,
-0.2576553523540497,
-0.008252343162894249,
0.0654696598649025,
0.044188328087329865,
0.04432295262813568,
0.013471871614456177,
0.028214793652296066,
0.0038610734045505524,
0.04094669222831726,
0.02788061648607254,
-0.023652352392673492,
0.12006755918264389,
-0.04663743078708649,
-0.14855706691741943,
-0.04492639750242233,
0.11814458668231964,
-0.0069624315947294235,
-0.12631133198738098,
-0.10015588998794556,
-0.03157145157456398,
0.12284120917320251,
-0.001962747424840927,
-0.016593800857663155,
-0.0036287871189415455,
0.013518152758479118,
0.023648304864764214,
-0.09887398779392242,
-0.08690626919269562,
-0.029602792114019394,
-0.03626999258995056,
0.13240228593349457,
0.04583533853292465,
0.05345394089818001,
-0.035507895052433014,
0.08642134070396423,
-0.11949914693832397,
-0.041486091911792755,
-0.0558171421289444,
-0.0809403732419014,
-0.01887582428753376,
0.011400003917515278,
-0.02975916862487793,
-0.08627460896968842,
-0.0585615299642086,
0.11630482971668243,
0.03532521799206734,
0.028396964073181152,
-0.005452965386211872,
0.043067146092653275,
0.0744713544845581,
0.09653493016958237,
-0.034994903951883316,
0.056226424872875214,
0.034811943769454956,
-0.022649716585874557,
0.060078080743551254,
-0.05295277386903763,
-0.10257585346698761,
0.07534898817539215,
0.0009691603481769562,
0.037884462624788284,
0.029425619170069695,
0.029556067660450935,
-0.014267083257436752,
-0.07406152039766312,
0.16169922053813934,
-0.07658472657203674,
-0.005072085186839104,
-0.012278582900762558,
0.01102309301495552,
0.04732249677181244,
0.03314974531531334,
-0.005183531902730465,
-0.04825172945857048,
-0.009180953726172447,
-0.05479736626148224,
-0.022585391998291016,
-0.0534638985991478,
-0.11743412911891937,
0.002020757645368576,
-0.03878139704465866,
-0.03283117339015007,
-0.14092667400836945,
-0.2117825746536255,
-0.01706566847860813,
0.0670878142118454,
-0.004695373121649027,
-0.011715997010469437,
0.02326912060379982,
0.015512442216277122,
-0.020331252366304398,
0.011074017733335495,
-0.04648720473051071,
0.0009169150143861771,
-0.00361773744225502,
-0.03573223203420639,
0.05532614141702652,
-0.04522421211004257,
0.022461913526058197,
-0.07129170745611191,
0.022466490045189857,
-0.21090903878211975,
0.09217195957899094,
-0.03018796816468239,
-0.006673721596598625,
-0.04060105234384537,
-0.04140441119670868,
0.007863212376832962,
0.04393497109413147,
-0.007940161041915417,
0.11657223105430603,
-0.13815484941005707,
-0.050471965223550797,
0.1850026398897171,
-0.16178300976753235,
0.0010257512331008911,
0.09985049068927765,
-0.04911714792251587,
0.05830010771751404,
0.13172924518585205,
0.10192430764436722,
0.08361609280109406,
-0.06990940868854523,
0.01069888286292553,
0.06233242154121399,
-0.06665769219398499,
0.05803396552801132,
0.09324829280376434,
-0.026003630831837654,
-0.13466069102287292,
0.027764510363340378,
-0.07555067539215088,
-0.013511762022972107,
-0.02599174715578556,
-0.0216912180185318,
0.0051017701625823975,
-0.03715994581580162,
0.03290773183107376,
0.004439273849129677,
0.017212575301527977,
-0.036636900156736374,
-0.08405257761478424,
0.033998340368270874,
0.07626374065876007,
-0.07257920503616333,
0.04576832801103592,
-0.07036121934652328,
0.06333024799823761,
-0.07779376208782196,
-0.0054131001234054565,
-0.16783544421195984,
-0.02443975768983364,
0.046720512211322784,
-0.05389286205172539,
0.05353736877441406,
0.09307146072387695,
0.0025756212417036295,
0.12260743975639343,
-0.03814379870891571,
0.004011098761111498,
-0.011732658371329308,
-0.011044705286622047,
-0.051052212715148926,
-0.12360802292823792,
-0.08146701753139496,
-0.06896845996379852,
0.10428401082754135,
-0.07524295151233673,
0.02790134586393833,
-0.07010498642921448,
-0.019710883498191833,
-0.007072417065501213,
-0.06156783550977707,
-0.005768405273556709,
0.008326161652803421,
-0.029444675892591476,
-0.04590248316526413,
0.04935135692358017,
0.05100826546549797,
-0.06478726863861084,
0.07921642065048218,
-0.10699549317359924,
-0.06419207155704498,
0.054628558456897736,
0.01472729817032814,
-0.07571800798177719,
0.0917174220085144,
-0.018417663872241974,
-0.01313681248575449,
-0.05931355059146881,
-0.0405004657804966,
0.19524267315864563,
-0.02026480808854103,
0.1005980521440506,
-0.09278322756290436,
0.001512907911092043,
0.028767414391040802,
-0.05067967250943184,
-0.01827414147555828,
0.059156082570552826,
0.04989645630121231,
-0.1898219883441925,
0.014292031526565552,
0.05343042314052582,
0.07637879997491837,
0.10963284224271774,
0.02839898131787777,
-0.026334580034017563,
-0.049135975539684296,
-0.009256953373551369,
0.006949781905859709,
0.05598854646086693,
-0.02600914239883423,
-0.00420520082116127,
0.034246817231178284,
0.0596306174993515,
0.018030205741524696,
-0.08363982290029526,
0.03673417866230011,
0.06693137437105179,
-0.019462931901216507,
-0.04037795215845108,
-0.026726216077804565,
-0.05799087882041931,
0.0637814998626709,
0.053894948214292526,
0.039996638894081116,
0.026129549369215965,
-0.0138252479955554,
-0.13915663957595825,
0.1859949827194214,
-0.11415939033031464,
-0.25683706998825073,
-0.10846877098083496,
-0.05154549330472946,
-0.02622450329363346,
0.04202234745025635,
0.058166876435279846,
-0.023214034736156464,
-0.04046713188290596,
-0.11735336482524872,
0.059716299176216125,
-0.06829635798931122,
-0.03214704990386963,
-0.01061624102294445,
-0.050590574741363525,
-0.01639621891081333,
-0.12685102224349976,
-0.012690605595707893,
-0.028699222952127457,
-0.07850933074951172,
0.007645272649824619,
-0.03413870185613632,
0.024469591677188873,
0.13805794715881348,
0.03365778550505638,
-0.020017370581626892,
-0.017048172652721405,
0.19008785486221313,
0.01199781708419323,
0.05995006114244461,
0.11352626979351044,
-0.0326065830886364,
0.055446453392505646,
0.045946747064590454,
0.023497754707932472,
-0.04726453870534897,
0.01016853004693985,
-0.016017023473978043,
-0.12077103555202484,
-0.17338699102401733,
-0.07168693095445633,
-0.002716324757784605,
0.0065833874978125095,
0.018307942897081375,
0.036476846784353256,
0.02127707004547119,
0.03809842839837074,
-0.029549600556492805,
0.025214649736881256,
-0.008560650050640106,
0.08188792318105698,
0.024156875908374786,
-0.0730847492814064,
0.0920817032456398,
-0.06040003150701523,
0.020734380930662155,
0.11111889779567719,
-0.060013141483068466,
0.18576380610466003,
0.024201661348342896,
0.05785746872425079,
0.10182899236679077,
0.019839882850646973,
0.054073527455329895,
0.08428417146205902,
-0.04608544334769249,
0.008806571364402771,
-0.062264375388622284,
-0.052604734897613525,
-0.03866955265402794,
0.05181396007537842,
0.034378111362457275,
0.017224948853254318,
-0.12077627331018448,
0.020207595080137253,
-0.0014807620318606496,
0.13538458943367004,
0.05512860417366028,
-0.1166938841342926,
-0.12396272271871567,
0.037091609090566635,
-0.04545081406831741,
-0.0627017617225647,
0.0273539237678051,
0.06109599024057388,
-0.15347924828529358,
0.0443476103246212,
-0.005292430520057678,
0.06638305634260178,
-0.09162469208240509,
0.013847789727151394,
-0.052564024925231934,
-0.004496969282627106,
0.007843362167477608,
0.07217611372470856,
-0.1344299167394638,
0.10531662404537201,
0.020496316254138947,
0.048764582723379135,
-0.07973647117614746,
0.015502963215112686,
-0.013677075505256653,
0.10580110549926758,
0.11554118990898132,
0.045831434428691864,
-0.055144406855106354,
-0.01739365980029106,
-0.0491456538438797,
0.021383075043559074,
0.06052734702825546,
-0.08150741457939148,
0.06060148775577545,
0.0064875599928200245,
0.007780629675835371,
-0.02336696907877922,
0.0222347229719162,
-0.13178865611553192,
-0.12071886658668518,
0.06429515033960342,
-0.07830855995416641,
-0.09936486929655075,
-0.05773322284221649,
-0.061093054711818695,
-0.042528875172138214,
0.2164592295885086,
-0.11608460545539856,
-0.08906812965869904,
-0.09816169738769531,
-0.012090813368558884,
0.04528452455997467,
-0.06716485321521759,
0.044586215168237686,
-0.040392179042100906,
0.09466703236103058,
-0.048068732023239136,
-0.10965575277805328,
0.034058764576911926,
-0.11083486676216125,
-0.11903378367424011,
-0.043364934623241425,
0.1078185960650444,
0.11496461182832718,
0.039878979325294495,
0.014393419958651066,
0.012523130513727665,
0.0016249455511569977,
-0.11525551974773407,
0.016158200800418854,
0.14058448374271393,
0.00007183663547039032,
0.07159385085105896,
-0.05869101732969284,
0.027206212282180786,
-0.016338299959897995,
-0.000048840418457984924,
0.13405567407608032,
0.190790593624115,
-0.06505206227302551,
0.18042585253715515,
0.2004341334104538,
-0.10598385334014893,
-0.19142231345176697,
-0.05692027509212494,
-0.0006397655233740807,
0.04397661238908768,
0.05008583888411522,
-0.18545407056808472,
0.08587346971035004,
0.03197291120886803,
-0.033763520419597626,
0.014674916863441467,
-0.2390567511320114,
-0.11042682826519012,
0.0836246907711029,
0.054144080728292465,
0.19141802191734314,
-0.07862269878387451,
-0.04114697128534317,
-0.01590113528072834,
-0.038865745067596436,
0.05233875662088394,
-0.03664112091064453,
0.09025916457176208,
0.00817943550646305,
-0.028774598613381386,
0.0038444921374320984,
-0.02970532514154911,
0.09935201704502106,
0.03837260603904724,
0.01855725608766079,
-0.0683034360408783,
-0.007025431841611862,
0.10929186642169952,
-0.040709055960178375,
0.09725047647953033,
0.042632829397916794,
0.07579416036605835,
-0.09759804606437683,
-0.05892595276236534,
-0.07554396986961365,
0.043470598757267,
-0.04159926623106003,
-0.05477960780262947,
-0.06377758830785751,
0.0569966584444046,
0.03845696523785591,
0.013198541477322578,
0.0007909517735242844,
-0.039178747683763504,
0.04299238324165344,
0.0944167897105217,
0.08259683102369308,
-0.028956670314073563,
-0.072594553232193,
-0.053148265928030014,
-0.04969749599695206,
0.0676717758178711,
-0.09480755031108856,
0.023572545498609543,
0.0263412743806839,
0.010932454839348793,
0.08999145030975342,
0.033713873475790024,
-0.13917945325374603,
0.014126814901828766,
0.030116582289338112,
-0.12187360227108002,
-0.11317664384841919,
-0.018343418836593628,
0.03170455992221832,
-0.037646401673555374,
0.05346795544028282,
0.1528417468070984,
-0.036169055849313736,
-0.0331931971013546,
-0.04770801216363907,
0.03716135770082474,
-0.02433399111032486,
0.04536007344722748,
0.06124917045235634,
0.03063293546438217,
-0.07102344930171967,
0.0774102509021759,
0.039120905101299286,
-0.03615746647119522,
0.04476240649819374,
0.036931008100509644,
-0.09231381118297577,
-0.07980801165103912,
-0.05820120871067047,
0.08872624486684799,
-0.021339060738682747,
-0.04638636112213135,
0.001927139237523079,
-0.08010780811309814,
0.0675993263721466,
0.07622084766626358,
0.04729834944009781,
0.03903327137231827,
-0.08677320182323456,
0.016273003071546555,
-0.05363042652606964,
0.03523898497223854,
-0.02653919719159603,
-0.0068000685423612595,
-0.05535455793142319,
0.06943659484386444,
0.06399019062519073,
0.09879421442747116,
-0.034408897161483765,
-0.07703803479671478,
-0.08396504074335098,
-0.013629915192723274,
-0.06511342525482178,
-0.02878125011920929,
-0.07525549083948135,
-0.008912341669201851,
0.0033927327021956444,
-0.006327947601675987,
0.02423672191798687,
0.03491417318582535,
-0.043518781661987305,
-0.01913381740450859,
-0.040859464555978775,
0.037910401821136475,
-0.06616304814815521,
0.00536696333438158,
0.016227737069129944,
-0.03720603138208389,
0.0952315479516983,
0.040190551429986954,
-0.01384045835584402,
0.0451030507683754,
-0.027246208861470222,
0.03438761085271835,
-0.02112378552556038,
0.0012395544908940792,
-0.02557452768087387,
-0.11326132714748383,
-0.00591307645663619,
0.005284985527396202,
-0.02379531040787697,
0.009802603162825108,
0.06475575268268585,
-0.07144846022129059,
0.08442815393209457,
0.04332825541496277,
-0.0339798703789711,
-0.0696263462305069,
0.04020480439066887,
-0.018405422568321228,
0.031802672892808914,
0.0695815458893776,
-0.03317354619503021,
0.055990904569625854,
-0.09848371148109436,
-0.026942972093820572,
0.0047580767422914505,
-0.0008373521268367767,
-0.012857139110565186,
-0.0549098402261734,
-0.0043349554762244225,
0.006720514968037605,
0.167094424366951,
-0.022402271628379822,
0.03580113500356674,
0.013990294188261032,
0.004415413364768028,
0.04901723191142082,
-0.011602956801652908,
0.07373560965061188,
-0.004878999665379524,
-0.02402736246585846,
-0.014532554894685745,
0.03481023386120796,
0.004676276817917824,
0.0049498118460178375,
0.14390957355499268,
0.04978419840335846,
0.09006986021995544,
0.07674597203731537,
0.014207540079951286,
0.018968209624290466,
-0.14004313945770264,
-0.08532600849866867,
0.005896890535950661,
0.05799197033047676,
-0.018206384032964706,
0.01826082542538643,
0.09277969598770142,
-0.08985483646392822,
0.06970392167568207,
0.05135726556181908,
-0.04874449968338013,
-0.12959113717079163,
-0.19888189435005188,
-0.028604865074157715,
-0.028554515913128853,
-0.013159027323126793,
-0.088852159678936,
0.01912686973810196,
0.08773034811019897,
0.022577043622732162,
-0.009729618206620216,
0.08681565523147583,
-0.10674522817134857,
-0.030488844960927963,
0.04166412353515625,
-0.026362214237451553,
0.016623295843601227,
0.05135159194469452,
0.022897902876138687,
-0.0025833118706941605,
0.04825741425156593,
0.04352732375264168,
0.04799298942089081,
0.031177181750535965,
0.05053621530532837,
-0.026071880012750626,
-0.07714438438415527,
-0.0324503630399704,
-0.003987595904618502,
0.054327793419361115,
0.13372132182121277,
0.02568301185965538,
-0.06883055716753006,
0.006856357678771019,
0.11284953355789185,
-0.033403411507606506,
-0.053228527307510376,
-0.10818589478731155,
0.24870914220809937,
0.01785135269165039,
0.0029457653872668743,
-0.002509604673832655,
-0.04631614312529564,
0.00233607180416584,
0.2070079892873764,
0.22187936305999756,
0.0006449047941714525,
-0.008334177546203136,
0.011976930312812328,
-0.011427021585404873,
0.0375775508582592,
0.14528928697109222,
-0.0008003544062376022,
0.25534188747406006,
-0.048465482890605927,
0.03853313624858856,
-0.04164262115955353,
-0.03775227814912796,
-0.10111714899539948,
0.07361503690481186,
-0.014145804569125175,
0.006306177470833063,
-0.03372333571314812,
0.07052907347679138,
-0.041257094591856,
-0.17357002198696136,
0.003178243525326252,
-0.0021937591955065727,
-0.06225546821951866,
0.013687134720385075,
0.0003536669537425041,
0.018638448789715767,
0.08455826342105865,
-0.019664648920297623,
-0.007278695702552795,
0.13444140553474426,
0.01861700788140297,
-0.09693878889083862,
-0.05238904803991318,
0.11410542577505112,
0.015528048388659954,
0.1426275074481964,
0.013447356410324574,
0.08114402741193771,
0.08666141331195831,
0.02269568294286728,
-0.09502480924129486,
0.04062824696302414,
-0.019037233665585518,
-0.02763073705136776,
0.0067804488353431225,
0.11358501017093658,
-0.010540835559368134,
0.06364904344081879,
0.031760528683662415,
-0.09374260902404785,
0.06034346669912338,
0.011195734143257141,
-0.03814251720905304,
-0.07755184173583984,
0.08197455108165741,
-0.0916985422372818,
0.15318423509597778,
0.11856131255626678,
-0.015200016088783741,
-0.046561263501644135,
-0.028129393234848976,
0.02119407244026661,
-0.0018033171072602272,
0.059790775179862976,
-0.024602193385362625,
-0.13374009728431702,
0.019059818238019943,
-0.08451776206493378,
0.029431995004415512,
-0.24930819869041443,
-0.08962912857532501,
0.0307895727455616,
-0.016712859272956848,
-0.018917370587587357,
0.051766347140073776,
0.04185829311609268,
0.022628923878073692,
-0.03742608800530434,
0.01273183524608612,
-0.03815414756536484,
0.05945957079529762,
-0.1081528514623642,
-0.09224581718444824
] |
null | null | transformers | # MultiBERTs Seed 2 Checkpoint 1000k (uncased)
Seed 2 intermediate checkpoint 1000k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multiberts-seed-2). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they
were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, they
were pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
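For illustration, here is a minimal sketch of that feature-based workflow: it pools the `[CLS]` hidden state from this checkpoint and fits an off-the-shelf scikit-learn classifier on top. The checkpoint id is the one from this repository; the texts, labels and classifier choice are made up for the example and are not a tuned recipe.

```python
# Minimal sketch: frozen MultiBERTs features feeding a simple downstream classifier.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import BertModel, BertTokenizer

name = "MultiBertGunjanPatrick/multiberts-seed-2-1000k"
tokenizer = BertTokenizer.from_pretrained(name)
model = BertModel.from_pretrained(name)
model.eval()

texts = ["a great movie", "a terrible movie"]  # toy examples
labels = [1, 0]                                # toy labels

with torch.no_grad():
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    # Use the final hidden state of the [CLS] token as a sentence-level feature.
    features = model(**enc).last_hidden_state[:, 0, :].numpy()

clf = LogisticRegression(max_iter=1000).fit(features, labels)
print(clf.predict(features))
```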
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
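If you do want the fine-tuning route for sequence classification, a rough sketch of the setup looks like the following. The number of labels is an arbitrary assumption for the example, and the classification head is freshly initialized, so the model must be trained on your data before its predictions mean anything.

```python
# Sketch: this checkpoint with a new (untrained) sequence-classification head.
from transformers import BertForSequenceClassification, BertTokenizer

name = "MultiBertGunjanPatrick/multiberts-seed-2-1000k"
tokenizer = BertTokenizer.from_pretrained(name)
model = BertForSequenceClassification.from_pretrained(name, num_labels=2)  # 2 labels assumed

inputs = tokenizer("Fine-tune me on a labelled dataset.", return_tensors="pt")
logits = model(**inputs).logits  # meaningless until the head is fine-tuned
```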
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# Load the tokenizer and weights of this intermediate checkpoint from the Hub.
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-2-1000k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-2-1000k")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="pt")
output = model(**encoded_input)
```
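Since the raw model can also be used for masked language modeling, here is a hedged sketch of that use. It assumes the MLM head of this pretraining checkpoint loads through `BertForMaskedLM`; if those weights were missing, the head would be randomly initialized and the predictions would not be meaningful.

```python
# Sketch: masked-token prediction with this checkpoint (assumes the MLM head is available).
import torch
from transformers import BertForMaskedLM, BertTokenizer

name = "MultiBertGunjanPatrick/multiberts-seed-2-1000k"
tokenizer = BertTokenizer.from_pretrained(name)
model = BertForMaskedLM.from_pretrained(name)

inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Find the [MASK] position and show the five highest-scoring replacement tokens.
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = logits[0, mask_pos].topk(5, dim=-1).indices[0]
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))
```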
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a short code sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
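A minimal sketch of this corruption rule is shown below. It mirrors the description above rather than the original pretraining implementation; the helper name and the use of `-100` as an "ignore" label (the convention used by `transformers` loss functions) are choices made for the example.

```python
# Sketch of the MLM corruption rule: 15% of tokens selected; of those,
# 80% -> [MASK], 10% -> random token, 10% left unchanged.
import random

def mask_tokens(token_ids, vocab_size, mask_token_id, mask_prob=0.15):
    corrupted = list(token_ids)
    labels = [-100] * len(token_ids)  # -100 marks positions with no prediction target
    for i, original in enumerate(token_ids):
        if random.random() < mask_prob:
            labels[i] = original       # the model must recover the original token here
            roll = random.random()
            if roll < 0.8:
                corrupted[i] = mask_token_id
            elif roll < 0.9:
                corrupted[i] = random.randrange(vocab_size)
            # else: keep the original token unchanged
    return corrupted, labels
```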
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
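The stated optimizer settings can be sketched with standard PyTorch and `transformers` utilities. Note the assumptions: `AdamW` is used here as the usual decoupled implementation of "Adam with weight decay", and this is only an outline of the configuration, not the original training code.

```python
# Sketch: the stated hyperparameters with a linear warmup/decay schedule.
import torch
from transformers import BertModel, get_linear_schedule_with_warmup

model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-2-1000k")
optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01
)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=2_000_000
)
# In a training loop: loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
```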
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-2"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-2-1000k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-2",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 2 Checkpoint 1000k (uncased)
Seed 2 intermediate checkpoint 1000k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they
were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, they
were pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitations and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 2 Checkpoint 1000k (uncased)\nSeed 2 intermediate checkpoint 1000k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 2 Checkpoint 1000k (uncased)\nSeed 2 intermediate checkpoint 1000k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 2 Checkpoint 1000k (uncased)\nSeed 2 intermediate checkpoint 1000k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08241519331932068,
0.0037103944923728704,
-0.0021867528557777405,
0.06831041723489761,
0.08324844390153885,
0.0025107208639383316,
0.11631592363119125,
0.05130614712834358,
-0.026827078312635422,
0.024561142548918724,
0.09342238306999207,
0.030128173530101776,
0.04370651766657829,
0.06733124703168869,
0.09788932651281357,
-0.2608235478401184,
0.05039816349744797,
-0.06368479132652283,
0.05407603830099106,
0.07399723678827286,
0.10055780410766602,
-0.07189992070198059,
0.06218530610203743,
0.03698713332414627,
-0.08251125365495682,
-0.019487816840410233,
-0.013633450493216515,
-0.03553060442209244,
0.10082834959030151,
0.06705448776483536,
0.06144440919160843,
0.002128422260284424,
0.06205635517835617,
-0.08856792747974396,
0.015442650765180588,
0.04497659578919411,
0.002411303110420704,
0.026388248428702354,
-0.010208705440163612,
0.018123013898730278,
0.10740357637405396,
0.037558663636446,
0.07714658230543137,
0.032765328884124756,
-0.09532536566257477,
-0.11558660864830017,
-0.08097496628761292,
0.1075192540884018,
0.05362590774893761,
0.03719278424978256,
-0.006063532084226608,
0.07355770468711853,
-0.028419489040970802,
0.07162310183048248,
0.10422275960445404,
-0.2558213472366333,
-0.007573639042675495,
0.06604909151792526,
0.04654162377119064,
0.04421352595090866,
0.013452565297484398,
0.02639125846326351,
0.005276493728160858,
0.03952331840991974,
0.026142172515392303,
-0.02409781888127327,
0.11468011885881424,
-0.04629658907651901,
-0.1503150761127472,
-0.04382883757352829,
0.1164901927113533,
-0.007967427372932434,
-0.1262092888355255,
-0.09760217368602753,
-0.0315927118062973,
0.11847476661205292,
-0.0006911484524607658,
-0.01992320455610752,
-0.0038937623612582684,
0.012364013120532036,
0.022652292624115944,
-0.09798099100589752,
-0.08538786321878433,
-0.03138866648077965,
-0.0390671081840992,
0.13026924431324005,
0.04694131761789322,
0.0538632869720459,
-0.035738587379455566,
0.08842941373586655,
-0.12486295402050018,
-0.04026651382446289,
-0.053547270596027374,
-0.08096788823604584,
-0.019572138786315918,
0.009876146912574768,
-0.026456212624907494,
-0.08274578303098679,
-0.06027669832110405,
0.1153968870639801,
0.04068037495017052,
0.02709965407848358,
-0.00176226906478405,
0.04322772100567818,
0.07337719202041626,
0.09869197010993958,
-0.03733101114630699,
0.050650373101234436,
0.035434920340776443,
-0.02291092649102211,
0.059028323739767075,
-0.05154680833220482,
-0.1036880686879158,
0.07314121723175049,
-0.000002930872142314911,
0.03806397318840027,
0.026550326496362686,
0.03294618800282478,
-0.01166321337223053,
-0.07067561149597168,
0.15988726913928986,
-0.07658185064792633,
-0.006198356859385967,
-0.016337614506483078,
0.009392375126481056,
0.04321243613958359,
0.032067492604255676,
-0.00392505107447505,
-0.0469096340239048,
-0.006613246165215969,
-0.05457095056772232,
-0.02481367066502571,
-0.05604591220617294,
-0.11874720454216003,
0.0012190192937850952,
-0.0451553538441658,
-0.032260727137327194,
-0.14103861153125763,
-0.21646709740161896,
-0.017688950523734093,
0.0636952817440033,
-0.001783464103937149,
-0.012667876668274403,
0.021718909963965416,
0.016001103445887566,
-0.020747965201735497,
0.010443528182804585,
-0.04357011616230011,
0.0008554812520742416,
-0.005086030811071396,
-0.036075688898563385,
0.0587257519364357,
-0.04395413398742676,
0.02131449244916439,
-0.071001797914505,
0.02392963133752346,
-0.20837564766407013,
0.08977557718753815,
-0.029874369502067566,
-0.00353158637881279,
-0.0388374961912632,
-0.04123144969344139,
0.006400473415851593,
0.04365228861570358,
-0.006994687020778656,
0.11538127064704895,
-0.1413872092962265,
-0.04908017814159393,
0.18568474054336548,
-0.16020911931991577,
-0.0006386004388332367,
0.10026495158672333,
-0.0502329021692276,
0.06364873051643372,
0.13423141837120056,
0.09955655783414841,
0.08478598296642303,
-0.07041297107934952,
0.01096052210777998,
0.06207858398556709,
-0.06937524676322937,
0.056012220680713654,
0.09207552671432495,
-0.025503801181912422,
-0.1348552107810974,
0.027698732912540436,
-0.07457102835178375,
-0.011967482045292854,
-0.025959253311157227,
-0.021543683484196663,
0.00687585212290287,
-0.03845670819282532,
0.03185482695698738,
0.0035449292045086622,
0.018073027953505516,
-0.03803977370262146,
-0.08136649429798126,
0.029739635065197945,
0.07579360902309418,
-0.07014980167150497,
0.04421544820070267,
-0.07058854401111603,
0.06075848266482353,
-0.07672002166509628,
-0.004576121456921101,
-0.1662394404411316,
-0.020729191601276398,
0.048178695142269135,
-0.05606217682361603,
0.05224886164069176,
0.09099100530147552,
0.0026202136650681496,
0.12343689799308777,
-0.03836209326982498,
0.0037635485641658306,
-0.010431621223688126,
-0.009872053749859333,
-0.0533624142408371,
-0.11973274499177933,
-0.07993409037590027,
-0.06906232237815857,
0.10034868866205215,
-0.07307501137256622,
0.028701290488243103,
-0.0691458061337471,
-0.022114722058176994,
-0.007274683564901352,
-0.05908980965614319,
-0.007541116327047348,
0.008642589673399925,
-0.029056191444396973,
-0.046645402908325195,
0.04755270481109619,
0.05051266774535179,
-0.06377410143613815,
0.0799901932477951,
-0.1038995012640953,
-0.06895975768566132,
0.05443893373012543,
0.016891680657863617,
-0.0784490555524826,
0.09382712095975876,
-0.01848263293504715,
-0.013377117924392223,
-0.060832250863313675,
-0.043625812977552414,
0.19702255725860596,
-0.021712061017751694,
0.09968815743923187,
-0.09304267168045044,
0.00038041346124373376,
0.02850974351167679,
-0.049789756536483765,
-0.01584649831056595,
0.05799851566553116,
0.05168810114264488,
-0.1904298961162567,
0.0139366015791893,
0.053673356771469116,
0.0741528868675232,
0.10993883013725281,
0.02805577591061592,
-0.02507120370864868,
-0.04991704225540161,
-0.012346021831035614,
0.006056739017367363,
0.05515557527542114,
-0.026689577847719193,
-0.007816221565008163,
0.03225574642419815,
0.05843017250299454,
0.017229629680514336,
-0.08475042134523392,
0.03713112324476242,
0.06665913015604019,
-0.017531294375658035,
-0.044355615973472595,
-0.025952115654945374,
-0.05858549475669861,
0.06435232609510422,
0.05481258034706116,
0.040657348930835724,
0.02493991144001484,
-0.014479919336736202,
-0.13725557923316956,
0.18625947833061218,
-0.11312198638916016,
-0.2536323368549347,
-0.10825924575328827,
-0.05191008746623993,
-0.021441549062728882,
0.043659746646881104,
0.05619744956493378,
-0.02591090463101864,
-0.04043491929769516,
-0.11601428687572479,
0.06284195184707642,
-0.06460098177194595,
-0.0290606040507555,
-0.010853467509150505,
-0.04925352334976196,
-0.021306782960891724,
-0.12678802013397217,
-0.012303996831178665,
-0.03002767637372017,
-0.07729528844356537,
0.005774933844804764,
-0.033432286232709885,
0.026600752025842667,
0.13812389969825745,
0.03259763494133949,
-0.02010512351989746,
-0.01734069176018238,
0.18855702877044678,
0.010580653324723244,
0.06264065206050873,
0.1110154539346695,
-0.03230639919638634,
0.05590140074491501,
0.04824509471654892,
0.02257668972015381,
-0.046268969774246216,
0.012264689430594444,
-0.018185287714004517,
-0.1217997819185257,
-0.17150548100471497,
-0.06944593787193298,
-0.0011366910766810179,
0.005779113620519638,
0.01973705179989338,
0.0345572791993618,
0.017166607081890106,
0.04224386438727379,
-0.029063891619443893,
0.029870934784412384,
-0.011556964367628098,
0.08035418391227722,
0.019380655139684677,
-0.0747201144695282,
0.09032151103019714,
-0.061054084450006485,
0.014981977641582489,
0.10987856239080429,
-0.05989432334899902,
0.1892724335193634,
0.025004776194691658,
0.057659171521663666,
0.10058891773223877,
0.021156545728445053,
0.05496455729007721,
0.08960975706577301,
-0.04509924352169037,
0.008940383791923523,
-0.06192781776189804,
-0.051251061260700226,
-0.0362304151058197,
0.051958438009023666,
0.032792504876852036,
0.01922309771180153,
-0.11926329135894775,
0.02175631746649742,
-0.0023755354341119528,
0.1337110549211502,
0.0532357394695282,
-0.11828923225402832,
-0.12222620844841003,
0.037226662039756775,
-0.04499565064907074,
-0.06394483149051666,
0.02734970487654209,
0.0639837458729744,
-0.15489040315151215,
0.0439474880695343,
-0.0047515276819467545,
0.06561682373285294,
-0.09368401765823364,
0.013410409912467003,
-0.05096657574176788,
-0.004360927268862724,
0.006510085891932249,
0.07193433493375778,
-0.13020220398902893,
0.10770714282989502,
0.020257629454135895,
0.050168391317129135,
-0.08201568573713303,
0.015988141298294067,
-0.012444469146430492,
0.10384727269411087,
0.1165083721280098,
0.04562503099441528,
-0.05770562216639519,
-0.021966513246297836,
-0.04776807874441147,
0.019315853714942932,
0.061731934547424316,
-0.08198793232440948,
0.06187291443347931,
0.007837138138711452,
0.008525371551513672,
-0.023696990683674812,
0.01876162737607956,
-0.1312929391860962,
-0.11973468959331512,
0.06231240928173065,
-0.077639639377594,
-0.09798180311918259,
-0.05718148499727249,
-0.0611901730298996,
-0.04430831968784332,
0.20924605429172516,
-0.12085972726345062,
-0.0886983647942543,
-0.09576788544654846,
-0.012547791004180908,
0.04773596301674843,
-0.068647101521492,
0.04611522704362869,
-0.037972915917634964,
0.09315837919712067,
-0.049333907663822174,
-0.10758014023303986,
0.03346583992242813,
-0.1106233224272728,
-0.11908020079135895,
-0.044377781450748444,
0.10702869296073914,
0.11436114460229874,
0.03951559215784073,
0.015229208394885063,
0.013822264038026333,
0.0008414201438426971,
-0.11633861064910889,
0.01574280671775341,
0.13679343461990356,
-0.0012537091970443726,
0.07142030447721481,
-0.05991443991661072,
0.030535269528627396,
-0.017340485006570816,
-0.0009584091603755951,
0.13562577962875366,
0.19040966033935547,
-0.06600844860076904,
0.1766621619462967,
0.20327121019363403,
-0.10753153264522552,
-0.194499671459198,
-0.0559270866215229,
0.0004974696785211563,
0.04596079885959625,
0.04710090905427933,
-0.1817561388015747,
0.08785411715507507,
0.03258839622139931,
-0.033844150602817535,
0.015927813947200775,
-0.2389506995677948,
-0.11334192752838135,
0.08680718392133713,
0.05377766862511635,
0.18856781721115112,
-0.07622786611318588,
-0.039874665439128876,
-0.018629472702741623,
-0.03334994614124298,
0.050408005714416504,
-0.0363738015294075,
0.09019815921783447,
0.009347187355160713,
-0.026662807911634445,
0.0040367478504776955,
-0.030505917966365814,
0.09786811470985413,
0.04087404906749725,
0.020340796560049057,
-0.07191149890422821,
-0.012579793110489845,
0.11518298089504242,
-0.03984985873103142,
0.09840860217809677,
0.04715140163898468,
0.0752258449792862,
-0.09451940655708313,
-0.05884583666920662,
-0.07374811172485352,
0.043589670211076736,
-0.04060832038521767,
-0.0566348098218441,
-0.06613603234291077,
0.05788256227970123,
0.03857480362057686,
0.010675105266273022,
-0.0038861706852912903,
-0.03367193043231964,
0.03918617218732834,
0.0896858498454094,
0.08197709172964096,
-0.027471818029880524,
-0.06768164038658142,
-0.049191683530807495,
-0.04879758134484291,
0.06754997372627258,
-0.09121780097484589,
0.02099706418812275,
0.026440313085913658,
0.011930786073207855,
0.08929102122783661,
0.0335070863366127,
-0.1403949111700058,
0.01396767795085907,
0.03212396055459976,
-0.12216213345527649,
-0.11252297461032867,
-0.01793939247727394,
0.030414603650569916,
-0.038235247135162354,
0.05278092622756958,
0.14686799049377441,
-0.03720371052622795,
-0.0332513265311718,
-0.04959858953952789,
0.03865891322493553,
-0.022584814578294754,
0.05007645860314369,
0.06350044906139374,
0.030508767813444138,
-0.07081539928913116,
0.07687397301197052,
0.04019571840763092,
-0.0373338907957077,
0.04297049343585968,
0.037768393754959106,
-0.09349523484706879,
-0.07776395231485367,
-0.05720629170536995,
0.08906478434801102,
-0.01805432327091694,
-0.046094607561826706,
0.0028453730046749115,
-0.0800657868385315,
0.06615421175956726,
0.07692024111747742,
0.047449272125959396,
0.038727883249521255,
-0.08742790669202805,
0.016718989238142967,
-0.0539432018995285,
0.036767780780792236,
-0.024936357513070107,
-0.005881035700440407,
-0.054851263761520386,
0.07383474707603455,
0.06256270408630371,
0.0989767462015152,
-0.03445699065923691,
-0.07491655647754669,
-0.08226002752780914,
-0.01329595223069191,
-0.058187566697597504,
-0.029074307531118393,
-0.0748400017619133,
-0.008866427466273308,
0.0017768268007785082,
-0.006359301507472992,
0.023448023945093155,
0.03337747976183891,
-0.04374384880065918,
-0.018689028918743134,
-0.040254782885313034,
0.03672775626182556,
-0.06209384277462959,
0.00439673475921154,
0.01895543560385704,
-0.035539817065000534,
0.09370941668748856,
0.03742280602455139,
-0.010997508652508259,
0.04718659073114395,
-0.02428065799176693,
0.03623735159635544,
-0.019590068608522415,
0.0028465897776186466,
-0.025242425501346588,
-0.11289796233177185,
-0.004748367238789797,
0.0033337902277708054,
-0.02229660004377365,
0.010225502774119377,
0.06330001354217529,
-0.07185199111700058,
0.08361952006816864,
0.04512065649032593,
-0.033088553696870804,
-0.0711497962474823,
0.040317352861166,
-0.017491161823272705,
0.03172561153769493,
0.07131572812795639,
-0.03340321034193039,
0.054405104368925095,
-0.09678363800048828,
-0.026826411485671997,
0.004001164343208075,
-0.0031062960624694824,
-0.010021520778536797,
-0.054471906274557114,
-0.003863222897052765,
0.007613981142640114,
0.17107364535331726,
-0.02216878905892372,
0.03386929631233215,
0.012712972238659859,
0.008703461848199368,
0.04647206515073776,
-0.011846967041492462,
0.07575207948684692,
-0.0035846922546625137,
-0.02657768689095974,
-0.019903268665075302,
0.03663361072540283,
0.005070649087429047,
0.0021782759577035904,
0.14198681712150574,
0.04598022624850273,
0.09294606000185013,
0.07686818391084671,
0.017351223155856133,
0.020401813089847565,
-0.13181626796722412,
-0.08820926398038864,
0.002767135389149189,
0.05496441572904587,
-0.019066305831074715,
0.0205761156976223,
0.09006375819444656,
-0.08903290331363678,
0.06937115639448166,
0.05188945308327675,
-0.04972854256629944,
-0.12913721799850464,
-0.20021909475326538,
-0.0279336329549551,
-0.024298129603266716,
-0.011957723647356033,
-0.08830928802490234,
0.01769896410405636,
0.08737459778785706,
0.02152606099843979,
-0.010734313167631626,
0.09032553434371948,
-0.11151879280805588,
-0.03179527819156647,
0.04339439794421196,
-0.024869713932275772,
0.016261756420135498,
0.050492048263549805,
0.020830608904361725,
-0.0042185913771390915,
0.047169625759124756,
0.04294267296791077,
0.04620492458343506,
0.02620883472263813,
0.04890340566635132,
-0.025035623461008072,
-0.0736636072397232,
-0.03355734050273895,
-0.005821620114147663,
0.05561570078134537,
0.1285666525363922,
0.025264479219913483,
-0.06926240026950836,
0.007244504988193512,
0.10893650352954865,
-0.03286608308553696,
-0.05481350049376488,
-0.10683386027812958,
0.23975417017936707,
0.02110767550766468,
0.0032778745517134666,
-0.002037903293967247,
-0.047337211668491364,
0.002713702619075775,
0.20846182107925415,
0.2236045002937317,
0.003559318371117115,
-0.009712645784020424,
0.00930928997695446,
-0.011365160346031189,
0.03638738393783569,
0.14611560106277466,
0.0006129909306764603,
0.25089526176452637,
-0.0475197434425354,
0.040469519793987274,
-0.04067619889974594,
-0.03856118768453598,
-0.10013729333877563,
0.0751514732837677,
-0.013777991756796837,
0.007148890756070614,
-0.03567076101899147,
0.07126843184232712,
-0.03927266597747803,
-0.1741342395544052,
0.00003932975232601166,
-0.0034706084989011288,
-0.06187451630830765,
0.014556514099240303,
-0.0009088460355997086,
0.020207248628139496,
0.08367732167243958,
-0.017777178436517715,
-0.0054168845526874065,
0.12806332111358643,
0.01964530535042286,
-0.09884180128574371,
-0.055236078798770905,
0.11374334990978241,
0.01801484450697899,
0.14337217807769775,
0.012819154188036919,
0.0829884260892868,
0.08684004098176956,
0.0205373615026474,
-0.09528740495443344,
0.042282313108444214,
-0.018818125128746033,
-0.02495988830924034,
0.006925433408468962,
0.11210431158542633,
-0.010234203189611435,
0.055396053940057755,
0.03019922971725464,
-0.09218433499336243,
0.06176208704710007,
0.00810883566737175,
-0.036420129239559174,
-0.08297210186719894,
0.08406665176153183,
-0.09171904623508453,
0.15623028576374054,
0.11906690895557404,
-0.015744581818580627,
-0.04368969053030014,
-0.028653208166360855,
0.016983360052108765,
-0.0015061628073453903,
0.056320227682590485,
-0.024948658421635628,
-0.132579505443573,
0.016445286571979523,
-0.08823792636394501,
0.02933766134083271,
-0.24570047855377197,
-0.0893193930387497,
0.02754250541329384,
-0.018033232539892197,
-0.01755199208855629,
0.05181717127561569,
0.043856602162122726,
0.02361222356557846,
-0.035967446863651276,
0.00947058480232954,
-0.03829127922654152,
0.05909772962331772,
-0.11070416867733002,
-0.09442131221294403
] |
null | null | transformers | # MultiBERTs Seed 2 Checkpoint 100k (uncased)
Seed 2 intermediate checkpoint 100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multiberts-seed-2). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
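As a minimal, illustrative sketch of that workflow (the example texts, labels and the scikit-learn classifier below are placeholders, not part of the MultiBERTs release; the checkpoint identifier is the one used elsewhere in this card), one could pool the `[CLS]` hidden state of this checkpoint and fit any standard classifier on top:
```python
# Illustrative only: pool [CLS] features from this checkpoint and fit a simple
# classifier on top. Texts, labels and the scikit-learn model are placeholders.
import torch
from transformers import BertTokenizer, BertModel
from sklearn.linear_model import LogisticRegression

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-100k')
model = BertModel.from_pretrained('multiberts-seed-2-100k')
model.eval()

texts = ["a great movie", "a terrible movie"]   # placeholder dataset
labels = [1, 0]                                 # placeholder labels

with torch.no_grad():
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')
    features = model(**enc).last_hidden_state[:, 0, :].numpy()  # [CLS] vectors

clf = LogisticRegression(max_iter=1000).fit(features, labels)
print(clf.predict(features))
```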
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-100k')
model = BertModel.from_pretrained("multiberts-seed-2-100k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
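The `output` object returned above is a standard `transformers` model output; for example (the 768 hidden size is the usual BERT-base value, assumed here rather than verified against this particular checkpoint):
```python
# Assuming the snippet above has been run, `output` exposes, among other fields:
print(output.last_hidden_state.shape)  # (1, sequence_length, 768): one vector per token
print(output.pooler_output.shape)      # (1, 768): pooled [CLS] representation
```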
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a rough sketch of this rule follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
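Below is a rough, self-contained re-implementation of the 80/10/10 rule above, intended only as an illustration; it is not the original pretraining data pipeline, and the `-100` label convention is the one used by the Hugging Face/PyTorch MLM loss rather than anything specific to MultiBERTs:
```python
# Illustrative sketch of the masking proportions described above
# (not the original pretraining pipeline).
import random

def mask_tokens(token_ids, mask_token_id, vocab_size, mlm_probability=0.15):
    input_ids, labels = [], []
    for tok in token_ids:
        if random.random() < mlm_probability:
            labels.append(tok)                                  # predict the original token
            r = random.random()
            if r < 0.8:
                input_ids.append(mask_token_id)                 # 80%: replace with [MASK]
            elif r < 0.9:
                # 10%: replace with a random token (for simplicity this sketch
                # may occasionally re-sample the same token)
                input_ids.append(random.randrange(vocab_size))
            else:
                input_ids.append(tok)                           # 10%: keep unchanged
        else:
            labels.append(-100)                                 # position ignored by the loss
            input_ids.append(tok)
    return input_ids, labels
```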
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
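For illustration only, these settings can be approximated in PyTorch roughly as follows; this is an assumption-laden sketch (AdamW standing in for Adam with weight decay, and `get_linear_schedule_with_warmup` for the warmup/decay schedule), not the original TPU training code:
```python
# Approximate PyTorch rendering of the hyper-parameters described above.
import torch
from transformers import BertModel, get_linear_schedule_with_warmup

model = BertModel.from_pretrained('multiberts-seed-2-100k')
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4,
                              betas=(0.9, 0.999), weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(optimizer,
                                            num_warmup_steps=10_000,
                                            num_training_steps=2_000_000)
# In a training loop: loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
```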
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-2"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-2-100k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-2",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 2 Checkpoint 100k (uncased)
Seed 2 intermediate checkpoint 100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitations and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 2 Checkpoint 100k (uncased)\nSeed 2 intermediate checkpoint 100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 2 Checkpoint 100k (uncased)\nSeed 2 intermediate checkpoint 100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 2 Checkpoint 100k (uncased)\nSeed 2 intermediate checkpoint 100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08376972377300262,
0.0023400296922773123,
-0.002302166074514389,
0.06757798045873642,
0.08241017162799835,
0.001914509106427431,
0.11849691718816757,
0.05154220387339592,
-0.02531077153980732,
0.025495000183582306,
0.09190303087234497,
0.0325019508600235,
0.04343316704034805,
0.06796576082706451,
0.09699661284685135,
-0.2586418390274048,
0.05060598626732826,
-0.06314574927091599,
0.05438206344842911,
0.07528108358383179,
0.10027090460062027,
-0.07178768515586853,
0.06139473617076874,
0.03691277652978897,
-0.08297724276781082,
-0.019415894523262978,
-0.012495255097746849,
-0.035331401973962784,
0.10109476000070572,
0.06777385622262955,
0.06144968420267105,
0.0011180955916643143,
0.06101100891828537,
-0.09101131558418274,
0.015784531831741333,
0.04553331434726715,
0.0003132312558591366,
0.027322985231876373,
-0.009664399549365044,
0.016781536862254143,
0.10719506442546844,
0.03831450268626213,
0.077481709420681,
0.034894827753305435,
-0.09598646312952042,
-0.11564300954341888,
-0.08168283104896545,
0.1053176075220108,
0.05403164029121399,
0.03835261985659599,
-0.0063228607177734375,
0.07549428939819336,
-0.028742715716362,
0.07242532819509506,
0.11053065955638885,
-0.25785622000694275,
-0.00792293343693018,
0.06708492338657379,
0.0469663143157959,
0.04347698390483856,
0.011764457449316978,
0.026905888691544533,
0.004880022257566452,
0.04101134464144707,
0.028719298541545868,
-0.0240633524954319,
0.118491530418396,
-0.045466236770153046,
-0.1499328315258026,
-0.044024497270584106,
0.11572442203760147,
-0.005826439708471298,
-0.12669843435287476,
-0.09951814264059067,
-0.03357496112585068,
0.11703141033649445,
-0.0028558727353811264,
-0.019877461716532707,
-0.0029044714756309986,
0.013286114670336246,
0.022032905369997025,
-0.09768152981996536,
-0.08594226837158203,
-0.03122980147600174,
-0.03781241551041603,
0.13085074722766876,
0.04594434052705765,
0.05270664393901825,
-0.03464588522911072,
0.0871865525841713,
-0.11962224543094635,
-0.040301404893398285,
-0.05413227528333664,
-0.08115749061107635,
-0.02025548741221428,
0.010687362402677536,
-0.028727125376462936,
-0.08595092594623566,
-0.059257812798023224,
0.1176910251379013,
0.037432387471199036,
0.027197733521461487,
-0.0035433806478977203,
0.043782465159893036,
0.07354649901390076,
0.09954243898391724,
-0.037109602242708206,
0.05107850581407547,
0.033727016299963,
-0.023669924587011337,
0.06105659902095795,
-0.051966987550258636,
-0.10363203287124634,
0.07334479689598083,
0.0021364176645874977,
0.03787358105182648,
0.025261176750063896,
0.031153591349720955,
-0.013435469008982182,
-0.07205647230148315,
0.1601949781179428,
-0.07736136019229889,
-0.006340182386338711,
-0.016038263216614723,
0.011601405218243599,
0.044131673872470856,
0.03183019161224365,
-0.0042349654249846935,
-0.047391217201948166,
-0.007012368179857731,
-0.054949432611465454,
-0.023387374356389046,
-0.05500644072890282,
-0.1182485967874527,
0.0010059801861643791,
-0.0436989888548851,
-0.03316156566143036,
-0.14044062793254852,
-0.21352043747901917,
-0.017896562814712524,
0.06417836248874664,
-0.0031831637024879456,
-0.011036100797355175,
0.022720327600836754,
0.01593082584440708,
-0.02068154141306877,
0.010914037935435772,
-0.046685487031936646,
0.0009625861421227455,
-0.004346324130892754,
-0.03631781414151192,
0.05860253423452377,
-0.04244694858789444,
0.02165556699037552,
-0.07030460238456726,
0.0232364684343338,
-0.20936210453510284,
0.09110848605632782,
-0.0295734703540802,
-0.003299710340797901,
-0.0393131747841835,
-0.04106947407126427,
0.004924703389406204,
0.04436759650707245,
-0.007631266489624977,
0.11717554926872253,
-0.1395711898803711,
-0.050942569971084595,
0.18900829553604126,
-0.16004566848278046,
-0.0009028539061546326,
0.10009689629077911,
-0.0501999668776989,
0.06112741678953171,
0.13395273685455322,
0.09983764588832855,
0.07973478734493256,
-0.07402809709310532,
0.011544973589479923,
0.06322472542524338,
-0.06808792054653168,
0.0566614493727684,
0.09129517525434494,
-0.02664695680141449,
-0.13258975744247437,
0.0271214060485363,
-0.0730414092540741,
-0.010159087367355824,
-0.025875277817249298,
-0.02169898711144924,
0.006011543795466423,
-0.03847488388419151,
0.03269638866186142,
0.0038799180183559656,
0.01872335560619831,
-0.03835052251815796,
-0.08286713063716888,
0.03098093532025814,
0.07522676885128021,
-0.07073717564344406,
0.044675685465335846,
-0.07170852273702621,
0.061160776764154434,
-0.07596266269683838,
-0.004982566460967064,
-0.1672825813293457,
-0.021117741242051125,
0.0476863756775856,
-0.05223634093999863,
0.05054585635662079,
0.09220319241285324,
0.0033944635652005672,
0.1227634996175766,
-0.03789091110229492,
0.003848448395729065,
-0.011716431006789207,
-0.01077418401837349,
-0.05352918058633804,
-0.12030362337827682,
-0.08119331300258636,
-0.06902912259101868,
0.10075545310974121,
-0.07202567160129547,
0.02900758385658264,
-0.07034879922866821,
-0.020555490627884865,
-0.006723223254084587,
-0.06048078462481499,
-0.006555547937750816,
0.007555562071502209,
-0.02912195399403572,
-0.046308331191539764,
0.04747864603996277,
0.05090960115194321,
-0.06344811618328094,
0.07935172319412231,
-0.10495114326477051,
-0.06470485776662827,
0.054963208734989166,
0.01453878078609705,
-0.07902847975492477,
0.08979424834251404,
-0.018546270206570625,
-0.012703663669526577,
-0.06079964339733124,
-0.044536348432302475,
0.19310742616653442,
-0.02030160278081894,
0.1010931134223938,
-0.09196968376636505,
-0.000032791984267532825,
0.028396036475896835,
-0.04912111908197403,
-0.01581684499979019,
0.05692966282367706,
0.04986582323908806,
-0.18924343585968018,
0.014654397964477539,
0.05473171919584274,
0.07679638266563416,
0.10880520939826965,
0.027368362993001938,
-0.02624574676156044,
-0.04969659075140953,
-0.01124309841543436,
0.005693452432751656,
0.05538515746593475,
-0.026660144329071045,
-0.007453244179487228,
0.03209880366921425,
0.05761269852519035,
0.015329200774431229,
-0.08328694105148315,
0.035995304584503174,
0.06688329577445984,
-0.01853775605559349,
-0.04054386913776398,
-0.026199571788311005,
-0.05879184603691101,
0.06364913284778595,
0.053719472140073776,
0.03928789496421814,
0.025285838171839714,
-0.01516740396618843,
-0.13730505108833313,
0.18627968430519104,
-0.11335361003875732,
-0.2541801333427429,
-0.10891346633434296,
-0.054269179701805115,
-0.020842911675572395,
0.04395236819982529,
0.05714719742536545,
-0.026109138503670692,
-0.03995613008737564,
-0.11619946360588074,
0.06342365592718124,
-0.0648491308093071,
-0.030407680198550224,
-0.00924275815486908,
-0.05016746371984482,
-0.020100679248571396,
-0.12756991386413574,
-0.012531762942671776,
-0.03062383458018303,
-0.07708939909934998,
0.0061132777482271194,
-0.034152861684560776,
0.02616870403289795,
0.13797473907470703,
0.03426836431026459,
-0.019498080015182495,
-0.017443092539906502,
0.19239965081214905,
0.009880898520350456,
0.06169985607266426,
0.11165784299373627,
-0.03174050897359848,
0.0558137446641922,
0.04634181410074234,
0.024158459156751633,
-0.04705210030078888,
0.011023364961147308,
-0.01673409715294838,
-0.12071243673563004,
-0.17278620600700378,
-0.06992141902446747,
-0.0015308817382901907,
0.005066949874162674,
0.019826650619506836,
0.0355832576751709,
0.022957611829042435,
0.041439708322286606,
-0.03034614957869053,
0.028817318379878998,
-0.011411141604185104,
0.08125364780426025,
0.023689664900302887,
-0.07457733899354935,
0.09169147908687592,
-0.0599379688501358,
0.014920755289494991,
0.10914035886526108,
-0.06069984287023544,
0.18786704540252686,
0.026315808296203613,
0.0599602572619915,
0.0992775410413742,
0.020949862897396088,
0.05435091257095337,
0.08825761079788208,
-0.04498201608657837,
0.008368372917175293,
-0.06263712048530579,
-0.05160249024629593,
-0.037718046456575394,
0.052084486931562424,
0.03157363831996918,
0.016972489655017853,
-0.11798431724309921,
0.018320437520742416,
-0.002179304603487253,
0.1387883424758911,
0.052966732531785965,
-0.11629076302051544,
-0.1221490353345871,
0.036835700273513794,
-0.04632266238331795,
-0.06305018812417984,
0.027750322595238686,
0.06247503310441971,
-0.15548467636108398,
0.04313240200281143,
-0.00641543697565794,
0.06636762619018555,
-0.09298496693372726,
0.014168756082654,
-0.05254443362355232,
-0.004380226135253906,
0.00648278696462512,
0.07074345648288727,
-0.1315518468618393,
0.10570908337831497,
0.020654501393437386,
0.04905940219759941,
-0.08030759543180466,
0.01607496477663517,
-0.012064134702086449,
0.10433099418878555,
0.11692017316818237,
0.045151736587285995,
-0.05582685023546219,
-0.02175641432404518,
-0.04851678013801575,
0.018590200692415237,
0.06113129109144211,
-0.08107846230268478,
0.061445750296115875,
0.0065340083092451096,
0.007439665962010622,
-0.0226618479937315,
0.018296681344509125,
-0.13075244426727295,
-0.12028510868549347,
0.061834730207920074,
-0.07882240414619446,
-0.09644090384244919,
-0.05753693729639053,
-0.061824388802051544,
-0.04396895319223404,
0.2100323885679245,
-0.11981762200593948,
-0.0887645035982132,
-0.09641456604003906,
-0.012329302728176117,
0.04660565406084061,
-0.06764130294322968,
0.04620632529258728,
-0.03773840144276619,
0.09480033814907074,
-0.049215592443943024,
-0.10829193890094757,
0.035135578364133835,
-0.11107181757688522,
-0.11838163435459137,
-0.044424936175346375,
0.10810837149620056,
0.11519023776054382,
0.03959294408559799,
0.015231063589453697,
0.013902210630476475,
0.001128828153014183,
-0.11572661995887756,
0.017007065936923027,
0.13677994906902313,
-0.0003559514880180359,
0.07095985859632492,
-0.06101870536804199,
0.03258955851197243,
-0.017889799550175667,
0.00006490573287010193,
0.13611648976802826,
0.18878282606601715,
-0.06657741218805313,
0.17835766077041626,
0.20212800800800323,
-0.10647597908973694,
-0.19180214405059814,
-0.05780560150742531,
0.0002153925597667694,
0.045572005212306976,
0.05117730423808098,
-0.18510669469833374,
0.08716389536857605,
0.03409220650792122,
-0.03408714383840561,
0.017601296305656433,
-0.24032871425151825,
-0.11142978817224503,
0.08806295692920685,
0.053998254239559174,
0.1905251443386078,
-0.07836537063121796,
-0.0415814146399498,
-0.0179279912263155,
-0.03609612584114075,
0.04931134730577469,
-0.03723691403865814,
0.0908210277557373,
0.009007273241877556,
-0.02925765886902809,
0.004477758891880512,
-0.029902685433626175,
0.09735502302646637,
0.03961463272571564,
0.02049267664551735,
-0.07135112583637238,
-0.009848818182945251,
0.11327961087226868,
-0.03958822041749954,
0.09802328050136566,
0.04437951743602753,
0.07445624470710754,
-0.09293189644813538,
-0.05947837233543396,
-0.07378365099430084,
0.04353891685605049,
-0.04086396098136902,
-0.05656537413597107,
-0.06593891233205795,
0.05679517984390259,
0.03853210061788559,
0.01132203545421362,
-0.0008826795965433121,
-0.03496972471475601,
0.043309785425662994,
0.0918566882610321,
0.08162596076726913,
-0.028619226068258286,
-0.07334903627634048,
-0.051497768610715866,
-0.048729058355093,
0.06796830892562866,
-0.08960270881652832,
0.021282674744725227,
0.024819888174533844,
0.011588811874389648,
0.09069964289665222,
0.03343012183904648,
-0.14040061831474304,
0.013555715791881084,
0.031597383320331573,
-0.12110577523708344,
-0.11431732028722763,
-0.01776786707341671,
0.030560674145817757,
-0.03837628662586212,
0.05382390320301056,
0.14936605095863342,
-0.0358230322599411,
-0.03343251720070839,
-0.04892708361148834,
0.03863167762756348,
-0.02207431010901928,
0.048446204513311386,
0.06406886875629425,
0.03030005656182766,
-0.07118307054042816,
0.07488654553890228,
0.03909951075911522,
-0.033936988562345505,
0.04364175349473953,
0.03757982701063156,
-0.09263382107019424,
-0.07750650495290756,
-0.0590021051466465,
0.08944115787744522,
-0.021772822365164757,
-0.04474152252078056,
0.001808295026421547,
-0.0803791806101799,
0.06628590822219849,
0.07735157012939453,
0.0479055717587471,
0.040316879749298096,
-0.08770258724689484,
0.016575049608945847,
-0.0518372505903244,
0.034829866141080856,
-0.027041787281632423,
-0.006145378574728966,
-0.05503696948289871,
0.070797398686409,
0.06347893178462982,
0.09973307698965073,
-0.03483818471431732,
-0.07682093977928162,
-0.08235976845026016,
-0.013179749250411987,
-0.06277216970920563,
-0.030115418136119843,
-0.07527519762516022,
-0.009152041748166084,
0.001471846830099821,
-0.005808515474200249,
0.024208731949329376,
0.0338561087846756,
-0.04374847561120987,
-0.018535548821091652,
-0.040459044277668,
0.03564998134970665,
-0.06485839933156967,
0.005929579958319664,
0.018000086769461632,
-0.037137482315301895,
0.09403867274522781,
0.037764281034469604,
-0.012162640690803528,
0.04702169820666313,
-0.027717866003513336,
0.03670430928468704,
-0.019318141043186188,
0.0010219123214483261,
-0.025118350982666016,
-0.11068469285964966,
-0.005476815160363913,
0.003110954537987709,
-0.022420145571231842,
0.009514841251075268,
0.06204361468553543,
-0.07239504903554916,
0.08341941237449646,
0.044759903103113174,
-0.03420250117778778,
-0.07157878577709198,
0.03982347249984741,
-0.01725541427731514,
0.03206465765833855,
0.07048261910676956,
-0.03289531171321869,
0.0552816316485405,
-0.09907940030097961,
-0.02660539001226425,
0.004577276296913624,
-0.001672506332397461,
-0.010543622076511383,
-0.05596885830163956,
-0.0039858389645814896,
0.008504471741616726,
0.17301250994205475,
-0.022250927984714508,
0.034158993512392044,
0.012900149449706078,
0.0053197890520095825,
0.04974136874079704,
-0.012737097218632698,
0.07595141232013702,
-0.003569399006664753,
-0.02634517103433609,
-0.018556203693151474,
0.03637631982564926,
0.0053546372801065445,
0.0014529433101415634,
0.14111113548278809,
0.04682004079222679,
0.09000973403453827,
0.07707534730434418,
0.01618853025138378,
0.019041264429688454,
-0.13259653747081757,
-0.08880066126585007,
0.004666768945753574,
0.054368872195482254,
-0.018057405948638916,
0.024134937673807144,
0.09136098623275757,
-0.08936896175146103,
0.07073863595724106,
0.0522889718413353,
-0.04903904348611832,
-0.12942807376384735,
-0.19544079899787903,
-0.027392935007810593,
-0.02858819253742695,
-0.01216756459325552,
-0.08892066776752472,
0.018218442797660828,
0.08882954716682434,
0.022228512912988663,
-0.010243700817227364,
0.08978678286075592,
-0.10719085484743118,
-0.031971897929906845,
0.04285488277673721,
-0.025515606626868248,
0.01600497215986252,
0.049348119646310806,
0.02109317108988762,
-0.003498917445540428,
0.047745753079652786,
0.04357133433222771,
0.046683043241500854,
0.027968183159828186,
0.0494624525308609,
-0.026204118505120277,
-0.07483977824449539,
-0.03256571292877197,
-0.004589521326124668,
0.05406297743320465,
0.13156183063983917,
0.025771846994757652,
-0.06900300085544586,
0.0062631587497889996,
0.11098246276378632,
-0.03358239680528641,
-0.053454842418432236,
-0.10791138559579849,
0.2390347719192505,
0.021519485861063004,
0.0037170748692005873,
-0.004044223576784134,
-0.04756893590092659,
0.0032282453030347824,
0.20823034644126892,
0.22302719950675964,
0.0023224058095365763,
-0.008816415444016457,
0.009665831923484802,
-0.011489816009998322,
0.03712894022464752,
0.14490343630313873,
0.0009321067482233047,
0.2545885741710663,
-0.04775330424308777,
0.04168178513646126,
-0.04090874642133713,
-0.03979512304067612,
-0.10175254940986633,
0.07427577674388885,
-0.013637321069836617,
0.00771520147100091,
-0.03502454608678818,
0.07246410846710205,
-0.03783676400780678,
-0.17504212260246277,
0.001794436015188694,
-0.002205753233283758,
-0.06109122559428215,
0.014779942110180855,
-0.002685306593775749,
0.02121085114777088,
0.0845692902803421,
-0.017384178936481476,
-0.005942075978964567,
0.13177047669887543,
0.019283222034573555,
-0.09926741570234299,
-0.05630093812942505,
0.11491401493549347,
0.013998295180499554,
0.14293980598449707,
0.012631479650735855,
0.08114945888519287,
0.08669418096542358,
0.020833997055888176,
-0.09255728125572205,
0.04232841730117798,
-0.018743721768260002,
-0.025764478370547295,
0.007175253238528967,
0.1113952174782753,
-0.009442690759897232,
0.05676262080669403,
0.02956206351518631,
-0.09243524074554443,
0.0615277923643589,
0.007913947105407715,
-0.03701028600335121,
-0.08225883543491364,
0.08298128098249435,
-0.09164372086524963,
0.15582843124866486,
0.12049055099487305,
-0.015749841928482056,
-0.04435095936059952,
-0.028209581971168518,
0.017096547409892082,
-0.0008672177791595459,
0.05352111905813217,
-0.024908069521188736,
-0.13302455842494965,
0.018184980377554893,
-0.0818561539053917,
0.028353093191981316,
-0.24648943543434143,
-0.09017283469438553,
0.028882987797260284,
-0.017373615875840187,
-0.01735858991742134,
0.050704941153526306,
0.04316457733511925,
0.024037031456828117,
-0.03546149656176567,
0.014445304870605469,
-0.03732036426663399,
0.05947798863053322,
-0.10959552228450775,
-0.09238345175981522
] |
null | null | transformers | # MultiBERTs Seed 2 Checkpoint 1100k (uncased)
Seed 2 intermediate checkpoint 1100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multiberts-seed-2). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
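As a hedged illustration of how these two objectives map onto the `transformers` API, the pretraining heads can be instantiated on top of this checkpoint; if the head weights are not stored in the checkpoint they are freshly initialized, so treat this purely as a shape/API sketch, and note that the sentences are placeholders:
```python
# Sketch: the MLM and NSP heads exposed by BertForPreTraining.
import torch
from transformers import BertTokenizer, BertForPreTraining

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-1100k')
model = BertForPreTraining.from_pretrained('multiberts-seed-2-1100k')

inputs = tokenizer("The sky is [MASK].", "It might rain later.", return_tensors='pt')
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.prediction_logits.shape)        # (1, seq_len, vocab_size): MLM head
print(outputs.seq_relationship_logits.shape)  # (1, 2): NSP (is-next / not-next) head
```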
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
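As a minimal fine-tuning sketch for one such task, sequence classification, a freshly initialized classification head can be attached to this checkpoint; the texts, labels and hyper-parameters below are placeholders, not recommendations:
```python
# Minimal sequence-classification fine-tuning sketch (placeholder data/hyper-parameters).
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-1100k')
model = BertForSequenceClassification.from_pretrained('multiberts-seed-2-1100k', num_labels=2)

batch = tokenizer(["great film", "boring film"], padding=True, return_tensors='pt')
labels = torch.tensor([1, 0])

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
outputs = model(**batch, labels=labels)   # the loss is computed internally
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
```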
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-1100k')
model = BertModel.from_pretrained("multiberts-seed-2-1100k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-2"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-2-1100k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-2",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 2 Checkpoint 1100k (uncased)
Seed 2 intermediate checkpoint 1100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitations and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a short illustrative sketch follows this list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
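An illustrative sketch of this selection and replacement rule (the function and variable names here are invented for illustration; this is not the original training code):
```python
import random

def mask_tokens(token_ids, mask_id, vocab_size, mask_prob=0.15):
    """Select ~15% of tokens, then apply the 80/10/10 replacement rule to them."""
    corrupted, labels = list(token_ids), []
    for i, tok in enumerate(token_ids):
        if random.random() >= mask_prob:
            labels.append(-100)   # not selected: conventionally ignored by the MLM loss
            continue
        labels.append(tok)        # selected: the model must predict the original token
        r = random.random()
        if r < 0.8:
            corrupted[i] = mask_id                        # 80%: replace with [MASK]
        elif r < 0.9:
            corrupted[i] = random.randrange(vocab_size)   # 10%: replace with a random token
        # remaining 10%: leave the token as is
    return corrupted, labels
```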
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
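A sketch of a comparable optimizer and schedule in PyTorch/transformers, assembled from the hyperparameters above (an approximation for illustration, not the original training setup; the checkpoint identifier is assumed):
```python
import torch
from transformers import BertModel, get_linear_schedule_with_warmup

model = BertModel.from_pretrained('multiberts-seed-2-1100k')  # assumed identifier

# Adam with lr 1e-4, betas (0.9, 0.999) and weight decay 0.01, as described above
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4,
                              betas=(0.9, 0.999), weight_decay=0.01)

# 10,000 warmup steps, then linear decay over the two-million-step budget
scheduler = get_linear_schedule_with_warmup(optimizer,
                                            num_warmup_steps=10_000,
                                            num_training_steps=2_000_000)
```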
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 2 Checkpoint 1100k (uncased)\nSeed 2 intermediate checkpoint 1100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 2 Checkpoint 1100k (uncased)\nSeed 2 intermediate checkpoint 1100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 2 Checkpoint 1100k (uncased)\nSeed 2 intermediate checkpoint 1100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08411281555891037,
0.001717127626761794,
-0.0021959084551781416,
0.07049713283777237,
0.0850779339671135,
0.0032783527858555317,
0.11571729928255081,
0.051037926226854324,
-0.02388237603008747,
0.02459716796875,
0.09040868282318115,
0.027934465557336807,
0.043800197541713715,
0.06803438812494278,
0.09515833109617233,
-0.2603468894958496,
0.04804316908121109,
-0.06352468580007553,
0.060631461441516876,
0.0739469826221466,
0.10021704435348511,
-0.07098512351512909,
0.061010342091321945,
0.03749675303697586,
-0.08071627467870712,
-0.018190544098615646,
-0.01419846247881651,
-0.03518272191286087,
0.10117672383785248,
0.06845248490571976,
0.0635654404759407,
-0.0003138892352581024,
0.06176704168319702,
-0.090243861079216,
0.015471594408154488,
0.04528079926967621,
0.0009431312792003155,
0.027052175253629684,
-0.00971018336713314,
0.01827044039964676,
0.1091143935918808,
0.040590595453977585,
0.07751026004552841,
0.032498549669981,
-0.09507529437541962,
-0.1173141598701477,
-0.082954540848732,
0.10830744355916977,
0.05552453547716141,
0.03639591485261917,
-0.004767461679875851,
0.07445850968360901,
-0.025445979088544846,
0.07287856936454773,
0.10847219824790955,
-0.25830140709877014,
-0.007751856464892626,
0.060272932052612305,
0.04463183134794235,
0.04521958529949188,
0.013256064616143703,
0.02451428957283497,
0.006346836686134338,
0.04110061377286911,
0.025451209396123886,
-0.02247607335448265,
0.11024019122123718,
-0.0456320121884346,
-0.15010331571102142,
-0.043094415217638016,
0.11881838738918304,
-0.0077033136039972305,
-0.12565360963344574,
-0.09462594240903854,
-0.03445684164762497,
0.11512935161590576,
-0.0009044650942087173,
-0.01944112777709961,
-0.003965320531278849,
0.012421279214322567,
0.02873009257018566,
-0.09760026633739471,
-0.08665385842323303,
-0.030275028198957443,
-0.040472131222486496,
0.13038678467273712,
0.04700227826833725,
0.053879689425230026,
-0.03662261366844177,
0.08996248245239258,
-0.1215536892414093,
-0.04148373752832413,
-0.05225276201963425,
-0.08126150071620941,
-0.018514391034841537,
0.008945819921791553,
-0.024698205292224884,
-0.08302027732133865,
-0.06248219683766365,
0.11339642852544785,
0.039012450724840164,
0.02628237009048462,
-0.006716923788189888,
0.04370006173849106,
0.07236451655626297,
0.09755438566207886,
-0.03729104623198509,
0.049353744834661484,
0.035587411373853683,
-0.02300884947180748,
0.060970157384872437,
-0.05111397057771683,
-0.1018265038728714,
0.07277371734380722,
-0.0005691004917025566,
0.039362724870443344,
0.028072675690054893,
0.03128967061638832,
-0.013232190161943436,
-0.07025592029094696,
0.16577088832855225,
-0.07821455597877502,
-0.00669581163674593,
-0.016788557171821594,
0.00930916890501976,
0.04323958605527878,
0.036406055092811584,
-0.004399715922772884,
-0.04771587252616882,
-0.005547028966248035,
-0.05499143898487091,
-0.02325809933245182,
-0.055412083864212036,
-0.11934421956539154,
0.0006802771240472794,
-0.038463808596134186,
-0.03382748365402222,
-0.14037275314331055,
-0.2181861698627472,
-0.016756556928157806,
0.06471918523311615,
-0.001818130724132061,
-0.011104194447398186,
0.023082496598362923,
0.017167242243885994,
-0.019891338422894478,
0.009531756862998009,
-0.046918295323848724,
0.0004963213577866554,
-0.004461444914340973,
-0.036958999931812286,
0.05740192532539368,
-0.045538220554590225,
0.021793877705931664,
-0.0697753056883812,
0.02290833368897438,
-0.21165454387664795,
0.08713780343532562,
-0.03111977130174637,
-0.0011181794106960297,
-0.0386248379945755,
-0.04003214091062546,
0.0073989201337099075,
0.04350018501281738,
-0.0067670466378331184,
0.11608366668224335,
-0.13615375757217407,
-0.04777246713638306,
0.18483009934425354,
-0.16169928014278412,
-0.0007356293499469757,
0.1015576496720314,
-0.049130022525787354,
0.06051691249012947,
0.13407334685325623,
0.09808232635259628,
0.08582254499197006,
-0.07509795576334,
0.010824397206306458,
0.06325779110193253,
-0.07027644664049149,
0.05514576658606529,
0.09153898805379868,
-0.02628243714570999,
-0.13528117537498474,
0.027182236313819885,
-0.07470150291919708,
-0.009023083373904228,
-0.0255900789052248,
-0.021823201328516006,
0.005667056888341904,
-0.03784080967307091,
0.03148110583424568,
0.004798357840627432,
0.01809542439877987,
-0.03884484991431236,
-0.08387143909931183,
0.030131308361887932,
0.07621759921312332,
-0.06944304704666138,
0.0459601990878582,
-0.07089867442846298,
0.05954810231924057,
-0.077849842607975,
-0.006544720381498337,
-0.16538205742835999,
-0.021256260573863983,
0.04659882187843323,
-0.050770606845617294,
0.048006683588027954,
0.08931005746126175,
0.0028305561281740665,
0.12342952191829681,
-0.03956685960292816,
0.00402125995606184,
-0.009750951081514359,
-0.010606961324810982,
-0.053811416029930115,
-0.12015055119991302,
-0.08073963969945908,
-0.06975653767585754,
0.0979931652545929,
-0.07365818321704865,
0.029157422482967377,
-0.06980083882808685,
-0.021240727975964546,
-0.006554819643497467,
-0.058430127799510956,
-0.008157867006957531,
0.007236502133309841,
-0.030008893460035324,
-0.04647371172904968,
0.047974541783332825,
0.05162232369184494,
-0.05976252257823944,
0.08103640377521515,
-0.10426711291074753,
-0.06562543660402298,
0.05422896146774292,
0.018564123660326004,
-0.07830899953842163,
0.08950122445821762,
-0.019436148926615715,
-0.014292781241238117,
-0.06077592447400093,
-0.046094056218862534,
0.19466224312782288,
-0.022321052849292755,
0.1009543240070343,
-0.09150318056344986,
0.0014460959937423468,
0.02751455269753933,
-0.04821550473570824,
-0.01568857952952385,
0.05683112144470215,
0.05066066235303879,
-0.1920762062072754,
0.013780973851680756,
0.05059757083654404,
0.07577010989189148,
0.11041353642940521,
0.029093610122799873,
-0.026325024664402008,
-0.04761938750743866,
-0.01145520992577076,
0.006282147951424122,
0.055953048169612885,
-0.02743876725435257,
-0.007366144564002752,
0.03213033825159073,
0.0582660473883152,
0.017129376530647278,
-0.08454929292201996,
0.03713817149400711,
0.06657272577285767,
-0.016725953668355942,
-0.04411457106471062,
-0.026802711188793182,
-0.05801936984062195,
0.0646497905254364,
0.052478086203336716,
0.0400102473795414,
0.024317393079400063,
-0.014768657274544239,
-0.13775810599327087,
0.18503370881080627,
-0.1140599399805069,
-0.2516593039035797,
-0.11045970022678375,
-0.05711221694946289,
-0.017709137871861458,
0.04277427867054939,
0.058930881321430206,
-0.025683492422103882,
-0.04161928966641426,
-0.11615163087844849,
0.06403655558824539,
-0.06114267557859421,
-0.028990497812628746,
-0.009045341983437538,
-0.05084340274333954,
-0.02163754031062126,
-0.12821093201637268,
-0.011849584057927132,
-0.03010634332895279,
-0.07628288865089417,
0.00448489747941494,
-0.034701816737651825,
0.02472248673439026,
0.13889917731285095,
0.03293106332421303,
-0.020686127245426178,
-0.017548520117998123,
0.18387961387634277,
0.010790850967168808,
0.06332434713840485,
0.11113858222961426,
-0.033697906881570816,
0.057687632739543915,
0.049025408923625946,
0.022239860147237778,
-0.04604494944214821,
0.013804558664560318,
-0.019132597371935844,
-0.12109972536563873,
-0.17538507282733917,
-0.07107260823249817,
-0.0016587122809141874,
0.008021746762096882,
0.02008778601884842,
0.03468964993953705,
0.013444827869534492,
0.041401080787181854,
-0.030961260199546814,
0.03016248717904091,
-0.012853879481554031,
0.08013884723186493,
0.021398339420557022,
-0.07428768277168274,
0.08886457979679108,
-0.06246215105056763,
0.017156023532152176,
0.11046028882265091,
-0.05845972150564194,
0.19226232171058655,
0.021359972655773163,
0.05530749261379242,
0.09942740947008133,
0.01939357817173004,
0.053816646337509155,
0.09042233228683472,
-0.04579997807741165,
0.008911293931305408,
-0.06170937418937683,
-0.05113083869218826,
-0.0351560078561306,
0.05246580019593239,
0.030689893290400505,
0.01966892182826996,
-0.11834702640771866,
0.02391626685857773,
-0.0016446283552795649,
0.13530457019805908,
0.05063611641526222,
-0.11704061925411224,
-0.12172488123178482,
0.03652898594737053,
-0.04702454060316086,
-0.06516791880130768,
0.028190853074193,
0.06563841551542282,
-0.1534174382686615,
0.04385802894830704,
-0.005848774686455727,
0.06463959068059921,
-0.09239974617958069,
0.012900510802865028,
-0.048053741455078125,
-0.0048530325293540955,
0.006025582551956177,
0.07012998312711716,
-0.12870007753372192,
0.10925684869289398,
0.020976252853870392,
0.0502086840569973,
-0.08091241866350174,
0.017181314527988434,
-0.008796058595180511,
0.10465583950281143,
0.1177574023604393,
0.04509691894054413,
-0.049965083599090576,
-0.016014300286769867,
-0.04957426339387894,
0.019321685656905174,
0.05816633254289627,
-0.07832664251327515,
0.06120874732732773,
0.007897569797933102,
0.008550718426704407,
-0.022917376831173897,
0.01644521951675415,
-0.1312812864780426,
-0.12066496908664703,
0.061626676470041275,
-0.07893750071525574,
-0.10205253958702087,
-0.05779598653316498,
-0.06329638510942459,
-0.0474434420466423,
0.20850232243537903,
-0.1157679483294487,
-0.0893174484372139,
-0.09503431618213654,
-0.013198725879192352,
0.04560592398047447,
-0.06838097423315048,
0.047291889786720276,
-0.035872988402843475,
0.0927920788526535,
-0.04982808232307434,
-0.10652302205562592,
0.036076102405786514,
-0.11045755445957184,
-0.11635281145572662,
-0.04652462899684906,
0.10619549453258514,
0.11483744531869888,
0.03902391344308853,
0.012982750311493874,
0.01239796169102192,
0.001408131793141365,
-0.11570850759744644,
0.013876035809516907,
0.136613130569458,
-0.0020967628806829453,
0.07296163588762283,
-0.0616237074136734,
0.02297523245215416,
-0.016508027911186218,
-0.0016698315739631653,
0.1343577802181244,
0.19013114273548126,
-0.06595458090305328,
0.1772051304578781,
0.20008188486099243,
-0.10820406675338745,
-0.19399505853652954,
-0.05519253388047218,
0.0012763533741235733,
0.04627937823534012,
0.051138736307621,
-0.18162624537944794,
0.08950468897819519,
0.03443283587694168,
-0.0352330319583416,
0.021754980087280273,
-0.23518311977386475,
-0.11234551668167114,
0.08927005529403687,
0.05484198033809662,
0.186384916305542,
-0.0776275023818016,
-0.040333092212677,
-0.01664160005748272,
-0.04463554173707962,
0.0462377555668354,
-0.038116779178380966,
0.08949575573205948,
0.008026499301195145,
-0.029333509504795074,
0.0039050262421369553,
-0.0310857892036438,
0.09743751585483551,
0.039968714118003845,
0.020538806915283203,
-0.07171846181154251,
-0.014058547094464302,
0.11651500314474106,
-0.04045961797237396,
0.09937570244073868,
0.04670756682753563,
0.0749419778585434,
-0.09755465388298035,
-0.058820366859436035,
-0.07327862083911896,
0.044107913970947266,
-0.040481600910425186,
-0.05575547739863396,
-0.0649748370051384,
0.05675638094544411,
0.039607174694538116,
0.011202440597116947,
-0.00034196116030216217,
-0.033786747604608536,
0.0413149818778038,
0.08910319954156876,
0.08254195004701614,
-0.026332978159189224,
-0.0658654123544693,
-0.05164329707622528,
-0.048940304666757584,
0.06777060031890869,
-0.09529894590377808,
0.022149736061692238,
0.026829656213521957,
0.012703060172498226,
0.08827152103185654,
0.032862402498722076,
-0.13877730071544647,
0.015415126457810402,
0.03401588276028633,
-0.12169015407562256,
-0.108954057097435,
-0.018726732581853867,
0.034047387540340424,
-0.038713011890649796,
0.054033610969781876,
0.14680065214633942,
-0.03641693294048309,
-0.03337166830897331,
-0.04849248379468918,
0.03924815356731415,
-0.022590558975934982,
0.04800499975681305,
0.06358702480792999,
0.030200889334082603,
-0.07081090658903122,
0.0776960626244545,
0.03993070125579834,
-0.03686323016881943,
0.042752888053655624,
0.040015388280153275,
-0.09204988181591034,
-0.07747580111026764,
-0.05926842242479324,
0.09017893671989441,
-0.01854630373418331,
-0.04581994190812111,
0.0012581907212734222,
-0.07864166796207428,
0.0660344734787941,
0.07806634157896042,
0.047365643084049225,
0.03877708315849304,
-0.08789455890655518,
0.017062636092305183,
-0.05319472402334213,
0.03695792332291603,
-0.02646906115114689,
-0.0046028271317481995,
-0.0566583052277565,
0.07125451415777206,
0.06226757913827896,
0.09838200360536575,
-0.03428119421005249,
-0.07395510375499725,
-0.07978565245866776,
-0.013303060084581375,
-0.06090521067380905,
-0.028849836438894272,
-0.07553672790527344,
-0.007555133197456598,
0.002193141495808959,
-0.0061063505709171295,
0.021894758567214012,
0.035737767815589905,
-0.044223055243492126,
-0.017872048541903496,
-0.03918173909187317,
0.03662567213177681,
-0.06200246512889862,
0.0043472424149513245,
0.01858529821038246,
-0.0360785610973835,
0.09415845572948456,
0.037229571491479874,
-0.011327950283885002,
0.04738272726535797,
-0.02668079361319542,
0.035251252353191376,
-0.019786318764090538,
0.0021578259766101837,
-0.023909054696559906,
-0.10966246575117111,
-0.004474173299968243,
0.0035827811807394028,
-0.02182065136730671,
0.009287508204579353,
0.06576281785964966,
-0.0731247290968895,
0.08430258929729462,
0.04486818239092827,
-0.03203170374035835,
-0.07158318161964417,
0.03881207853555679,
-0.01784982718527317,
0.03045710362493992,
0.07036212086677551,
-0.03322814032435417,
0.0527714267373085,
-0.09862491488456726,
-0.027002710849046707,
0.004095397889614105,
-0.0015668515115976334,
-0.006824379786849022,
-0.05295415222644806,
-0.005038292147219181,
0.008216068148612976,
0.17145396769046783,
-0.023509114980697632,
0.031536925584077835,
0.014434502460062504,
0.005605136975646019,
0.042486172169446945,
-0.014307459816336632,
0.0748516172170639,
-0.007732137106359005,
-0.02606254443526268,
-0.01809684932231903,
0.03521604835987091,
0.005712449550628662,
0.0024818386882543564,
0.1440662145614624,
0.04481583088636398,
0.09531119465827942,
0.07841247320175171,
0.016520658507943153,
0.01768689788877964,
-0.1364630162715912,
-0.09471076726913452,
0.0044369446113705635,
0.05547970533370972,
-0.016667312011122704,
0.018302403390407562,
0.09269516915082932,
-0.08843862265348434,
0.06861822307109833,
0.05112728476524353,
-0.04869110509753227,
-0.12906953692436218,
-0.1934746652841568,
-0.025875097140669823,
-0.027639856562018394,
-0.010684034787118435,
-0.08844052255153656,
0.018706709146499634,
0.09213624149560928,
0.021660448983311653,
-0.009655098430812359,
0.0923411026597023,
-0.11025799810886383,
-0.03187651187181473,
0.04347996786236763,
-0.025553010404109955,
0.017004188150167465,
0.04905463755130768,
0.020601384341716766,
-0.004829728975892067,
0.04613451659679413,
0.0418505035340786,
0.04534479230642319,
0.026303082704544067,
0.04973764717578888,
-0.02324373461306095,
-0.07413184642791748,
-0.03392533212900162,
-0.004641170613467693,
0.05523008853197098,
0.13326075673103333,
0.02608250267803669,
-0.06629083305597305,
0.0076988255605101585,
0.1101096123456955,
-0.03463992848992348,
-0.05306059122085571,
-0.10874481499195099,
0.24123027920722961,
0.023383617401123047,
0.003265893552452326,
-0.0023357942700386047,
-0.047977644950151443,
0.0035165082663297653,
0.20423658192157745,
0.21913227438926697,
0.004929055459797382,
-0.00906312558799982,
0.010696714743971825,
-0.011449895799160004,
0.03453725203871727,
0.14481453597545624,
0.003879385069012642,
0.2511478662490845,
-0.0474473312497139,
0.03744236379861832,
-0.040905099362134933,
-0.038038477301597595,
-0.09979063272476196,
0.07191836088895798,
-0.01569049432873726,
0.008912253193557262,
-0.03387933224439621,
0.07155647873878479,
-0.03743697330355644,
-0.17760062217712402,
-0.00022529251873493195,
-0.0029517062939703465,
-0.06311288475990295,
0.015180492773652077,
0.002843424677848816,
0.02134666219353676,
0.08337225764989853,
-0.017565689980983734,
-0.005858650431036949,
0.13250601291656494,
0.019882306456565857,
-0.09919290989637375,
-0.05715610831975937,
0.11311303079128265,
0.009142032824456692,
0.14688807725906372,
0.012817205861210823,
0.08073431253433228,
0.08602838218212128,
0.021065903827548027,
-0.09553399682044983,
0.042118679732084274,
-0.018991295248270035,
-0.02762412652373314,
0.006430490408092737,
0.11002734303474426,
-0.011057103052735329,
0.05831115320324898,
0.02899783104658127,
-0.09110689163208008,
0.06003492325544357,
0.01377539336681366,
-0.03571867570281029,
-0.08272234350442886,
0.08232597261667252,
-0.0914185494184494,
0.15674681961536407,
0.11946387588977814,
-0.015298069454729557,
-0.044622503221035004,
-0.02829025685787201,
0.016382461413741112,
-0.0009717163629829884,
0.05904482305049896,
-0.02414868026971817,
-0.13291245698928833,
0.018464596942067146,
-0.08458013087511063,
0.030783647671341896,
-0.24508368968963623,
-0.0886911004781723,
0.02716093324124813,
-0.02035224437713623,
-0.01980719342827797,
0.050438545644283295,
0.04647305607795715,
0.025510024279356003,
-0.035617709159851074,
0.010607205331325531,
-0.038257401436567307,
0.059065334498882294,
-0.11225831508636475,
-0.09474741667509079
] |
null | null | transformers | # MultiBERTs Seed 2 Checkpoint 1200k (uncased)
Seed 2 intermediate checkpoint 1200k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multiberts-seed-2). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# Load the WordPiece tokenizer and the pretrained checkpoint
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-1200k')
model = BertModel.from_pretrained("multiberts-seed-2-1200k")

# Tokenize the input and run a forward pass to get the features
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
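The returned `output` exposes the extracted features directly through the standard `BertModel` output attributes:
```python
token_features = output.last_hidden_state  # shape (batch_size, sequence_length, 768)
sentence_features = output.pooler_output   # pooled [CLS]-based vector, shape (batch_size, 768)
```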
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-2"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-2-1200k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-2",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 2 Checkpoint 1200k (uncased)
Seed 2 intermediate checkpoint 1200k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitations and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 2 Checkpoint 1200k (uncased)\nSeed 2 intermediate checkpoint 1200k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 2 Checkpoint 1200k (uncased)\nSeed 2 intermediate checkpoint 1200k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 2 Checkpoint 1200k (uncased)\nSeed 2 intermediate checkpoint 1200k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08397897332906723,
0.004146774299442768,
-0.0022292158100754023,
0.07057498395442963,
0.08407638221979141,
0.0026897871866822243,
0.11577478051185608,
0.05153008922934532,
-0.03389438986778259,
0.024003686383366585,
0.0903560221195221,
0.02561933919787407,
0.04404367133975029,
0.06321018934249878,
0.09716186672449112,
-0.2613269090652466,
0.04934772849082947,
-0.0636962428689003,
0.05180394649505615,
0.0747896283864975,
0.09982414543628693,
-0.06843329966068268,
0.06074531748890877,
0.037097275257110596,
-0.08403386175632477,
-0.01744549162685871,
-0.01326344907283783,
-0.033925481140613556,
0.1017543375492096,
0.07042953372001648,
0.06274832040071487,
0.0015654824674129486,
0.06130442023277283,
-0.08826668560504913,
0.01612873747944832,
0.0442981943488121,
0.00021135760471224785,
0.02666502632200718,
-0.009696416556835175,
0.017428560182452202,
0.10892777144908905,
0.041808247566223145,
0.0768173411488533,
0.033370789140462875,
-0.09525159001350403,
-0.1104598119854927,
-0.0814494714140892,
0.10919477045536041,
0.05397314950823784,
0.038027457892894745,
-0.004040500149130821,
0.0739101693034172,
-0.028560105711221695,
0.07196232676506042,
0.10980342328548431,
-0.25521692633628845,
-0.009075695648789406,
0.06624491512775421,
0.04253733158111572,
0.04636967182159424,
0.013555818237364292,
0.023990333080291748,
0.005638178437948227,
0.04168394207954407,
0.02405809611082077,
-0.022761672735214233,
0.11014844477176666,
-0.04592670872807503,
-0.14983972907066345,
-0.04336702078580856,
0.11312136799097061,
-0.007258521392941475,
-0.12538117170333862,
-0.09300494194030762,
-0.03425592929124832,
0.11640605330467224,
-0.0007636360824108124,
-0.018198998644948006,
-0.004698502831161022,
0.012236448004841805,
0.0239882692694664,
-0.09805308282375336,
-0.08504507690668106,
-0.03189676254987717,
-0.04033804312348366,
0.13136377930641174,
0.04633334279060364,
0.053480520844459534,
-0.037348806858062744,
0.08926022797822952,
-0.12353294342756271,
-0.03930755704641342,
-0.05432303249835968,
-0.08230909705162048,
-0.019148165360093117,
0.010487332008779049,
-0.026672907173633575,
-0.08193500339984894,
-0.06137667968869209,
0.11646532267332077,
0.03370269015431404,
0.027238331735134125,
-0.004819110967218876,
0.04315228760242462,
0.0699431300163269,
0.0947943776845932,
-0.038275714963674545,
0.05435323715209961,
0.03439956158399582,
-0.0223168283700943,
0.058562397956848145,
-0.05000104755163193,
-0.10259701311588287,
0.07185345888137817,
-0.0008216593414545059,
0.037714771926403046,
0.026970386505126953,
0.030133143067359924,
-0.011477794498205185,
-0.07124341279268265,
0.16472959518432617,
-0.07712729275226593,
-0.008417326956987381,
-0.017311276867985725,
0.009792624041438103,
0.043283648788928986,
0.035366594791412354,
-0.005480717867612839,
-0.04682884365320206,
-0.004918190650641918,
-0.05454272776842117,
-0.024383941665291786,
-0.055931687355041504,
-0.11644520610570908,
0.0010155467316508293,
-0.04190649092197418,
-0.032957516610622406,
-0.14181233942508698,
-0.2169298529624939,
-0.01749713160097599,
0.06512464582920074,
-0.003142213448882103,
-0.013269316405057907,
0.024777280166745186,
0.015897290781140327,
-0.020834654569625854,
0.009458214044570923,
-0.04799843579530716,
0.0016959849745035172,
-0.004336440935730934,
-0.036137573421001434,
0.05779542773962021,
-0.04614340141415596,
0.022357208654284477,
-0.0695900022983551,
0.023070091381669044,
-0.21165412664413452,
0.08938832581043243,
-0.030992496758699417,
-0.0015764851123094559,
-0.03803722932934761,
-0.04056617617607117,
0.0038687661290168762,
0.04439323768019676,
-0.0064792814664542675,
0.11572268605232239,
-0.1389041543006897,
-0.04736683890223503,
0.18866415321826935,
-0.1614232361316681,
-0.000568687915802002,
0.09946496039628983,
-0.04998928681015968,
0.06032905355095863,
0.13264676928520203,
0.09898118674755096,
0.08759729564189911,
-0.07100226730108261,
0.0119329197332263,
0.06311722844839096,
-0.06943494081497192,
0.05274262651801109,
0.09085044264793396,
-0.026359137147665024,
-0.1361694037914276,
0.026797063648700714,
-0.07520494610071182,
-0.008167356252670288,
-0.024905048310756683,
-0.021503938362002373,
0.006114501506090164,
-0.0398453064262867,
0.029200006276369095,
0.004917871206998825,
0.01902737095952034,
-0.03712770715355873,
-0.08253152668476105,
0.03508637100458145,
0.07714930176734924,
-0.06960900872945786,
0.045509371906518936,
-0.0697455033659935,
0.06033935025334358,
-0.07608074694871902,
-0.006255770102143288,
-0.16950532793998718,
-0.021969452500343323,
0.04438602179288864,
-0.054061777889728546,
0.05059356614947319,
0.08974155783653259,
0.003852291963994503,
0.12249267101287842,
-0.040942586958408356,
0.005337377078831196,
-0.009664678946137428,
-0.010745882987976074,
-0.051811061799526215,
-0.11942312866449356,
-0.07860203087329865,
-0.06904453039169312,
0.09940559417009354,
-0.07059179246425629,
0.02991318702697754,
-0.06942708045244217,
-0.020515797659754753,
-0.007810238748788834,
-0.05898333713412285,
-0.007513848133385181,
0.007701357826590538,
-0.029510846361517906,
-0.04689973592758179,
0.04736669361591339,
0.05241522192955017,
-0.060433294624090195,
0.08011002093553543,
-0.10311214625835419,
-0.0676097422838211,
0.054008886218070984,
0.021376827731728554,
-0.08013726770877838,
0.08989131450653076,
-0.01954067312180996,
-0.013330313377082348,
-0.061989497393369675,
-0.043835218995809555,
0.192813903093338,
-0.022758830338716507,
0.10034674406051636,
-0.09307190775871277,
0.0034074780996888876,
0.027771372348070145,
-0.04820571094751358,
-0.014597522094845772,
0.05818473547697067,
0.05523533746600151,
-0.18731585144996643,
0.014988575130701065,
0.04938594251871109,
0.07391184568405151,
0.10884925723075867,
0.027546782046556473,
-0.024796338751912117,
-0.047134000808000565,
-0.010855454951524734,
0.007278654258698225,
0.05560213327407837,
-0.018714111298322678,
-0.005550278816372156,
0.031190669164061546,
0.059854812920093536,
0.017677057534456253,
-0.08452524989843369,
0.03559572994709015,
0.06742030382156372,
-0.017078418284654617,
-0.0417746901512146,
-0.026397235691547394,
-0.05842632055282593,
0.06431758403778076,
0.0543837733566761,
0.038503579795360565,
0.0254917461425066,
-0.013843568973243237,
-0.13647982478141785,
0.18716669082641602,
-0.1130729615688324,
-0.2527610659599304,
-0.11041540652513504,
-0.05517099052667618,
-0.021088656038045883,
0.041927505284547806,
0.058307044208049774,
-0.027377665042877197,
-0.03943541646003723,
-0.11533419787883759,
0.062189675867557526,
-0.06151110678911209,
-0.02882031910121441,
-0.010636257007718086,
-0.04988881200551987,
-0.018155496567487717,
-0.1273566484451294,
-0.011257313191890717,
-0.030160458758473396,
-0.07483531534671783,
0.005203471519052982,
-0.03648114576935768,
0.026212435215711594,
0.13854891061782837,
0.031161624938249588,
-0.020403608679771423,
-0.017072655260562897,
0.18869955837726593,
0.010661235079169273,
0.06175156682729721,
0.11130538582801819,
-0.0322919636964798,
0.057279109954833984,
0.049497805535793304,
0.023487433791160583,
-0.045628622174263,
0.01330061536282301,
-0.01671864092350006,
-0.12244332581758499,
-0.17506521940231323,
-0.06963200867176056,
-0.0010203646961599588,
0.006512850988656282,
0.01950003392994404,
0.035634808242321014,
0.020033303648233414,
0.041330598294734955,
-0.029899906367063522,
0.028599631041288376,
-0.013266988098621368,
0.07992566376924515,
0.018660757690668106,
-0.07137854397296906,
0.08982497453689575,
-0.06161234900355339,
0.016952000558376312,
0.11036592721939087,
-0.06002390757203102,
0.19237522780895233,
0.021765749901533127,
0.05401879921555519,
0.09859837591648102,
0.020536791533231735,
0.056475307792425156,
0.08891670405864716,
-0.046753741800785065,
0.008878936059772968,
-0.061112262308597565,
-0.05111900344491005,
-0.03656885772943497,
0.05214619264006615,
0.028729233890771866,
0.018932461738586426,
-0.11727097630500793,
0.024807047098875046,
-0.0032069943845272064,
0.13592426478862762,
0.04961977154016495,
-0.11619696766138077,
-0.1222819834947586,
0.03577118739485741,
-0.04597821831703186,
-0.0641019344329834,
0.030170071870088577,
0.06251693516969681,
-0.1530054360628128,
0.0421963632106781,
-0.0060112085193395615,
0.06684930622577667,
-0.09193610399961472,
0.013569514267146587,
-0.047737136483192444,
-0.004961581900715828,
0.005810057744383812,
0.07007434964179993,
-0.1280738115310669,
0.10956916958093643,
0.020292533561587334,
0.048504460602998734,
-0.07821115851402283,
0.01625809818506241,
-0.009308161213994026,
0.10702278465032578,
0.1165565699338913,
0.04616996645927429,
-0.05221898853778839,
-0.013602226972579956,
-0.04977474361658096,
0.01854552887380123,
0.05813603103160858,
-0.07876994460821152,
0.06018245965242386,
0.008735128678381443,
0.008147177286446095,
-0.023208126425743103,
0.01200026459991932,
-0.1310354769229889,
-0.12066814303398132,
0.0614241361618042,
-0.08152617514133453,
-0.09745503216981888,
-0.05746470019221306,
-0.06322824954986572,
-0.04550231248140335,
0.20715782046318054,
-0.11405648291110992,
-0.08939588069915771,
-0.096128448843956,
-0.01206129789352417,
0.04601972922682762,
-0.06838515400886536,
0.04657614976167679,
-0.03811463341116905,
0.09181952476501465,
-0.049393683671951294,
-0.10896377265453339,
0.03540351614356041,
-0.11020033061504364,
-0.11561858654022217,
-0.044390968978405,
0.1048644483089447,
0.11520949751138687,
0.039677705615758896,
0.011655322276055813,
0.011989405378699303,
0.0025461018085479736,
-0.11615636944770813,
0.012721365317702293,
0.13360054790973663,
-0.003647714853286743,
0.07128290832042694,
-0.05904263257980347,
0.02727929875254631,
-0.01633504219353199,
0.00019218958914279938,
0.1330774873495102,
0.18822988867759705,
-0.0668458417057991,
0.17630916833877563,
0.20210182666778564,
-0.10658019036054611,
-0.19372683763504028,
-0.05423811450600624,
0.0015541454777121544,
0.04606670141220093,
0.051457203924655914,
-0.18278244137763977,
0.08792892098426819,
0.033852916210889816,
-0.03468827158212662,
0.013941798359155655,
-0.23560664057731628,
-0.11157422512769699,
0.08665068447589874,
0.0532362125813961,
0.18893122673034668,
-0.07863866537809372,
-0.04106569662690163,
-0.01583368517458439,
-0.04284307360649109,
0.04764263331890106,
-0.035729262977838516,
0.08905923366546631,
0.007211159914731979,
-0.027932800352573395,
0.0036315303295850754,
-0.031107936054468155,
0.09723368287086487,
0.03976133465766907,
0.01949554681777954,
-0.07206713408231735,
-0.010782012715935707,
0.1103302389383316,
-0.04056981950998306,
0.0972403958439827,
0.04690800607204437,
0.07669635117053986,
-0.09763982146978378,
-0.05828298628330231,
-0.07454235106706619,
0.04223717004060745,
-0.04122874140739441,
-0.05555199831724167,
-0.06466926634311676,
0.05938918888568878,
0.0388716459274292,
0.011682952754199505,
-0.00011322274804115295,
-0.035186007618904114,
0.04260540008544922,
0.08820367604494095,
0.08099759370088577,
-0.026185348629951477,
-0.07079704105854034,
-0.05014137551188469,
-0.04785635322332382,
0.06753437221050262,
-0.0997861996293068,
0.0215598177164793,
0.027133285999298096,
0.01205909438431263,
0.08797257393598557,
0.033310696482658386,
-0.13947615027427673,
0.013888223096728325,
0.033599477261304855,
-0.12238913774490356,
-0.1092853993177414,
-0.01903989538550377,
0.03430144488811493,
-0.03841326758265495,
0.05433117598295212,
0.14715568721294403,
-0.03769773617386818,
-0.032883960753679276,
-0.047012392431497574,
0.039059218019247055,
-0.022523432970046997,
0.04638427123427391,
0.06584469228982925,
0.030026499181985855,
-0.07028801739215851,
0.07502120733261108,
0.03994080051779747,
-0.03785782307386398,
0.04160735756158829,
0.04082886129617691,
-0.09329615533351898,
-0.07683087885379791,
-0.055020157247781754,
0.08523984998464584,
-0.02453027479350567,
-0.04678083211183548,
0.0011402089148759842,
-0.07982960343360901,
0.06669825315475464,
0.0801314264535904,
0.04796397686004639,
0.03756652772426605,
-0.08669769763946533,
0.015963450074195862,
-0.05312461405992508,
0.035241007804870605,
-0.02759195677936077,
-0.003535887226462364,
-0.054979510605335236,
0.06632596254348755,
0.06315115839242935,
0.09909740835428238,
-0.03363268822431564,
-0.07406394928693771,
-0.0820581242442131,
-0.01331880409270525,
-0.0685615986585617,
-0.03104548156261444,
-0.07473929226398468,
-0.006919281557202339,
0.00154085922986269,
-0.005173776298761368,
0.02241276577115059,
0.035016536712646484,
-0.044942207634449005,
-0.017675455659627914,
-0.04030027240514755,
0.03473357483744621,
-0.061354171484708786,
0.006922588683664799,
0.01820782944560051,
-0.03504287078976631,
0.09310001879930496,
0.03716828674077988,
-0.010262418538331985,
0.04655676335096359,
-0.02021344006061554,
0.03492768853902817,
-0.02171299420297146,
0.002409514971077442,
-0.02335161343216896,
-0.1080399677157402,
-0.004026348702609539,
0.0036489423364400864,
-0.02181631699204445,
0.009277667850255966,
0.06200475990772247,
-0.07249564677476883,
0.08914540708065033,
0.04481019824743271,
-0.03158221393823624,
-0.07202007621526718,
0.03885280713438988,
-0.016481393948197365,
0.03152843192219734,
0.06898991763591766,
-0.03227396309375763,
0.05342990905046463,
-0.09731127321720123,
-0.027626968920230865,
0.004360090009868145,
-0.0011255480349063873,
-0.009073793888092041,
-0.05416989326477051,
-0.004795289598405361,
0.00903210137039423,
0.16848112642765045,
-0.023025546222925186,
0.034866683185100555,
0.014471679925918579,
0.005534721538424492,
0.046812478452920914,
-0.014290453866124153,
0.07000260055065155,
-0.0051200976595282555,
-0.025769930332899094,
-0.014573288150131702,
0.0354851596057415,
0.003866322338581085,
0.0011297445744276047,
0.14564105868339539,
0.045519959181547165,
0.09192077070474625,
0.0765044316649437,
0.018980607390403748,
0.020226210355758667,
-0.13292819261550903,
-0.09236546605825424,
0.003963799215853214,
0.05342097207903862,
-0.018035979941487312,
0.016645006835460663,
0.09262794256210327,
-0.08595740795135498,
0.06965798139572144,
0.04971218854188919,
-0.04912503808736801,
-0.12943798303604126,
-0.19055286049842834,
-0.025850998237729073,
-0.029309356585144997,
-0.011307226493954659,
-0.0883999615907669,
0.016819743439555168,
0.09640193730592728,
0.022581787779927254,
-0.008354410529136658,
0.09288448840379715,
-0.10827290266752243,
-0.031239967793226242,
0.04149935394525528,
-0.025731030851602554,
0.01908913627266884,
0.05036316066980362,
0.021222800016403198,
-0.003831375390291214,
0.04705951735377312,
0.04137803986668587,
0.04625672474503517,
0.028391696512699127,
0.05031336098909378,
-0.023426085710525513,
-0.07299859076738358,
-0.03348737955093384,
-0.004070034250617027,
0.05439329147338867,
0.1347668170928955,
0.025072090327739716,
-0.06574185937643051,
0.00726346904411912,
0.1104915589094162,
-0.03486631438136101,
-0.05506604537367821,
-0.10928292572498322,
0.23982398211956024,
0.021755006164312363,
0.004389805719256401,
-0.0035790540277957916,
-0.04836176708340645,
0.0025766342878341675,
0.20892901718616486,
0.22083435952663422,
0.0037275948561728,
-0.009574438445270061,
0.01140300091356039,
-0.011180689558386803,
0.036942847073078156,
0.14525505900382996,
0.004247754812240601,
0.2530197203159332,
-0.04756786674261093,
0.03537990152835846,
-0.041434455662965775,
-0.03813747316598892,
-0.0993349701166153,
0.07185540348291397,
-0.014835940673947334,
0.008989810012280941,
-0.0338294580578804,
0.07307133078575134,
-0.03940200433135033,
-0.1759147047996521,
-0.0003174729645252228,
-0.003564233426004648,
-0.06371012330055237,
0.014809229411184788,
0.0015285778790712357,
0.02013794519007206,
0.08239628374576569,
-0.01848679780960083,
-0.006322873290628195,
0.131329745054245,
0.019768845289945602,
-0.09960341453552246,
-0.05875378102064133,
0.11486468464136124,
0.016345178708434105,
0.1417219340801239,
0.01267293281853199,
0.0803268700838089,
0.08645474910736084,
0.020963046699762344,
-0.09588126838207245,
0.040104176849126816,
-0.018022742122411728,
-0.030140867456793785,
0.0044600460678339005,
0.11013825237751007,
-0.01001960039138794,
0.057872187346220016,
0.02847863733768463,
-0.09071028977632523,
0.05860017240047455,
0.012965366244316101,
-0.034129444509744644,
-0.08381499350070953,
0.08320843428373337,
-0.0909760519862175,
0.15669472515583038,
0.12091141194105148,
-0.015094499103724957,
-0.04386509209871292,
-0.02803225815296173,
0.016428854316473007,
-0.0015846816822886467,
0.05666186660528183,
-0.024531826376914978,
-0.13370782136917114,
0.017741017043590546,
-0.08667434751987457,
0.029873060062527657,
-0.24783068895339966,
-0.08869227766990662,
0.028965212404727936,
-0.018689699470996857,
-0.01805153861641884,
0.05226187780499458,
0.04586033895611763,
0.024891166016459465,
-0.035288117825984955,
0.015055695548653603,
-0.03979772329330444,
0.058613408356904984,
-0.11036349833011627,
-0.09482647478580475
] |
null | null | transformers | # MultiBERTs Seed 2 Checkpoint 120k (uncased)
Seed 2 intermediate checkpoint 120k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multiberts-seed-2). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
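To make the masked-language-modeling objective concrete, the snippet below is a minimal sketch of masked-token prediction with this checkpoint. The short repository id mirrors the one used later in this card and may need to be replaced by the full hub path (`MultiBertGunjanPatrick/multiberts-seed-2-120k`); it also assumes the MLM head is included in the checkpoint, as the card implies.

```python
from transformers import pipeline

# Illustrative only: the short model id is an assumption; substitute the full hub path if needed.
unmasker = pipeline("fill-mask", model="multiberts-seed-2-120k")

for prediction in unmasker("The capital of France is [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 4))
```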
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
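For the sequence-classification use case mentioned above, a minimal fine-tuning setup could look like the sketch below. The short checkpoint id and the two-label head are assumptions for illustration, not part of this card.

```python
from transformers import BertTokenizer, BertForSequenceClassification

# Assumed checkpoint id and label count, for illustration only.
tokenizer = BertTokenizer.from_pretrained("multiberts-seed-2-120k")
model = BertForSequenceClassification.from_pretrained(
    "multiberts-seed-2-120k", num_labels=2
)  # adds a randomly initialised classification head on top of the encoder

inputs = tokenizer("This movie was great!", return_tensors="pt")
outputs = model(**inputs)  # outputs.logits has shape (1, num_labels); fine-tune with your labels
```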
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# Load the tokenizer and encoder weights for this intermediate checkpoint
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-120k')
model = BertModel.from_pretrained("multiberts-seed-2-120k")

# Tokenize the input and run a forward pass to obtain the features
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
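With the default configuration, `output` is a `BaseModelOutputWithPoolingAndCrossAttentions`: the per-token features are in `output.last_hidden_state` (shape `[batch_size, sequence_length, 768]`) and a pooled sentence-level representation is in `output.pooler_output`.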
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
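In practice the tokenizer applies this template automatically when it is given a sentence pair; the short checkpoint id below is an assumption for illustration.

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("multiberts-seed-2-120k")  # assumed id
pair = tokenizer("Sentence A goes here.", "Sentence B goes here.")
print(tokenizer.convert_ids_to_tokens(pair["input_ids"]))
# expected (roughly): ['[CLS]', 'sentence', 'a', 'goes', 'here', '.', '[SEP]', 'sentence', 'b', 'goes', 'here', '.', '[SEP]']
```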
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token, different from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
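Expressed as code, the rule above amounts to the following sketch; the helper name is made up for this card and this is not the original pretraining implementation.

```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15):
    """Illustrative sketch of the BERT-style 80/10/10 masking rule (not the original code)."""
    masked, labels = list(tokens), [None] * len(tokens)
    for i, token in enumerate(tokens):
        if random.random() < mask_prob:            # 15% of tokens are selected
            labels[i] = token                      # the model must predict the original token
            roll = random.random()
            if roll < 0.8:                         # 80%: replace with [MASK]
                masked[i] = "[MASK]"
            elif roll < 0.9:                       # 10%: replace with a random token
                masked[i] = random.choice(vocab)   # (the real procedure excludes the original token)
            # remaining 10%: leave the token unchanged
    return masked, labels
```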
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
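Those hyper-parameters map onto standard PyTorch/Transformers utilities. The sketch below only mirrors the numbers quoted above (it uses `AdamW` for decoupled weight decay) and is not the original TPU training script.

```python
import torch
from transformers import BertConfig, BertForPreTraining, get_linear_schedule_with_warmup

# Freshly initialised BERT-base-like model with MLM and NSP heads, as a stand-in for the TPU setup.
model = BertForPreTraining(BertConfig())

optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01
)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=2_000_000
)
```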
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-2"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-2-120k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-2",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 2 Checkpoint 120k (uncased)
Seed 2 intermediate checkpoint 120k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 2 Checkpoint 120k (uncased)\nSeed 2 intermediate checkpoint 120k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 2 Checkpoint 120k (uncased)\nSeed 2 intermediate checkpoint 120k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 2 Checkpoint 120k (uncased)\nSeed 2 intermediate checkpoint 120k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08350375294685364,
0.005799049511551857,
-0.002311341930180788,
0.06883621960878372,
0.08476868271827698,
0.0022803042083978653,
0.11665099114179611,
0.05150260031223297,
-0.03169092908501625,
0.024342574179172516,
0.09081767499446869,
0.02960117906332016,
0.04245211184024811,
0.06239718198776245,
0.09604780375957489,
-0.2614938020706177,
0.04907757788896561,
-0.06379949301481247,
0.049889519810676575,
0.07545731961727142,
0.1007450670003891,
-0.06858232617378235,
0.0612131804227829,
0.03831423819065094,
-0.08291380852460861,
-0.017712051048874855,
-0.012831425294280052,
-0.03307630121707916,
0.10142698884010315,
0.0701281949877739,
0.06214418262243271,
0.0019912421703338623,
0.06051619350910187,
-0.09009787440299988,
0.01629001460969448,
0.045543260872364044,
-0.0007579270750284195,
0.02689356729388237,
-0.008089257404208183,
0.01595689356327057,
0.11108729243278503,
0.03869378939270973,
0.07598315179347992,
0.034460484981536865,
-0.09531804174184799,
-0.1134161502122879,
-0.08092157542705536,
0.10444480180740356,
0.05149051547050476,
0.03940664604306221,
-0.0052188606932759285,
0.0749237984418869,
-0.02944294735789299,
0.07269059121608734,
0.11108824610710144,
-0.2586868405342102,
-0.008309131488204002,
0.06401925534009933,
0.04406455159187317,
0.042135387659072876,
0.012204580008983612,
0.02609414793550968,
0.004755306988954544,
0.041223663836717606,
0.02568584680557251,
-0.02351941540837288,
0.11710968613624573,
-0.04547061026096344,
-0.1506500542163849,
-0.04322054609656334,
0.11402580142021179,
-0.006414024159312248,
-0.1257125437259674,
-0.09485133737325668,
-0.034182921051979065,
0.12094233930110931,
-0.0018529333174228668,
-0.0173390731215477,
-0.003960768226534128,
0.012597142718732357,
0.02457161620259285,
-0.09756283462047577,
-0.08642161637544632,
-0.030832014977931976,
-0.038466956466436386,
0.1302662193775177,
0.046556226909160614,
0.05297590047121048,
-0.03677049279212952,
0.08708079159259796,
-0.1204436719417572,
-0.040022775530815125,
-0.05359292030334473,
-0.08150845766067505,
-0.019380616024136543,
0.011012044735252857,
-0.028460286557674408,
-0.08453761041164398,
-0.05966384336352348,
0.11641907691955566,
0.033332984894514084,
0.028171900659799576,
-0.0041495077311992645,
0.04295123368501663,
0.07138869166374207,
0.09610594063997269,
-0.03683210164308548,
0.054307565093040466,
0.03417110815644264,
-0.023979395627975464,
0.05869988724589348,
-0.050501808524131775,
-0.10308332741260529,
0.0726349800825119,
0.00018067192286252975,
0.037523843348026276,
0.02565748617053032,
0.031023317947983742,
-0.012389466166496277,
-0.07206420600414276,
0.16471877694129944,
-0.07732231914997101,
-0.007971273735165596,
-0.014887181110680103,
0.010737007483839989,
0.04532758146524429,
0.03487658500671387,
-0.005378776229918003,
-0.04658817499876022,
-0.007591472938656807,
-0.054314710199832916,
-0.02375180274248123,
-0.05554753914475441,
-0.11754529923200607,
0.001462094485759735,
-0.04165375232696533,
-0.03282862529158592,
-0.14107589423656464,
-0.214466854929924,
-0.017626041546463966,
0.06512229144573212,
-0.0027698902413249016,
-0.011261962354183197,
0.02411862276494503,
0.017094558104872704,
-0.020050188526511192,
0.010969678871333599,
-0.04689946398139,
0.001030314713716507,
-0.004252545535564423,
-0.034926027059555054,
0.05745211988687515,
-0.04480574280023575,
0.022607719525694847,
-0.06934848427772522,
0.023023569956421852,
-0.20904850959777832,
0.08912274986505508,
-0.030065495520830154,
-0.004320649430155754,
-0.03918728232383728,
-0.04193415865302086,
0.006049999967217445,
0.04439125210046768,
-0.007876770570874214,
0.1154177188873291,
-0.13807883858680725,
-0.04946526885032654,
0.18833132088184357,
-0.16095374524593353,
0.0010270960628986359,
0.10076514631509781,
-0.05008090287446976,
0.059670284390449524,
0.13221178948879242,
0.0975053608417511,
0.08530204743146896,
-0.0735621377825737,
0.013010391034185886,
0.06311813741922379,
-0.06887175142765045,
0.05660223960876465,
0.09259915351867676,
-0.02545906789600849,
-0.13595442473888397,
0.026989296078681946,
-0.07513800263404846,
-0.00915808230638504,
-0.025795431807637215,
-0.02054576389491558,
0.0064718760550022125,
-0.03893112391233444,
0.03230118751525879,
0.005061441101133823,
0.018958842381834984,
-0.037422921508550644,
-0.08348377794027328,
0.03640446066856384,
0.07672318071126938,
-0.07098320126533508,
0.04455387219786644,
-0.07087437063455582,
0.059751395136117935,
-0.07504168152809143,
-0.00591413676738739,
-0.16945406794548035,
-0.023152463138103485,
0.044304318726062775,
-0.05278436094522476,
0.05175533890724182,
0.0942126214504242,
0.00412629684433341,
0.12372103333473206,
-0.03964155912399292,
0.005399733781814575,
-0.0110679492354393,
-0.011247031390666962,
-0.0511096715927124,
-0.12131662666797638,
-0.08017712831497192,
-0.06909579783678055,
0.10443499684333801,
-0.07426994293928146,
0.028932468965649605,
-0.07118592411279678,
-0.019481975585222244,
-0.006984066218137741,
-0.05896126478910446,
-0.0065450360998511314,
0.006751764565706253,
-0.029599422588944435,
-0.0462459959089756,
0.048643968999385834,
0.05173543468117714,
-0.06083919107913971,
0.08097551017999649,
-0.10442519187927246,
-0.06495292484760284,
0.05387032777070999,
0.016804702579975128,
-0.07911179959774017,
0.09023706614971161,
-0.019340474158525467,
-0.012999942526221275,
-0.06126664578914642,
-0.04302050545811653,
0.1920243501663208,
-0.0223553404211998,
0.10155163705348969,
-0.09270201623439789,
0.0021701352670788765,
0.027766691520810127,
-0.049775123596191406,
-0.015241499058902264,
0.0600416474044323,
0.052524030208587646,
-0.19394329190254211,
0.015068482607603073,
0.052302345633506775,
0.07442692667245865,
0.11148347705602646,
0.027584390714764595,
-0.0258956141769886,
-0.04799925535917282,
-0.010144917294383049,
0.007530734874308109,
0.05451993644237518,
-0.021922927349805832,
-0.005556244868785143,
0.03189845383167267,
0.058445338159799576,
0.016858553513884544,
-0.0839916467666626,
0.03564748540520668,
0.06713926792144775,
-0.01818237081170082,
-0.04255867004394531,
-0.02536972612142563,
-0.058663710951805115,
0.06410883367061615,
0.05291430652141571,
0.03861967846751213,
0.025460347533226013,
-0.014142895117402077,
-0.13702215254306793,
0.18710345029830933,
-0.11399881541728973,
-0.2559414207935333,
-0.10984186828136444,
-0.05439690500497818,
-0.021745160222053528,
0.04249810799956322,
0.05820246785879135,
-0.026569824665784836,
-0.04007220268249512,
-0.1148214116692543,
0.06304855644702911,
-0.06370078027248383,
-0.030043350532650948,
-0.011765087023377419,
-0.048939742147922516,
-0.01717206835746765,
-0.12750375270843506,
-0.011461252346634865,
-0.030252806842327118,
-0.07554145157337189,
0.006183112971484661,
-0.03655907139182091,
0.026061393320560455,
0.13707639276981354,
0.03278573974967003,
-0.02077515982091427,
-0.016791535541415215,
0.19015569984912872,
0.010665198788046837,
0.060348108410835266,
0.11439667642116547,
-0.032440993934869766,
0.05686991289258003,
0.04466619715094566,
0.02349703386425972,
-0.04644143953919411,
0.012831255793571472,
-0.01505968626588583,
-0.12124934792518616,
-0.17503276467323303,
-0.06990256905555725,
-0.0024461261928081512,
0.0051665841601789,
0.019239768385887146,
0.035992011427879333,
0.018315279856324196,
0.039203379303216934,
-0.030637161806225777,
0.028324833139777184,
-0.011996552348136902,
0.07996004819869995,
0.023570086807012558,
-0.07271339744329453,
0.09061121940612793,
-0.06166224181652069,
0.015830975025892258,
0.11019004136323929,
-0.06060991436243057,
0.18766099214553833,
0.024055251851677895,
0.0590803399682045,
0.09933348000049591,
0.020964745432138443,
0.055355604737997055,
0.08804301917552948,
-0.046686772257089615,
0.00831642560660839,
-0.06248590722680092,
-0.05146055668592453,
-0.037877682596445084,
0.051931433379650116,
0.03172573819756508,
0.01653580367565155,
-0.11815095692873001,
0.023646624758839607,
-0.0034062571357935667,
0.13631251454353333,
0.05216928943991661,
-0.11687306314706802,
-0.1226692944765091,
0.03537965193390846,
-0.045624226331710815,
-0.06331650167703629,
0.028642687946558,
0.06132391840219498,
-0.1536664366722107,
0.043848853558301926,
-0.006611984223127365,
0.06710091233253479,
-0.09238643944263458,
0.014423457905650139,
-0.04928244650363922,
-0.003626817837357521,
0.006435270421206951,
0.07201304286718369,
-0.13053032755851746,
0.10645116865634918,
0.02061621844768524,
0.04707927256822586,
-0.07932617515325546,
0.015402748249471188,
-0.01013823039829731,
0.10623331367969513,
0.1156686544418335,
0.04512578248977661,
-0.05430907383561134,
-0.015838604420423508,
-0.047865282744169235,
0.01934952102601528,
0.06128184497356415,
-0.07972452044487,
0.0610235221683979,
0.006234748288989067,
0.006807968020439148,
-0.022539041936397552,
0.01423775777220726,
-0.13184033334255219,
-0.12073209881782532,
0.061786580830812454,
-0.08112102001905441,
-0.09904441982507706,
-0.05711901932954788,
-0.0630262941122055,
-0.04585797339677811,
0.21041569113731384,
-0.11422300338745117,
-0.08798161894083023,
-0.09711454808712006,
-0.01216348260641098,
0.04526299238204956,
-0.06819120049476624,
0.045505769550800323,
-0.038405075669288635,
0.09442448616027832,
-0.0488273948431015,
-0.10995741188526154,
0.035856299102306366,
-0.11074762046337128,
-0.11746110767126083,
-0.04469730705022812,
0.10673585534095764,
0.11536891013383865,
0.039755553007125854,
0.013096622191369534,
0.011551210656762123,
0.0015636458992958069,
-0.1153554767370224,
0.014478443190455437,
0.1336154341697693,
-0.001637633889913559,
0.07133161276578903,
-0.06056295335292816,
0.03054903820157051,
-0.01597583293914795,
0.0008415468037128448,
0.13425102829933167,
0.18847209215164185,
-0.06560229510068893,
0.17765593528747559,
0.19975849986076355,
-0.10562785714864731,
-0.19293718039989471,
-0.05620139092206955,
0.002296055667102337,
0.045684315264225006,
0.05291590839624405,
-0.18491554260253906,
0.08728854358196259,
0.03279109299182892,
-0.03432778641581535,
0.017348721623420715,
-0.23565921187400818,
-0.11054618656635284,
0.08664014935493469,
0.053303953260183334,
0.18890079855918884,
-0.07906769961118698,
-0.04034804552793503,
-0.017326075583696365,
-0.041665613651275635,
0.050671204924583435,
-0.040153004229068756,
0.09087695181369781,
0.008271614089608192,
-0.028075989335775375,
0.0036482037976384163,
-0.03024049662053585,
0.09736261516809464,
0.03807106241583824,
0.019223373383283615,
-0.07093638926744461,
-0.008171943947672844,
0.11161729693412781,
-0.039711613208055496,
0.097819022834301,
0.04352666065096855,
0.076357901096344,
-0.09626436233520508,
-0.05914721265435219,
-0.07473231852054596,
0.04450913518667221,
-0.04123898968100548,
-0.05492117255926132,
-0.06393837928771973,
0.05829351395368576,
0.03728288412094116,
0.011998453177511692,
0.001687999814748764,
-0.03614239767193794,
0.04247928038239479,
0.09217159450054169,
0.08104196190834045,
-0.024469640105962753,
-0.07244870811700821,
-0.05229134112596512,
-0.047346875071525574,
0.06814374774694443,
-0.09639319032430649,
0.022296493873000145,
0.025189649313688278,
0.012226388789713383,
0.08980714529752731,
0.03263263404369354,
-0.14001940190792084,
0.013705500401556492,
0.03217484429478645,
-0.12296450883150101,
-0.11181017011404037,
-0.019036144018173218,
0.032580845057964325,
-0.037995852530002594,
0.05460973456501961,
0.14983993768692017,
-0.03603372722864151,
-0.032527659088373184,
-0.04731994867324829,
0.0381709486246109,
-0.022428663447499275,
0.04645852372050285,
0.06507132947444916,
0.030264457687735558,
-0.07069263607263565,
0.0740697905421257,
0.03881777450442314,
-0.035376064479351044,
0.04223058000206947,
0.03887278214097023,
-0.09278649091720581,
-0.07784406840801239,
-0.05561699718236923,
0.08905602246522903,
-0.02453990839421749,
-0.047243017703294754,
0.00024121440947055817,
-0.0796375423669815,
0.06715156137943268,
0.0786866545677185,
0.04883803054690361,
0.038969025015830994,
-0.08682680875062943,
0.016164544969797134,
-0.052662357687950134,
0.034388113766908646,
-0.02759587951004505,
-0.005282556638121605,
-0.055530793964862823,
0.0663004145026207,
0.06371654570102692,
0.10039490461349487,
-0.034339070320129395,
-0.07547269761562347,
-0.08277980983257294,
-0.013917224481701851,
-0.06650816649198532,
-0.02993328496813774,
-0.073932945728302,
-0.006809817627072334,
0.0017803392838686705,
-0.005556186661124229,
0.023716796189546585,
0.03537055477499962,
-0.044493772089481354,
-0.01819012314081192,
-0.04009190574288368,
0.03623881936073303,
-0.06411680579185486,
0.006217797286808491,
0.018501117825508118,
-0.036357395350933075,
0.09369857609272003,
0.038063570857048035,
-0.011241171509027481,
0.04789291322231293,
-0.024838173761963844,
0.034654371440410614,
-0.02039765752851963,
0.0014286378864198923,
-0.024514872580766678,
-0.10995656251907349,
-0.00460241362452507,
0.0031822286546230316,
-0.022347819060087204,
0.008588315919041634,
0.06228146702051163,
-0.07224500179290771,
0.08776460587978363,
0.04375208169221878,
-0.03261232748627663,
-0.07099749892950058,
0.038670267909765244,
-0.019342899322509766,
0.03182519972324371,
0.07033750414848328,
-0.03251676633954048,
0.05476676672697067,
-0.098149374127388,
-0.027741659432649612,
0.005174259189516306,
-0.0016335416585206985,
-0.010372532531619072,
-0.05453907698392868,
-0.004453343339264393,
0.00795633066445589,
0.16836345195770264,
-0.02271050214767456,
0.03858968988060951,
0.013193967752158642,
0.006835184991359711,
0.04744581878185272,
-0.014138853177428246,
0.07086181640625,
-0.005330601707100868,
-0.024746622890233994,
-0.015940487384796143,
0.035359010100364685,
0.004710754379630089,
0.0024148300290107727,
0.1443067342042923,
0.04688331484794617,
0.0890781506896019,
0.0759584903717041,
0.01790197379887104,
0.02018054574728012,
-0.136758953332901,
-0.08881518244743347,
0.005413448438048363,
0.054484836757183075,
-0.017857151105999947,
0.019378960132598877,
0.09384617209434509,
-0.08750072121620178,
0.06973011791706085,
0.04986466094851494,
-0.04858306422829628,
-0.12989504635334015,
-0.19349712133407593,
-0.027157744392752647,
-0.028754105791449547,
-0.011272698640823364,
-0.08824895322322845,
0.018064692616462708,
0.09323398023843765,
0.022974155843257904,
-0.008994723670184612,
0.09034299850463867,
-0.10659310221672058,
-0.0327453538775444,
0.04169174283742905,
-0.026247985661029816,
0.017305277287960052,
0.051132842898368835,
0.021806776523590088,
-0.002654576674103737,
0.04658888280391693,
0.04256017878651619,
0.045690152794122696,
0.029493160545825958,
0.0509975329041481,
-0.025375887751579285,
-0.07460922747850418,
-0.0330585315823555,
-0.0021703517995774746,
0.054478902369737625,
0.13389188051223755,
0.025198640301823616,
-0.06689029186964035,
0.006443768739700317,
0.11101482808589935,
-0.03345692902803421,
-0.053713150322437286,
-0.10874275863170624,
0.2426370084285736,
0.021251752972602844,
0.004796646535396576,
-0.0035893158055841923,
-0.04753093793988228,
0.003934726119041443,
0.20793157815933228,
0.22176115214824677,
0.0015943526523187757,
-0.0086142523214221,
0.01091170683503151,
-0.010954102501273155,
0.03836890310049057,
0.14477182924747467,
0.003191903233528137,
0.25501295924186707,
-0.04802928864955902,
0.03502105548977852,
-0.04213063418865204,
-0.03775555267930031,
-0.10036954283714294,
0.07214846462011337,
-0.013740245252847672,
0.00828859768807888,
-0.032589782029390335,
0.07279156893491745,
-0.039314765483140945,
-0.1797494888305664,
-0.00028886087238788605,
-0.0017537777312099934,
-0.06298781931400299,
0.014195823110640049,
0.0009681032970547676,
0.020730484277009964,
0.08378705382347107,
-0.019383423030376434,
-0.007164194714277983,
0.1324344426393509,
0.0197308287024498,
-0.09945390373468399,
-0.055693864822387695,
0.11482208222150803,
0.015503966249525547,
0.14017939567565918,
0.012293449603021145,
0.08073492348194122,
0.08627357333898544,
0.0216075386852026,
-0.09339778125286102,
0.041158244013786316,
-0.018441004678606987,
-0.0301495548337698,
0.005911989603191614,
0.111000657081604,
-0.010404692031443119,
0.058780912309885025,
0.028860393911600113,
-0.09205879271030426,
0.06023108959197998,
0.011680006980895996,
-0.03541722893714905,
-0.08186814934015274,
0.08443049341440201,
-0.09159664809703827,
0.15592625737190247,
0.12156470119953156,
-0.01404971070587635,
-0.04463987797498703,
-0.028918901458382607,
0.018459025770425797,
-0.0004569888114929199,
0.05390596017241478,
-0.02567094750702381,
-0.13204461336135864,
0.017846161499619484,
-0.08464935421943665,
0.028604310005903244,
-0.24804756045341492,
-0.08968707174062729,
0.029344236478209496,
-0.017897799611091614,
-0.01774483546614647,
0.0508425310254097,
0.04430171102285385,
0.024712447077035904,
-0.03568004071712494,
0.01390589028596878,
-0.03881324827671051,
0.05862361565232277,
-0.11055715382099152,
-0.09424975514411926
] |
null | null | transformers | # MultiBERTs Seed 2 Checkpoint 1300k (uncased)
Seed 2 intermediate checkpoint 1300k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multiberts-seed-2). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
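For example, here is a minimal sketch of that feature-extraction approach (the sentences and labels are placeholders, and the full hub id with the `MultiBertGunjanPatrick/` prefix is taken from this checkpoint's listing):

```python
import torch
from sklearn.linear_model import LogisticRegression
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-2-1300k')
model = BertModel.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-2-1300k')
model.eval()

# Placeholder labelled sentences; replace with your own dataset.
sentences = ["The movie was great.", "The movie was terrible."]
labels = [1, 0]

with torch.no_grad():
    encoded = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
    # Use the [CLS] token representation as a fixed-size sentence feature.
    features = model(**encoded).last_hidden_state[:, 0, :].numpy()

classifier = LogisticRegression(max_iter=1000).fit(features, labels)
print(classifier.predict(features))
```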
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
# Use the full hub id for this intermediate checkpoint.
tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-2-1300k')
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-2-1300k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
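Because the checkpoint was pretrained with an MLM head, it can also be loaded for masked-token prediction. A minimal sketch (the masked sentence is a placeholder; loading a pretraining checkpoint into `BertForMaskedLM` should simply drop the unused NSP head):

```python
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-2-1300k')
model = BertForMaskedLM.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-2-1300k')
model.eval()

inputs = tokenizer("The capital of France is [MASK].", return_tensors='pt')
with torch.no_grad():
    logits = model(**inputs).logits

# Find the position of the [MASK] token and print the top 5 predictions for it.
mask_positions = (inputs['input_ids'] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = logits[0, mask_positions[0]].topk(5).indices
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))
```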
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a short illustrative sketch follows this list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
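A minimal sketch of this 80/10/10 corruption scheme (illustrative only, not the original pretraining code; special tokens such as [CLS] and [SEP] would normally be excluded from masking):

```python
import random

def mask_tokens(token_ids, mask_token_id, vocab_size, mlm_prob=0.15):
    """Apply BERT-style masking to a list of token ids, as described above."""
    input_ids = list(token_ids)
    labels = [-100] * len(token_ids)  # -100 marks positions ignored by the MLM loss
    for i, tok in enumerate(token_ids):
        if random.random() < mlm_prob:        # 15% of tokens are selected for masking
            labels[i] = tok
            roll = random.random()
            if roll < 0.8:                     # 80%: replace with [MASK]
                input_ids[i] = mask_token_id
            elif roll < 0.9:                   # 10%: replace with a random token
                input_ids[i] = random.randrange(vocab_size)
            # remaining 10%: keep the original token unchanged
    return input_ids, labels
```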
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-2"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-2-1300k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-2",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 2 Checkpoint 1300k (uncased)
Seed 2 intermediate checkpoint 1300k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 2 Checkpoint 1300k (uncased)\nSeed 2 intermediate checkpoint 1300k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 2 Checkpoint 1300k (uncased)\nSeed 2 intermediate checkpoint 1300k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 2 Checkpoint 1300k (uncased)\nSeed 2 intermediate checkpoint 1300k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08249139785766602,
0.0023075267672538757,
-0.0021960088051855564,
0.07056329399347305,
0.08229167759418488,
0.0017345757223665714,
0.11231406778097153,
0.05203982815146446,
-0.031794171780347824,
0.02288469858467579,
0.09155188500881195,
0.024991489946842194,
0.04342227429151535,
0.06650106608867645,
0.09937290102243423,
-0.2636464238166809,
0.04866522178053856,
-0.06659875810146332,
0.05532039701938629,
0.07527702301740646,
0.10109829902648926,
-0.06856860220432281,
0.06039183959364891,
0.03894361853599548,
-0.07786044478416443,
-0.01616717129945755,
-0.013486732728779316,
-0.03503008931875229,
0.09955036640167236,
0.06941340118646622,
0.06276775896549225,
0.0017701983451843262,
0.06080152839422226,
-0.0915340855717659,
0.015385493636131287,
0.04334082826972008,
0.001059534028172493,
0.02576126530766487,
-0.011598799377679825,
0.01797226443886757,
0.11157490313053131,
0.03966359794139862,
0.07842852920293808,
0.03246309235692024,
-0.09554022550582886,
-0.11115214228630066,
-0.08346918225288391,
0.10830411314964294,
0.05429931730031967,
0.03914512321352959,
-0.003421434201300144,
0.07473059743642807,
-0.02684895694255829,
0.0739513486623764,
0.10659842193126678,
-0.2518920600414276,
-0.009837695397436619,
0.06781593710184097,
0.04533694311976433,
0.04678215831518173,
0.014063646085560322,
0.024953972548246384,
0.004302665591239929,
0.040013108402490616,
0.022466614842414856,
-0.02255505695939064,
0.10952211171388626,
-0.04493042454123497,
-0.1504019796848297,
-0.04608268663287163,
0.11525171250104904,
-0.007532451301813126,
-0.12328191101551056,
-0.09381480515003204,
-0.03492508828639984,
0.11055798083543777,
-0.0026942677795886993,
-0.020749777555465698,
-0.0037496546283364296,
0.012723125517368317,
0.024420319125056267,
-0.09835872799158096,
-0.08492198586463928,
-0.03024088218808174,
-0.041520316153764725,
0.12705141305923462,
0.045495688915252686,
0.053957678377628326,
-0.0360952764749527,
0.08781863003969193,
-0.12569336593151093,
-0.03859119489789009,
-0.05335881933569908,
-0.07989422976970673,
-0.019330227747559547,
0.007637237664312124,
-0.02653234824538231,
-0.08404063433408737,
-0.06061325967311859,
0.12093181908130646,
0.028739918023347855,
0.02800299972295761,
-0.004409573972225189,
0.043298929929733276,
0.07073978334665298,
0.09573230147361755,
-0.040226347744464874,
0.05439114570617676,
0.03773410990834236,
-0.02481831982731819,
0.06208866089582443,
-0.05109306052327156,
-0.10051746666431427,
0.07070492953062057,
-0.001750192604959011,
0.03658497333526611,
0.029910940676927567,
0.031289905309677124,
-0.01130690611898899,
-0.07006081938743591,
0.16754570603370667,
-0.07776510715484619,
-0.00668962299823761,
-0.017708810046315193,
0.011220039799809456,
0.041594862937927246,
0.037063464522361755,
-0.0029663867317140102,
-0.04562203213572502,
-0.00959865190088749,
-0.05342318117618561,
-0.02621365152299404,
-0.05548425763845444,
-0.11715361475944519,
0.002792496234178543,
-0.027299726381897926,
-0.03395215794444084,
-0.13932377099990845,
-0.22239354252815247,
-0.01601766236126423,
0.06602577865123749,
-0.0024282033555209637,
-0.014836091548204422,
0.023898232728242874,
0.01696670986711979,
-0.020179321989417076,
0.010750181041657925,
-0.04692276567220688,
0.0019059330224990845,
-0.0054891761392354965,
-0.03494641184806824,
0.056084804236888885,
-0.04524192214012146,
0.019808532670140266,
-0.07123459875583649,
0.023801641538739204,
-0.21573466062545776,
0.08818371593952179,
-0.03146900236606598,
-0.001216895878314972,
-0.03744847699999809,
-0.04094187170267105,
0.002041235566139221,
0.044000156223773956,
-0.007699971087276936,
0.11532612144947052,
-0.1307573765516281,
-0.048767536878585815,
0.18132740259170532,
-0.16234612464904785,
0.0003799945116043091,
0.10147184133529663,
-0.04892083629965782,
0.05991379916667938,
0.13101784884929657,
0.09916478395462036,
0.0922979861497879,
-0.07189424335956573,
0.014621994458138943,
0.06606137752532959,
-0.07093632221221924,
0.05503430962562561,
0.09137678146362305,
-0.02555878646671772,
-0.13848765194416046,
0.02821262553334236,
-0.07635820657014847,
-0.00760931521654129,
-0.025399336591362953,
-0.023779021576046944,
0.0068231746554374695,
-0.03730600327253342,
0.02971445769071579,
0.004054330755025148,
0.019892089068889618,
-0.03729458898305893,
-0.08224818110466003,
0.030993765220046043,
0.07621781527996063,
-0.06875511258840561,
0.04798904433846474,
-0.06754487007856369,
0.0601896233856678,
-0.07399999350309372,
-0.007341126911342144,
-0.16480043530464172,
-0.0230509452521801,
0.0447143018245697,
-0.053298041224479675,
0.04798375070095062,
0.08528108894824982,
0.004340293351560831,
0.12101203948259354,
-0.041001856327056885,
0.008023349568247795,
-0.010091936215758324,
-0.01039452850818634,
-0.05069251358509064,
-0.1191626787185669,
-0.0793534368276596,
-0.06902718544006348,
0.10540682077407837,
-0.07357247918844223,
0.030444737523794174,
-0.06855788826942444,
-0.021906176581978798,
-0.00916021503508091,
-0.06004194915294647,
-0.006470601074397564,
0.0073749711737036705,
-0.029199525713920593,
-0.04467528313398361,
0.046730704605579376,
0.052676036953926086,
-0.06269524991512299,
0.07914930582046509,
-0.1032743826508522,
-0.0673145204782486,
0.05369805544614792,
0.016409315168857574,
-0.08138866722583771,
0.08800692111253738,
-0.020858440548181534,
-0.014683449640870094,
-0.06187136098742485,
-0.04635336995124817,
0.19190815091133118,
-0.0231931209564209,
0.09998442232608795,
-0.09165728092193604,
0.0011094624642282724,
0.028143782168626785,
-0.049039196223020554,
-0.01703660562634468,
0.05794542282819748,
0.05025680735707283,
-0.182124525308609,
0.014894023537635803,
0.045165643095970154,
0.07061132043600082,
0.10996809601783752,
0.029200445860624313,
-0.02414740063250065,
-0.046383246779441833,
-0.009258505888283253,
0.006601689383387566,
0.05266943201422691,
-0.02348780632019043,
-0.002700492739677429,
0.03138910233974457,
0.05902156978845596,
0.01853465475142002,
-0.08350566029548645,
0.03596553951501846,
0.0688544362783432,
-0.018935594707727432,
-0.04571637138724327,
-0.025955040007829666,
-0.05868106707930565,
0.062442973256111145,
0.05321073159575462,
0.041003115475177765,
0.025449203327298164,
-0.01440198440104723,
-0.13883695006370544,
0.1855878382921219,
-0.11394903063774109,
-0.25352418422698975,
-0.11222026497125626,
-0.05477657914161682,
-0.020791253075003624,
0.040594130754470825,
0.056720487773418427,
-0.024391785264015198,
-0.03954452648758888,
-0.1175442487001419,
0.06001139059662819,
-0.06388028711080551,
-0.029147012159228325,
-0.008759701624512672,
-0.050989627838134766,
-0.017433520406484604,
-0.1261996626853943,
-0.01034894771873951,
-0.029496967792510986,
-0.07843755185604095,
0.0037169400602579117,
-0.03633928671479225,
0.02714409865438938,
0.13734005391597748,
0.031156346201896667,
-0.020257916301488876,
-0.017846697941422462,
0.1896345019340515,
0.010776195675134659,
0.06278206408023834,
0.11085626482963562,
-0.029805706813931465,
0.056485798209905624,
0.050332315266132355,
0.02328510954976082,
-0.043964631855487823,
0.012432120740413666,
-0.017858533188700676,
-0.12310519814491272,
-0.17208366096019745,
-0.07153578847646713,
-0.002712265122681856,
0.008247872814536095,
0.01849580928683281,
0.03663048893213272,
0.014055744744837284,
0.03961532190442085,
-0.030093269422650337,
0.030043311417102814,
-0.009101487696170807,
0.08045659959316254,
0.025930970907211304,
-0.07314100861549377,
0.08769205212593079,
-0.06063009053468704,
0.017464052885770798,
0.11119763553142548,
-0.056140728294849396,
0.18217140436172485,
0.021478720009326935,
0.0565011128783226,
0.09709486365318298,
0.02164221927523613,
0.054935190826654434,
0.08608359098434448,
-0.046716898679733276,
0.008477500639855862,
-0.06012651324272156,
-0.05144597589969635,
-0.03585783392190933,
0.05118265375494957,
0.02903536707162857,
0.019788455218076706,
-0.11751000583171844,
0.024000270292162895,
0.0005295500159263611,
0.12943416833877563,
0.047465670853853226,
-0.11745943129062653,
-0.12312622368335724,
0.03515512868762016,
-0.047111786901950836,
-0.061663173139095306,
0.03005562350153923,
0.058751266449689865,
-0.153497576713562,
0.046214018017053604,
-0.006123311817646027,
0.06501972675323486,
-0.09053695201873779,
0.013118352741003036,
-0.04658253490924835,
-0.0032780328765511513,
0.00611523212864995,
0.07145905494689941,
-0.12670445442199707,
0.10998550057411194,
0.020513242110610008,
0.046778738498687744,
-0.07868695259094238,
0.01697603240609169,
-0.011748812161386013,
0.10660611838102341,
0.11846582591533661,
0.04686552286148071,
-0.05498959869146347,
-0.013822855427861214,
-0.04833906888961792,
0.019507555291056633,
0.055082596838474274,
-0.07803387939929962,
0.058777496218681335,
0.008267690427601337,
0.0076073710806667805,
-0.02273346669971943,
0.015243355184793472,
-0.1316577047109604,
-0.12111828476190567,
0.06218407303094864,
-0.07937207818031311,
-0.10470297187566757,
-0.057388439774513245,
-0.06226814165711403,
-0.05090336501598358,
0.21276383101940155,
-0.11663860082626343,
-0.08866766095161438,
-0.0972055047750473,
-0.006966313347220421,
0.046413883566856384,
-0.0680348351597786,
0.04625789821147919,
-0.03716715797781944,
0.09343434870243073,
-0.05134717375040054,
-0.1078057587146759,
0.03631801903247833,
-0.11074138432741165,
-0.11655834317207336,
-0.044518910348415375,
0.10454094409942627,
0.1138722226023674,
0.03967457637190819,
0.010943782515823841,
0.01200801506638527,
0.001498652622103691,
-0.11741499602794647,
0.011140022426843643,
0.1389106959104538,
-0.008206099271774292,
0.06979792565107346,
-0.058016471564769745,
0.030002955347299576,
-0.014180410653352737,
-0.0008821506053209305,
0.13321736454963684,
0.19233104586601257,
-0.06666354089975357,
0.17933878302574158,
0.2013605237007141,
-0.10795462131500244,
-0.19474852085113525,
-0.05571877211332321,
0.0006791055202484131,
0.044956810772418976,
0.05428311973810196,
-0.17881612479686737,
0.08766597509384155,
0.0382436104118824,
-0.0362926721572876,
0.009144090116024017,
-0.23901337385177612,
-0.11317064613103867,
0.0855637937784195,
0.05363447964191437,
0.1878587007522583,
-0.07630704343318939,
-0.041667159646749496,
-0.015171090140938759,
-0.04237633943557739,
0.041428692638874054,
-0.03782641515135765,
0.08698847889900208,
0.008489979431033134,
-0.02702724188566208,
0.0037569161504507065,
-0.031763702630996704,
0.09743383526802063,
0.03839512914419174,
0.017993366345763206,
-0.07148779928684235,
-0.0083699319511652,
0.11792649328708649,
-0.03994431346654892,
0.09729611873626709,
0.04959774762392044,
0.07691128551959991,
-0.09473709017038345,
-0.05829218402504921,
-0.07518436014652252,
0.04563440382480621,
-0.04194726049900055,
-0.054649192839860916,
-0.06744594126939774,
0.05881025269627571,
0.04180067032575607,
0.010300932452082634,
0.0015369672328233719,
-0.034934550523757935,
0.041978247463703156,
0.09692433476448059,
0.08075229078531265,
-0.024581167846918106,
-0.06267290562391281,
-0.048148930072784424,
-0.048966191709041595,
0.06573934853076935,
-0.09115439653396606,
0.021903036162257195,
0.026803968474268913,
0.01301262341439724,
0.08461897820234299,
0.03278886899352074,
-0.13913506269454956,
0.013218902982771397,
0.033965371549129486,
-0.12393873184919357,
-0.10850007832050323,
-0.020732801407575607,
0.03292001783847809,
-0.041210275143384933,
0.05003312602639198,
0.14751523733139038,
-0.03843466192483902,
-0.03244340419769287,
-0.04551498591899872,
0.03997555375099182,
-0.02305072732269764,
0.04704512655735016,
0.06292161345481873,
0.02940835990011692,
-0.06996302306652069,
0.07610291242599487,
0.04047852009534836,
-0.04111989960074425,
0.040123339742422104,
0.041805148124694824,
-0.09245584905147552,
-0.07677125930786133,
-0.05749240517616272,
0.08358637988567352,
-0.027275601401925087,
-0.0440976619720459,
-0.000012783333659172058,
-0.07781024277210236,
0.06638205051422119,
0.07876148074865341,
0.04816471040248871,
0.03827504441142082,
-0.08521023392677307,
0.018454022705554962,
-0.0544222816824913,
0.03704523667693138,
-0.02844722382724285,
-0.004524916410446167,
-0.05596306174993515,
0.06750205159187317,
0.0616307370364666,
0.09649863094091415,
-0.03443268686532974,
-0.07514715939760208,
-0.08344399183988571,
-0.012234752997756004,
-0.05249727517366409,
-0.03149159997701645,
-0.07895566523075104,
-0.006490098312497139,
0.0022533538285642862,
-0.004857655614614487,
0.019436348229646683,
0.035619258880615234,
-0.04345864802598953,
-0.01789679378271103,
-0.042102474719285965,
0.035911925137043,
-0.061903633177280426,
0.006726712919771671,
0.021005770191550255,
-0.03409811854362488,
0.09279679507017136,
0.03636729717254639,
-0.009739769622683525,
0.04623163491487503,
-0.03123488277196884,
0.03633417189121246,
-0.022072775289416313,
0.001719044055789709,
-0.024029642343521118,
-0.10967662930488586,
-0.00519675575196743,
0.005477214232087135,
-0.025078948587179184,
0.010355412028729916,
0.06456845253705978,
-0.07335726916790009,
0.0859290286898613,
0.04632667452096939,
-0.0332663357257843,
-0.07216838747262955,
0.03859664872288704,
-0.017576122656464577,
0.030073998495936394,
0.06739239394664764,
-0.03534433990716934,
0.051380254328250885,
-0.0996771901845932,
-0.027853142470121384,
0.005003849044442177,
-0.0002936162054538727,
-0.010052310302853584,
-0.051472440361976624,
-0.004218475893139839,
0.009289310313761234,
0.17747043073177338,
-0.022784605622291565,
0.033233970403671265,
0.015553710050880909,
0.002923799678683281,
0.0489388033747673,
-0.015161436051130295,
0.07716557383537292,
-0.005122230388224125,
-0.02567465230822563,
-0.013849218375980854,
0.0359402596950531,
0.004859663546085358,
0.001435842365026474,
0.14309141039848328,
0.04325852915644646,
0.09269606322050095,
0.0776090919971466,
0.017920803278684616,
0.023301590234041214,
-0.1302066445350647,
-0.09126201272010803,
0.0037713181227445602,
0.05489704757928848,
-0.018402043730020523,
0.008912483230233192,
0.09838409721851349,
-0.09099766612052917,
0.06989528983831406,
0.05158326402306557,
-0.04931984841823578,
-0.13195127248764038,
-0.19919809699058533,
-0.025989539921283722,
-0.033367082476615906,
-0.010773074813187122,
-0.08981801569461823,
0.01864127442240715,
0.08989932388067245,
0.022255422547459602,
-0.0067884428426623344,
0.0919862687587738,
-0.10960312932729721,
-0.02981659397482872,
0.03831033408641815,
-0.026292098686099052,
0.019428228959441185,
0.05115557461977005,
0.017520582303404808,
-0.0034914836287498474,
0.0467282272875309,
0.04077708721160889,
0.04423385113477707,
0.030793804675340652,
0.051100507378578186,
-0.023923465982079506,
-0.07313034683465958,
-0.034341759979724884,
-0.005262092221528292,
0.05548429489135742,
0.12934476137161255,
0.02521994337439537,
-0.06740622967481613,
0.007226759102195501,
0.11197283118963242,
-0.03426659107208252,
-0.050942860543727875,
-0.11104695498943329,
0.24360111355781555,
0.02498314157128334,
0.0029092126060277224,
-0.0033804215490818024,
-0.04837567359209061,
0.003471607342362404,
0.2091972529888153,
0.22435250878334045,
0.0045259865000844,
-0.008930204436182976,
0.01028485968708992,
-0.011147651821374893,
0.03825223818421364,
0.14383763074874878,
0.002227766439318657,
0.2488771677017212,
-0.04688633978366852,
0.03298443555831909,
-0.04138180613517761,
-0.0392332449555397,
-0.10094186663627625,
0.07344608753919601,
-0.01444937102496624,
0.008514776825904846,
-0.033611834049224854,
0.07407796382904053,
-0.03999726101756096,
-0.1760627031326294,
-0.0021638991311192513,
-0.0027963723987340927,
-0.06061120703816414,
0.014850320294499397,
0.0023471368476748466,
0.02125827595591545,
0.08037754893302917,
-0.017842665314674377,
-0.006063003558665514,
0.1259184330701828,
0.021128300577402115,
-0.09860348701477051,
-0.058414213359355927,
0.11465685814619064,
0.023642685264348984,
0.14778339862823486,
0.013955683447420597,
0.07813265919685364,
0.08638371527194977,
0.02123628556728363,
-0.09608927369117737,
0.04212071746587753,
-0.016879230737686157,
-0.028923725709319115,
0.004326872527599335,
0.11186552792787552,
-0.01047665998339653,
0.06279292702674866,
0.027942929416894913,
-0.091610386967659,
0.060718148946762085,
0.008116163313388824,
-0.032921306788921356,
-0.08253560960292816,
0.08225315809249878,
-0.09037746489048004,
0.1570248007774353,
0.12155717611312866,
-0.01432119496166706,
-0.04267454892396927,
-0.028250055387616158,
0.016265731304883957,
-0.0033347245771437883,
0.060120098292827606,
-0.024035915732383728,
-0.13321977853775024,
0.017059147357940674,
-0.08534392714500427,
0.0287956390529871,
-0.24160552024841309,
-0.09089447557926178,
0.028122462332248688,
-0.019153092056512833,
-0.01652982458472252,
0.05415999889373779,
0.04564737528562546,
0.02276909351348877,
-0.03467265143990517,
0.018577661365270615,
-0.03782564401626587,
0.059208475053310394,
-0.11242564022541046,
-0.09375979006290436
] |
null | null | transformers | # MultiBERTs Seed 2 Checkpoint 1400k (uncased)
Seed 2 intermediate checkpoint 1400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multiberts-seed-2). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
# Use the full hub id for this intermediate checkpoint.
tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-2-1400k')
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-2-1400k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
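Since the checkpoint was also trained with the NSP objective, it can be loaded with a next-sentence-prediction head. A minimal sketch (the sentence pair is a placeholder, and the full hub id with the `MultiBertGunjanPatrick/` prefix is taken from this checkpoint's listing):

```python
import torch
from transformers import BertForNextSentencePrediction, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-2-1400k')
model = BertForNextSentencePrediction.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-2-1400k')
model.eval()

inputs = tokenizer("She opened the fridge.", "It was completely empty.", return_tensors='pt')
with torch.no_grad():
    logits = model(**inputs).logits

# Index 0 = sentence B follows sentence A, index 1 = sentence B is a random sentence.
print(torch.softmax(logits, dim=-1))
```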
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
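A minimal sketch of that schedule with the stated hyperparameters (illustrative only; the original training used TPU-specific tooling, and in `transformers` the equivalent helper is `get_linear_schedule_with_warmup`):

```python
def learning_rate(step, peak_lr=1e-4, warmup_steps=10_000, total_steps=2_000_000):
    """Linear warmup to the peak rate, then linear decay to zero over training."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

# The rate rises for the first 10k steps, peaks at 1e-4, and decays to 0 by step 2M.
for step in (0, 5_000, 10_000, 1_000_000, 2_000_000):
    print(step, learning_rate(step))
```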
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-2"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-2-1400k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-2",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 2 Checkpoint 1400k (uncased)
Seed 2 intermediate checkpoint 1400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 2 Checkpoint 1400k (uncased)\nSeed 2 intermediate checkpoint 1400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 2 Checkpoint 1400k (uncased)\nSeed 2 intermediate checkpoint 1400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 2 Checkpoint 1400k (uncased)\nSeed 2 intermediate checkpoint 1400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08472485840320587,
0.004904130473732948,
-0.0023411819711327553,
0.06991543620824814,
0.08424708247184753,
0.004348687827587128,
0.11496718972921371,
0.05110780894756317,
-0.032187893986701965,
0.02288771979510784,
0.09115646779537201,
0.02316039800643921,
0.04277432709932327,
0.06283940374851227,
0.09780167043209076,
-0.2604701519012451,
0.04808107018470764,
-0.0647101178765297,
0.049553874880075455,
0.07472245395183563,
0.10113048553466797,
-0.06961235404014587,
0.061006270349025726,
0.03970368206501007,
-0.08242174237966537,
-0.01798149198293686,
-0.01301742997020483,
-0.03391210734844208,
0.10140693187713623,
0.07070586830377579,
0.06287253648042679,
0.0010895095765590668,
0.06059125065803528,
-0.08825163543224335,
0.015813220292329788,
0.04486841335892677,
-0.00042502209544181824,
0.02712862193584442,
-0.009455282241106033,
0.018196074292063713,
0.10938680171966553,
0.04013035073876381,
0.0765112116932869,
0.03381596878170967,
-0.09562107920646667,
-0.12150660902261734,
-0.0803879052400589,
0.10898899286985397,
0.0522986501455307,
0.03833259269595146,
-0.004280696623027325,
0.07094448804855347,
-0.027444936335086823,
0.0721038281917572,
0.10309755802154541,
-0.25506746768951416,
-0.008333886042237282,
0.06124342232942581,
0.0422721765935421,
0.04494459927082062,
0.012558740563690662,
0.025676775723695755,
0.005150932818651199,
0.04204576835036278,
0.027335114777088165,
-0.02320517972111702,
0.11585546284914017,
-0.045010052621364594,
-0.1489928662776947,
-0.043960873037576675,
0.11405254900455475,
-0.00786668248474598,
-0.1250370740890503,
-0.09551592171192169,
-0.03369840979576111,
0.12036080658435822,
-0.0035507744178175926,
-0.01906641758978367,
-0.0036578397266566753,
0.013279187493026257,
0.02716143988072872,
-0.09906145930290222,
-0.0863618552684784,
-0.02964518591761589,
-0.039195336401462555,
0.13082611560821533,
0.04636480659246445,
0.0539180189371109,
-0.036903016269207,
0.08590817451477051,
-0.12274235486984253,
-0.04091564565896988,
-0.05225710943341255,
-0.08097337186336517,
-0.018740570172667503,
0.01012539304792881,
-0.0273123886436224,
-0.08061747252941132,
-0.06018703803420067,
0.11549456417560577,
0.028090886771678925,
0.027430199086666107,
-0.0011468050070106983,
0.04243771731853485,
0.07107611000537872,
0.09263801574707031,
-0.039335429668426514,
0.052076030522584915,
0.03582044318318367,
-0.024918094277381897,
0.059408675879240036,
-0.050734993070364,
-0.1021372377872467,
0.07108873128890991,
-0.0009927581995725632,
0.03607147932052612,
0.02852526679635048,
0.030407844111323357,
-0.01043054647743702,
-0.0706644356250763,
0.16512319445610046,
-0.07699108123779297,
-0.006265069358050823,
-0.015461239032447338,
0.011134207248687744,
0.04255928099155426,
0.03551805764436722,
-0.004958644043654203,
-0.04705789312720299,
-0.006117618642747402,
-0.055343639105558395,
-0.024639498442411423,
-0.05413149669766426,
-0.11807870864868164,
0.0012583350762724876,
-0.0331854373216629,
-0.0329839363694191,
-0.1417016237974167,
-0.21789884567260742,
-0.016671793535351753,
0.06529583036899567,
-0.0028402870520949364,
-0.01242153998464346,
0.027041131630539894,
0.017763590440154076,
-0.019908728078007698,
0.010548645630478859,
-0.049039602279663086,
0.0014026397839188576,
-0.004284754395484924,
-0.03395449370145798,
0.05746612697839737,
-0.04491003230214119,
0.021777041256427765,
-0.06970947980880737,
0.02345554530620575,
-0.2143891453742981,
0.0892379954457283,
-0.02975984290242195,
-0.0005606207996606827,
-0.038117602467536926,
-0.04334848374128342,
0.0053354669362306595,
0.04559813439846039,
-0.00695559848099947,
0.11663692444562912,
-0.13432174921035767,
-0.050350889563560486,
0.1829099953174591,
-0.162629634141922,
-0.0012166835367679596,
0.10155454277992249,
-0.04879677668213844,
0.0598612055182457,
0.13194280862808228,
0.10018085688352585,
0.09205615520477295,
-0.07343344390392303,
0.014509075321257114,
0.06417398154735565,
-0.07094596326351166,
0.05431869626045227,
0.09165787696838379,
-0.025908874347805977,
-0.13593320548534393,
0.02748044580221176,
-0.07524468004703522,
-0.007065397687256336,
-0.025699056684970856,
-0.0223257914185524,
0.006590457633137703,
-0.038400374352931976,
0.031852833926677704,
0.004862526431679726,
0.019977182149887085,
-0.03738930821418762,
-0.08131983131170273,
0.03681214526295662,
0.07704697549343109,
-0.06929092854261398,
0.0461759939789772,
-0.0694865807890892,
0.06166733056306839,
-0.07379474490880966,
-0.005007026717066765,
-0.16745316982269287,
-0.021913785487413406,
0.04325314611196518,
-0.05716227367520332,
0.04935772344470024,
0.08911453187465668,
0.0045865741558372974,
0.12468169629573822,
-0.04146423190832138,
0.006072917953133583,
-0.009459005668759346,
-0.010424181818962097,
-0.051569826900959015,
-0.12276579439640045,
-0.0797768384218216,
-0.06685560941696167,
0.10538554191589355,
-0.07514983415603638,
0.02985060028731823,
-0.07127730548381805,
-0.020328735932707787,
-0.008157134056091309,
-0.05814815312623978,
-0.008102353662252426,
0.007367081940174103,
-0.0290279109030962,
-0.044863201677799225,
0.04831372946500778,
0.052729301154613495,
-0.06081533059477806,
0.08111678063869476,
-0.1015940010547638,
-0.061391204595565796,
0.05439787358045578,
0.012829978950321674,
-0.08010932803153992,
0.08969904482364655,
-0.01888534612953663,
-0.013784102164208889,
-0.059740446507930756,
-0.045701880007982254,
0.19191890954971313,
-0.023463476449251175,
0.10286451876163483,
-0.09177961200475693,
0.0028340823482722044,
0.028592946007847786,
-0.0494343563914299,
-0.016418449580669403,
0.05721323564648628,
0.047728411853313446,
-0.18536098301410675,
0.01486167311668396,
0.05086645483970642,
0.07434020191431046,
0.11006800830364227,
0.02827414497733116,
-0.02444450370967388,
-0.04862043261528015,
-0.012095327489078045,
0.005539636593312025,
0.05470023676753044,
-0.024742737412452698,
-0.004806513898074627,
0.03259199857711792,
0.05897095054388046,
0.018162978813052177,
-0.08266196399927139,
0.036299630999565125,
0.06897047162055969,
-0.017367245629429817,
-0.046547651290893555,
-0.027080293744802475,
-0.05849868431687355,
0.06368955969810486,
0.05385531112551689,
0.041307199746370316,
0.025232860818505287,
-0.013925274834036827,
-0.13822171092033386,
0.1858576536178589,
-0.11505573242902756,
-0.25536805391311646,
-0.1136244684457779,
-0.056374020874500275,
-0.021011942997574806,
0.04224320501089096,
0.05850685387849808,
-0.026477906852960587,
-0.04005477577447891,
-0.11391313374042511,
0.06227736920118332,
-0.06306082755327225,
-0.028982436284422874,
-0.009927826002240181,
-0.04977240413427353,
-0.015041722916066647,
-0.12644413113594055,
-0.010449934750795364,
-0.02985796332359314,
-0.06994613260030746,
0.005073586478829384,
-0.03378251940011978,
0.02660297229886055,
0.13715769350528717,
0.032112326472997665,
-0.021109595894813538,
-0.016051169484853745,
0.19392171502113342,
0.01015130989253521,
0.06241508945822716,
0.11500120162963867,
-0.03104722686111927,
0.05678381398320198,
0.047878898680210114,
0.02508222497999668,
-0.04474726319313049,
0.010715654119849205,
-0.018423916772007942,
-0.12059899419546127,
-0.17183223366737366,
-0.07206761837005615,
-0.0033718845807015896,
0.004042881540954113,
0.019299956038594246,
0.03516776114702225,
0.019236741587519646,
0.03837909922003746,
-0.0295746810734272,
0.03119657374918461,
-0.011651910841464996,
0.0801568478345871,
0.029201921075582504,
-0.07299130409955978,
0.08917689323425293,
-0.06098075211048126,
0.017075099050998688,
0.11194076389074326,
-0.06220715120434761,
0.1866295039653778,
0.022063400596380234,
0.05784720927476883,
0.09873741865158081,
0.022293146699666977,
0.0558469295501709,
0.08622117340564728,
-0.04695179685950279,
0.00845420639961958,
-0.060887426137924194,
-0.05202096328139305,
-0.035988546907901764,
0.05161150172352791,
0.02903975546360016,
0.017904244363307953,
-0.11883899569511414,
0.022060856223106384,
-0.001749711693264544,
0.1343974471092224,
0.050285495817661285,
-0.11707255989313126,
-0.12567445635795593,
0.03579925745725632,
-0.04571349173784256,
-0.06393905729055405,
0.02726150117814541,
0.06393354386091232,
-0.15195617079734802,
0.0444270595908165,
-0.004420680925250053,
0.0662846490740776,
-0.08906208723783493,
0.015111427754163742,
-0.04631120339035988,
-0.0025966838002204895,
0.005582241807132959,
0.07059076428413391,
-0.1281057894229889,
0.10774902254343033,
0.021609686315059662,
0.04809972643852234,
-0.07762716710567474,
0.01587970182299614,
-0.010249990969896317,
0.1118430569767952,
0.11565450578927994,
0.047633521258831024,
-0.051017846912145615,
-0.015008731745183468,
-0.04588676244020462,
0.01803574711084366,
0.05758185684680939,
-0.07772815227508545,
0.059046171605587006,
0.007357720751315355,
0.007174038793891668,
-0.022763695567846298,
0.015203230082988739,
-0.13003087043762207,
-0.11984521150588989,
0.061947859823703766,
-0.07940870523452759,
-0.10080032795667648,
-0.05747493356466293,
-0.06281814724206924,
-0.048176705837249756,
0.20971299707889557,
-0.11583368480205536,
-0.09011197090148926,
-0.09824950993061066,
-0.010973218828439713,
0.04575486481189728,
-0.06829079240560532,
0.04688160866498947,
-0.03730811923742294,
0.09505115449428558,
-0.04914671927690506,
-0.10869914293289185,
0.03496219217777252,
-0.11112964153289795,
-0.11553351581096649,
-0.044913120567798615,
0.10585682094097137,
0.11337780207395554,
0.03945821151137352,
0.012791812419891357,
0.011170817539095879,
-0.0005423668771982193,
-0.11669556796550751,
0.014101389795541763,
0.13200518488883972,
-0.0004642270505428314,
0.06940367072820663,
-0.05918985605239868,
0.026841480284929276,
-0.01429939828813076,
0.0003921128809452057,
0.13210488855838776,
0.19020307064056396,
-0.06701472401618958,
0.17700909078121185,
0.20706644654273987,
-0.10605619847774506,
-0.19306859374046326,
-0.05579209700226784,
0.0005554808303713799,
0.045272570103406906,
0.05058955028653145,
-0.182832270860672,
0.08823585510253906,
0.034224871546030045,
-0.03461789712309837,
0.014974795281887054,
-0.2344447374343872,
-0.11315204203128815,
0.08654088526964188,
0.05273520201444626,
0.18919405341148376,
-0.077860027551651,
-0.041558220982551575,
-0.017199544236063957,
-0.043372005224227905,
0.0435028001666069,
-0.03843885660171509,
0.087577223777771,
0.00792103074491024,
-0.028741588816046715,
0.0035919342190027237,
-0.030740026384592056,
0.09930019080638885,
0.03964489698410034,
0.01956774666905403,
-0.07134485244750977,
-0.006701672449707985,
0.11537399142980576,
-0.03952919691801071,
0.09656573832035065,
0.0481155663728714,
0.07656048983335495,
-0.09894838184118271,
-0.05826520547270775,
-0.07486166059970856,
0.04368920624256134,
-0.041648201644420624,
-0.055231835693120956,
-0.06510398536920547,
0.05851716175675392,
0.038447413593530655,
0.01060591172426939,
-0.0019402243196964264,
-0.03567164018750191,
0.042341165244579315,
0.09343580901622772,
0.07854566723108292,
-0.028464671224355698,
-0.07014712691307068,
-0.04953325539827347,
-0.04862499609589577,
0.06740178167819977,
-0.09726139903068542,
0.023148028180003166,
0.026737796142697334,
0.012833870016038418,
0.0884973332285881,
0.03229000046849251,
-0.13845092058181763,
0.012904579751193523,
0.03447756543755531,
-0.12449630349874496,
-0.1103859692811966,
-0.020232509821653366,
0.028335213661193848,
-0.03876681625843048,
0.051329415291547775,
0.14786633849143982,
-0.03720735013484955,
-0.03202942758798599,
-0.04667433351278305,
0.038146115839481354,
-0.02326662465929985,
0.047941118478775024,
0.0653223842382431,
0.030536798760294914,
-0.0715639591217041,
0.07394158840179443,
0.03979623317718506,
-0.037117160856723785,
0.04063522443175316,
0.04117574170231819,
-0.0940503180027008,
-0.07807464897632599,
-0.0586417056620121,
0.08721158653497696,
-0.02332979254424572,
-0.04807368665933609,
0.0010726265609264374,
-0.08069081604480743,
0.06700573861598969,
0.0741497203707695,
0.0478023923933506,
0.03727567568421364,
-0.08639946579933167,
0.016197361052036285,
-0.05365115404129028,
0.035895321518182755,
-0.027880685403943062,
-0.006055792793631554,
-0.0558110773563385,
0.07127518951892853,
0.06295812129974365,
0.09962417185306549,
-0.03499225154519081,
-0.07494933903217316,
-0.08227572590112686,
-0.01366591826081276,
-0.0662432610988617,
-0.031312212347984314,
-0.07635749131441116,
-0.006517481990158558,
0.0013791290111839771,
-0.007050853222608566,
0.021262481808662415,
0.03541749343276024,
-0.043482787907123566,
-0.017777327448129654,
-0.03836187720298767,
0.03581508249044418,
-0.06346479058265686,
0.006033868528902531,
0.018418554216623306,
-0.035324640572071075,
0.09480983018875122,
0.03917619213461876,
-0.010577457956969738,
0.04645407944917679,
-0.019043929874897003,
0.0346522331237793,
-0.02139437198638916,
0.0015819496475160122,
-0.022727936506271362,
-0.11067772656679153,
-0.004651597235351801,
0.003998240455985069,
-0.024915747344493866,
0.010245016776025295,
0.06315538287162781,
-0.07166922092437744,
0.08857855200767517,
0.04524970427155495,
-0.032744813710451126,
-0.0718122124671936,
0.038523975759744644,
-0.01581287756562233,
0.030106477439403534,
0.07064004242420197,
-0.03322380781173706,
0.05276165157556534,
-0.0996796041727066,
-0.028171710669994354,
0.004204704891890287,
-0.0012871120125055313,
-0.008173039183020592,
-0.05409925431013107,
-0.0050331298261880875,
0.008693438023328781,
0.17137162387371063,
-0.02343495935201645,
0.03597060590982437,
0.015014796517789364,
0.006943036802113056,
0.04351535439491272,
-0.01390143483877182,
0.07297348976135254,
-0.005963618867099285,
-0.026013623923063278,
-0.015887506306171417,
0.03706611692905426,
0.00576680526137352,
0.0047272369265556335,
0.1408906877040863,
0.04621877521276474,
0.09232551604509354,
0.07583016157150269,
0.01735566183924675,
0.020408600568771362,
-0.13663914799690247,
-0.0883750319480896,
0.003651038743555546,
0.05488908290863037,
-0.017607828602194786,
0.014787163585424423,
0.09543471038341522,
-0.08955083787441254,
0.07055908441543579,
0.04940415918827057,
-0.048989586532115936,
-0.1283445954322815,
-0.19013401865959167,
-0.025142153725028038,
-0.031327925622463226,
-0.012872261926531792,
-0.0887531265616417,
0.018039178103208542,
0.08933417499065399,
0.02193247713148594,
-0.009391170926392078,
0.09050582349300385,
-0.10775233060121536,
-0.03178124129772186,
0.04001705348491669,
-0.025666330009698868,
0.018623683601617813,
0.0519724003970623,
0.019826048985123634,
-0.0036655329167842865,
0.046958986669778824,
0.04109847545623779,
0.04536016285419464,
0.029282014816999435,
0.05094475299119949,
-0.025177333503961563,
-0.07424399256706238,
-0.03380333259701729,
-0.0035191059578210115,
0.05424582585692406,
0.13007420301437378,
0.02389015629887581,
-0.06651116907596588,
0.006450801622122526,
0.11305265128612518,
-0.03299529850482941,
-0.05289565399289131,
-0.10991158336400986,
0.24244415760040283,
0.023229017853736877,
0.0039037649985402822,
-0.002882353961467743,
-0.047727812081575394,
0.0032417532056570053,
0.21021245419979095,
0.22348719835281372,
0.0012675700709223747,
-0.008968561887741089,
0.011275263503193855,
-0.011348765343427658,
0.03835158795118332,
0.14581681787967682,
0.0029346495866775513,
0.25217050313949585,
-0.04534223675727844,
0.036900635808706284,
-0.043591730296611786,
-0.038186438381671906,
-0.10062886774539948,
0.07766923308372498,
-0.014351697638630867,
0.007822714745998383,
-0.03299970179796219,
0.07316220551729202,
-0.03748878464102745,
-0.17983190715312958,
0.00043690111488103867,
-0.0028352984227240086,
-0.062086060643196106,
0.014006980694830418,
0.00038816407322883606,
0.02131826803088188,
0.08192199468612671,
-0.017769839614629745,
-0.006930415518581867,
0.13213512301445007,
0.019573362544178963,
-0.09821756929159164,
-0.05899929255247116,
0.11626999825239182,
0.01687288098037243,
0.1438538134098053,
0.012834110297262669,
0.07900971174240112,
0.08590332418680191,
0.021854951977729797,
-0.0955255851149559,
0.03946478292346001,
-0.018950287252664566,
-0.028953799977898598,
0.00494042132049799,
0.11002246290445328,
-0.009862625040113926,
0.05428815260529518,
0.029115887358784676,
-0.09373930096626282,
0.058578938245773315,
0.009779628366231918,
-0.03655676543712616,
-0.08071039617061615,
0.08153948187828064,
-0.08975812792778015,
0.15497779846191406,
0.1199382096529007,
-0.014880313538014889,
-0.044224813580513,
-0.027816249057650566,
0.017711108550429344,
-0.0002142423763871193,
0.05813676863908768,
-0.024557776749134064,
-0.1339697688817978,
0.018544619902968407,
-0.08778490126132965,
0.02904391847550869,
-0.24627035856246948,
-0.09018395841121674,
0.029408451169729233,
-0.017456308007240295,
-0.016144126653671265,
0.05285841226577759,
0.045391544699668884,
0.024431435391306877,
-0.034982964396476746,
0.015276896767318249,
-0.03853451833128929,
0.05937950685620308,
-0.10935214161872864,
-0.09399078041315079
] |
null | null | transformers | # MultiBERTs Seed 2 Checkpoint 140k (uncased)
Seed 2 intermediate checkpoint 140k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multberts-seed-2). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
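As a minimal sketch of that last point (an illustration only, not part of the original MultiBERTs release; the toy sentences, labels and the scikit-learn classifier are assumptions), the frozen pooled features can be fed to any standard classifier:

```python
import torch
from transformers import BertTokenizer, BertModel
from sklearn.linear_model import LogisticRegression

# Placeholder labelled sentences (sentiment-style labels).
sentences = ["I loved this film.", "This was a waste of time."]
labels = [1, 0]

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-140k')
model = BertModel.from_pretrained('multiberts-seed-2-140k')
model.eval()

with torch.no_grad():
    encoded = tokenizer(sentences, padding=True, return_tensors='pt')
    # pooler_output is the [CLS] representation after the pooling layer.
    features = model(**encoded).pooler_output.numpy()

classifier = LogisticRegression(max_iter=1000).fit(features, labels)
print(classifier.predict(features))
```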
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
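For the fine-tuning route, a minimal starting point might look like the sketch below (the two-class head is an assumption for illustration; it is freshly initialized rather than shipped with the checkpoint):

```python
from transformers import BertForSequenceClassification

# Load the pretrained encoder and attach a randomly initialized
# classification head for a hypothetical two-class task.
model = BertForSequenceClassification.from_pretrained(
    "multiberts-seed-2-140k", num_labels=2
)
```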
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# Load the tokenizer and the encoder weights for this intermediate checkpoint.
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-140k')
model = BertModel.from_pretrained("multiberts-seed-2-140k")

# Encode the text and run a forward pass; output.last_hidden_state holds the
# per-token features and output.pooler_output the pooled [CLS] representation.
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
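Because the checkpoint was pretrained with the MLM objective, it can also be loaded with a masked-language-modeling head. The snippet below is a sketch along those lines (the example sentence is arbitrary, and it assumes the MLM head weights are included in this checkpoint; if they are not, the head is freshly initialized and the prediction is not meaningful):

```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-140k')
model = BertForMaskedLM.from_pretrained('multiberts-seed-2-140k')

text = "The capital of France is [MASK]."
inputs = tokenizer(text, return_tensors='pt')
with torch.no_grad():
    logits = model(**inputs).logits

# Take the highest-scoring vocabulary entry at the masked position.
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_pos].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```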
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the two
"sentences" have a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a toy sketch of the rule follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
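The sketch below is a toy re-implementation of this 80/10/10 rule in plain Python (it mirrors the description above rather than the original MultiBERTs preprocessing code, and the function name is made up for illustration):

```python
import random

def mask_tokens(token_ids, mask_token_id, vocab_size, mlm_probability=0.15):
    """Apply the 80/10/10 masking rule described above to a list of token ids."""
    inputs, labels = [], []
    for tok in token_ids:
        if random.random() < mlm_probability:
            labels.append(tok)                 # this position will be predicted
            roll = random.random()
            if roll < 0.8:
                inputs.append(mask_token_id)   # 80%: replace with [MASK]
            elif roll < 0.9:
                inputs.append(random.randrange(vocab_size))  # 10%: random token
            else:
                inputs.append(tok)             # 10%: keep the original token
        else:
            labels.append(-100)                # unmasked positions are ignored by the MLM loss
            inputs.append(tok)
    return inputs, labels
```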
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
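For reference, a rough PyTorch equivalent of this optimization setup (a sketch only, not the original training code; batching, TPU specifics and the training loop are omitted) might look like:

```python
import torch
from transformers import BertModel, get_linear_schedule_with_warmup

model = BertModel.from_pretrained("multiberts-seed-2-140k")

# Adam with lr 1e-4, betas (0.9, 0.999) and weight decay 0.01,
# warmed up for 10,000 steps and decayed linearly over the 2M training steps.
optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01
)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=2_000_000
)
```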
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-2"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-2-140k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-2",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 2 Checkpoint 140k (uncased)
Seed 2 intermediate checkpoint 140k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitations and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the two
"sentences" have a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 2 Checkpoint 140k (uncased)\nSeed 2 intermediate checkpoint 140k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 2 Checkpoint 140k (uncased)\nSeed 2 intermediate checkpoint 140k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 2 Checkpoint 140k (uncased)\nSeed 2 intermediate checkpoint 140k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08489694446325302,
0.0035180742852389812,
-0.0023611069191247225,
0.06841299682855606,
0.08487425744533539,
0.004222208634018898,
0.11612574011087418,
0.05129934102296829,
-0.03141622990369797,
0.02381168305873871,
0.0912601500749588,
0.02758283168077469,
0.042571716010570526,
0.06428647041320801,
0.09698089957237244,
-0.2600342631340027,
0.04887054115533829,
-0.06406497955322266,
0.04993081092834473,
0.0749923586845398,
0.1017540693283081,
-0.06978081166744232,
0.06059842184185982,
0.03966054320335388,
-0.08319181203842163,
-0.018678564578294754,
-0.012341880239546299,
-0.03302716463804245,
0.10058615356683731,
0.07107716053724289,
0.062413617968559265,
0.0016869548708200455,
0.05969180911779404,
-0.08995790779590607,
0.01606474630534649,
0.04555384814739227,
-0.00013738684356212616,
0.02746313065290451,
-0.008676279336214066,
0.017670273780822754,
0.10939830541610718,
0.038897302001714706,
0.07591471076011658,
0.03428966924548149,
-0.09502030909061432,
-0.12321794778108597,
-0.08038384467363358,
0.10615342855453491,
0.052251432090997696,
0.0379919707775116,
-0.004761495627462864,
0.07192351669073105,
-0.027414705604314804,
0.07235252857208252,
0.10544399917125702,
-0.2587578296661377,
-0.007961461320519447,
0.06306124478578568,
0.04445499926805496,
0.04122993350028992,
0.01308673806488514,
0.027194391936063766,
0.005103062838315964,
0.041600268334150314,
0.02856472134590149,
-0.023369651287794113,
0.11764396727085114,
-0.045594897121191025,
-0.15010321140289307,
-0.04363927245140076,
0.11637435108423233,
-0.007286194711923599,
-0.12664619088172913,
-0.09708583354949951,
-0.032886043190956116,
0.12394476681947708,
-0.003663545474410057,
-0.019201405346393585,
-0.004202481359243393,
0.014091597869992256,
0.026883114129304886,
-0.09846199303865433,
-0.08750519156455994,
-0.03054068237543106,
-0.03797498717904091,
0.1303543895483017,
0.04653894156217575,
0.05371926352381706,
-0.03609885275363922,
0.08618098497390747,
-0.12145213782787323,
-0.04041232541203499,
-0.05260781943798065,
-0.08136740326881409,
-0.019054340198636055,
0.010079736821353436,
-0.02847735397517681,
-0.08405347913503647,
-0.05871765688061714,
0.11521393060684204,
0.030232060700654984,
0.02759542688727379,
-0.002385204192250967,
0.04284662753343582,
0.07254993170499802,
0.09477198868989944,
-0.03825690969824791,
0.054171137511730194,
0.03610863536596298,
-0.025279082357883453,
0.05950174108147621,
-0.051235709339380264,
-0.10306648910045624,
0.07255171239376068,
-0.00012226030230522156,
0.03712216764688492,
0.026855316013097763,
0.030902199447155,
-0.01249854825437069,
-0.07206043601036072,
0.16678062081336975,
-0.07716765999794006,
-0.005805492401123047,
-0.01454834546893835,
0.01211797259747982,
0.045105867087841034,
0.03424009680747986,
-0.005102606490254402,
-0.04654175415635109,
-0.006857302971184254,
-0.05455786734819412,
-0.024045899510383606,
-0.05413360148668289,
-0.11898480355739594,
0.0010318020358681679,
-0.033978261053562164,
-0.032579198479652405,
-0.14125844836235046,
-0.21589404344558716,
-0.016334936022758484,
0.06532993912696838,
-0.0030974848195910454,
-0.011812474578619003,
0.02666259929537773,
0.018028853461146355,
-0.01902243122458458,
0.011095423251390457,
-0.04726772755384445,
0.0006615985184907913,
-0.004411911591887474,
-0.033184848725795746,
0.05747167766094208,
-0.04413747414946556,
0.021681636571884155,
-0.06900257617235184,
0.02426239289343357,
-0.20967267453670502,
0.09007173776626587,
-0.030511971563100815,
-0.0036068670451641083,
-0.03890635445713997,
-0.04312983900308609,
0.007113473489880562,
0.04555663466453552,
-0.007356033660471439,
0.11559433490037918,
-0.13619881868362427,
-0.05047887563705444,
0.18763917684555054,
-0.16249001026153564,
-0.0003831535577774048,
0.1017506867647171,
-0.049395330250263214,
0.06009625643491745,
0.13196858763694763,
0.09762619435787201,
0.09014753997325897,
-0.07539363950490952,
0.014826330356299877,
0.06449305266141891,
-0.0702144205570221,
0.05738677829504013,
0.09236203879117966,
-0.024725453928112984,
-0.1369423270225525,
0.02764304354786873,
-0.07514899969100952,
-0.00794593058526516,
-0.026176979765295982,
-0.021455679088830948,
0.005644351243972778,
-0.0380595363676548,
0.03329665586352348,
0.004355316050350666,
0.019364796578884125,
-0.037153106182813644,
-0.0821823701262474,
0.036908164620399475,
0.07693644613027573,
-0.0710795670747757,
0.04505114257335663,
-0.07032465189695358,
0.06185651198029518,
-0.07474812865257263,
-0.004636487923562527,
-0.16781313717365265,
-0.02213299460709095,
0.0451890304684639,
-0.05503573641180992,
0.04928278923034668,
0.09218712896108627,
0.003608298022300005,
0.1249917596578598,
-0.04016599804162979,
0.005788775160908699,
-0.01037839986383915,
-0.011001382023096085,
-0.05122086778283119,
-0.12323606014251709,
-0.0813828855752945,
-0.0678391307592392,
0.10492029041051865,
-0.07631894201040268,
0.02956758439540863,
-0.07104207575321198,
-0.020667782053351402,
-0.008180676028132439,
-0.058202922344207764,
-0.006917160004377365,
0.007226498797535896,
-0.02967480942606926,
-0.04536077752709389,
0.04838414490222931,
0.05228035897016525,
-0.061802782118320465,
0.08139525353908539,
-0.10348910093307495,
-0.06242811307311058,
0.053860142827034,
0.012437191791832447,
-0.07947492599487305,
0.0884285569190979,
-0.018832018598914146,
-0.013733318075537682,
-0.059668540954589844,
-0.04519595578312874,
0.1912396401166916,
-0.023534785956144333,
0.10255520045757294,
-0.09187373518943787,
0.002091147005558014,
0.02824132889509201,
-0.049997568130493164,
-0.016791898757219315,
0.05688053369522095,
0.048922762274742126,
-0.18986167013645172,
0.015003807842731476,
0.053211525082588196,
0.0748511478304863,
0.11140629649162292,
0.027765410020947456,
-0.025515131652355194,
-0.04886290058493614,
-0.01027538999915123,
0.0064963954500854015,
0.05510348826646805,
-0.02566780522465706,
-0.004952546674758196,
0.03311209753155708,
0.05834574997425079,
0.01767946407198906,
-0.08323225378990173,
0.036254920065402985,
0.06874779611825943,
-0.01857336051762104,
-0.04250740259885788,
-0.026543565094470978,
-0.05876748263835907,
0.06379019469022751,
0.05316239222884178,
0.04160870984196663,
0.025744374841451645,
-0.013813468627631664,
-0.13825643062591553,
0.18688897788524628,
-0.11530708521604538,
-0.25764110684394836,
-0.11166766285896301,
-0.0548008531332016,
-0.0207197405397892,
0.0425713025033474,
0.05831474810838699,
-0.025665856897830963,
-0.04035288840532303,
-0.11469080299139023,
0.06353519856929779,
-0.06488406658172607,
-0.02938171848654747,
-0.010232361033558846,
-0.050187066197395325,
-0.015470434911549091,
-0.12706628441810608,
-0.010786499828100204,
-0.028546158224344254,
-0.07238318026065826,
0.005992094054818153,
-0.03361568972468376,
0.02632030099630356,
0.13689348101615906,
0.03283458203077316,
-0.021186847239732742,
-0.01651754230260849,
0.19355054199695587,
0.011320820078253746,
0.06148510426282883,
0.11618712544441223,
-0.03163330629467964,
0.056654829531908035,
0.04395955801010132,
0.024254102259874344,
-0.04606737568974495,
0.011004830710589886,
-0.017470618709921837,
-0.12087585031986237,
-0.17216521501541138,
-0.07147063314914703,
-0.0031463103368878365,
0.005037080962210894,
0.018065867945551872,
0.03514496237039566,
0.019691260531544685,
0.03895197808742523,
-0.030755702406167984,
0.030492665246129036,
-0.011317025870084763,
0.08063280582427979,
0.029137257486581802,
-0.07371082156896591,
0.09043052792549133,
-0.06095317378640175,
0.017338506877422333,
0.11193730682134628,
-0.061316367238759995,
0.18446987867355347,
0.023515285924077034,
0.058056607842445374,
0.09993614256381989,
0.02224843204021454,
0.05566185340285301,
0.08700695633888245,
-0.04754004627466202,
0.008026277646422386,
-0.06158963218331337,
-0.05187756195664406,
-0.03655609115958214,
0.05138669162988663,
0.030098196119070053,
0.016407165676355362,
-0.11910833418369293,
0.021723687648773193,
-0.002851299475878477,
0.13458269834518433,
0.052736107259988785,
-0.11611748486757278,
-0.12586475908756256,
0.035641688853502274,
-0.04473095014691353,
-0.06449878215789795,
0.026404699310660362,
0.06369584798812866,
-0.15318971872329712,
0.045074619352817535,
-0.00567947793751955,
0.0664677545428276,
-0.09018154442310333,
0.01455194503068924,
-0.04721301794052124,
-0.001975839026272297,
0.006400460842996836,
0.07188564538955688,
-0.13131548464298248,
0.10430030524730682,
0.021563494578003883,
0.04735191911458969,
-0.07909943908452988,
0.0168268121778965,
-0.01166853029280901,
0.10948126018047333,
0.11361168324947357,
0.046216197311878204,
-0.053754787892103195,
-0.017361344769597054,
-0.04637162387371063,
0.01845601014792919,
0.059632617980241776,
-0.07867929339408875,
0.05968340486288071,
0.006521123927086592,
0.007278966251760721,
-0.022311493754386902,
0.017577175050973892,
-0.13260319828987122,
-0.1209215372800827,
0.062207866460084915,
-0.0782645121216774,
-0.10000825673341751,
-0.05750489979982376,
-0.06255565583705902,
-0.04587253928184509,
0.21378865838050842,
-0.11768896132707596,
-0.08935891091823578,
-0.09793059527873993,
-0.01021452248096466,
0.04529450833797455,
-0.0673055499792099,
0.04662233591079712,
-0.038927026093006134,
0.09564293175935745,
-0.048586323857307434,
-0.10999472439289093,
0.035415004938840866,
-0.11180190742015839,
-0.11798955500125885,
-0.04518324136734009,
0.10736636072397232,
0.11393977701663971,
0.038903698325157166,
0.014552781358361244,
0.010367248207330704,
0.0009378809481859207,
-0.11666157096624374,
0.015177305787801743,
0.13327616453170776,
-0.0003145318478345871,
0.0697573870420456,
-0.06059998273849487,
0.02833712100982666,
-0.014156537130475044,
0.00002946332097053528,
0.13381606340408325,
0.19013625383377075,
-0.0668194591999054,
0.17826005816459656,
0.20224028825759888,
-0.10592466592788696,
-0.19139818847179413,
-0.05752470716834068,
0.0019618337973952293,
0.04576459154486656,
0.05201435461640358,
-0.18538573384284973,
0.0874672383069992,
0.03368410840630531,
-0.03434841334819794,
0.01879606768488884,
-0.23567979037761688,
-0.11177895963191986,
0.08553991466760635,
0.05384175479412079,
0.18916025757789612,
-0.079831562936306,
-0.04101470112800598,
-0.0177136417478323,
-0.041070930659770966,
0.049337077885866165,
-0.03825727850198746,
0.08912219852209091,
0.00835280492901802,
-0.0299492496997118,
0.0037263156846165657,
-0.03003634698688984,
0.10015711188316345,
0.03734289109706879,
0.019736330956220627,
-0.07097865641117096,
-0.004171047359704971,
0.11711807548999786,
-0.03871844708919525,
0.09762681275606155,
0.04390638321638107,
0.07648098468780518,
-0.09846538305282593,
-0.059572894126176834,
-0.07384353131055832,
0.043808095157146454,
-0.04187403619289398,
-0.0550636388361454,
-0.06469917297363281,
0.056814730167388916,
0.03736187145113945,
0.011370979249477386,
-0.0019457880407571793,
-0.03561025485396385,
0.0423855260014534,
0.09290330111980438,
0.07971556484699249,
-0.02517353743314743,
-0.07036890089511871,
-0.05184148997068405,
-0.0482468456029892,
0.06855116784572601,
-0.09549819678068161,
0.023597851395606995,
0.025093451142311096,
0.011563261039555073,
0.0892145112156868,
0.0318479984998703,
-0.13963280618190765,
0.013959171250462532,
0.033189527690410614,
-0.12466108798980713,
-0.11096173524856567,
-0.019044462591409683,
0.030289940536022186,
-0.03803225979208946,
0.05300300940871239,
0.14913278818130493,
-0.0353638231754303,
-0.03263699635863304,
-0.047460153698921204,
0.03812800347805023,
-0.022902287542819977,
0.048479728400707245,
0.06423051655292511,
0.030739596113562584,
-0.07081182301044464,
0.07252097874879837,
0.03931717574596405,
-0.035039737820625305,
0.04233141615986824,
0.03765040636062622,
-0.09277200698852539,
-0.07838834822177887,
-0.05823839083313942,
0.08686135709285736,
-0.021672068163752556,
-0.04886488616466522,
0.0020516403019428253,
-0.08052247762680054,
0.06685849279165268,
0.07451771944761276,
0.048228874802589417,
0.03963451087474823,
-0.08628122508525848,
0.016131188720464706,
-0.05342765152454376,
0.03504600375890732,
-0.027455149218440056,
-0.006032615900039673,
-0.056840576231479645,
0.07035386562347412,
0.06352119147777557,
0.10017790645360947,
-0.0352577343583107,
-0.0757976621389389,
-0.08253753185272217,
-0.014792298898100853,
-0.06756549328565598,
-0.029726441949605942,
-0.07565651834011078,
-0.007191954646259546,
0.0020734788849949837,
-0.006330354139208794,
0.023463888093829155,
0.03586646541953087,
-0.042929619550704956,
-0.018241731449961662,
-0.03975415602326393,
0.036104314029216766,
-0.0647660493850708,
0.005354677326977253,
0.017888305708765984,
-0.03643741086125374,
0.09498405456542969,
0.038993969559669495,
-0.010706407949328423,
0.04654897749423981,
-0.025454415008425713,
0.03634138032793999,
-0.019783401861786842,
0.0013316834811121225,
-0.024881727993488312,
-0.11089780926704407,
-0.004224823787808418,
0.004293061792850494,
-0.0249085184186697,
0.009555588476359844,
0.06440099328756332,
-0.07198293507099152,
0.08641209453344345,
0.04443962499499321,
-0.03270033746957779,
-0.07025526463985443,
0.03865271061658859,
-0.01731887459754944,
0.03118269331753254,
0.0700625628232956,
-0.03374388441443443,
0.053830403834581375,
-0.09970280528068542,
-0.028239507228136063,
0.004044841974973679,
-0.0007507111877202988,
-0.009152686223387718,
-0.05416866019368172,
-0.00493896659463644,
0.00827326625585556,
0.17232440412044525,
-0.022752106189727783,
0.03807578980922699,
0.013291588053107262,
0.00563281774520874,
0.04959319904446602,
-0.013059072196483612,
0.0734366774559021,
-0.006064940243959427,
-0.024626292288303375,
-0.016516245901584625,
0.036900606006383896,
0.005697952583432198,
0.0037547536194324493,
0.1400551199913025,
0.045933496206998825,
0.08627833425998688,
0.07560449838638306,
0.01610565185546875,
0.01944086328148842,
-0.1363057643175125,
-0.08918972313404083,
0.0037899957969784737,
0.055146001279354095,
-0.018156424164772034,
0.01864307001233101,
0.09338858723640442,
-0.08962181210517883,
0.06990449130535126,
0.050337281078100204,
-0.04942036420106888,
-0.12891410291194916,
-0.19523456692695618,
-0.026450494304299355,
-0.03035772778093815,
-0.012859157286584377,
-0.08890818059444427,
0.01779312640428543,
0.08723758161067963,
0.021745622158050537,
-0.010229578241705894,
0.08904298394918442,
-0.10443273186683655,
-0.032051291316747665,
0.041335586458444595,
-0.02615753933787346,
0.017119266092777252,
0.052620239555835724,
0.020623227581381798,
-0.0024232808500528336,
0.04751173406839371,
0.04210551083087921,
0.045504555106163025,
0.029975159093737602,
0.05135083943605423,
-0.02585756406188011,
-0.07510138303041458,
-0.033578116446733475,
-0.0034720308613032103,
0.054839152842760086,
0.13003602623939514,
0.024631258100271225,
-0.06697699427604675,
0.0061521101742982864,
0.11169473081827164,
-0.0314774364233017,
-0.051603082567453384,
-0.1096067726612091,
0.24212095141410828,
0.021222427487373352,
0.004371168091893196,
-0.0025340444408357143,
-0.04713182523846626,
0.003552710637450218,
0.20812232792377472,
0.22204115986824036,
0.0008297048043459654,
-0.008463374339044094,
0.010418811812996864,
-0.011486897245049477,
0.03866640850901604,
0.14665371179580688,
0.0024326276034116745,
0.2531917095184326,
-0.04625187814235687,
0.03540249168872833,
-0.04386116936802864,
-0.03798685595393181,
-0.10167165100574493,
0.07458782941102982,
-0.013902781531214714,
0.007736169267445803,
-0.032237499952316284,
0.07283543050289154,
-0.0368502140045166,
-0.17837917804718018,
0.00025742314755916595,
-0.0008343900553882122,
-0.06202830374240875,
0.013947770930826664,
-0.0010596700012683868,
0.020936226472258568,
0.08227872848510742,
-0.01883227750658989,
-0.006656897719949484,
0.13352760672569275,
0.019323695451021194,
-0.09847945719957352,
-0.05581897497177124,
0.11500077694654465,
0.013539930805563927,
0.14215153455734253,
0.012383164837956429,
0.0813184380531311,
0.08577725291252136,
0.022117631509900093,
-0.09376748651266098,
0.04030632600188255,
-0.01860245317220688,
-0.029057461768388748,
0.006490934174507856,
0.11069940775632858,
-0.009734743274748325,
0.056307338178157806,
0.029682964086532593,
-0.09321068972349167,
0.059652261435985565,
0.008679192513227463,
-0.03695064038038254,
-0.08049345761537552,
0.08277218043804169,
-0.09120799601078033,
0.15459425747394562,
0.1203724592924118,
-0.01569037139415741,
-0.045119330286979675,
-0.028458597138524055,
0.019334349781274796,
-0.0004999227821826935,
0.05733940750360489,
-0.02536582574248314,
-0.13387903571128845,
0.018767276778817177,
-0.08632023632526398,
0.02915840595960617,
-0.24532833695411682,
-0.09020106494426727,
0.02953743375837803,
-0.01696465164422989,
-0.016573317348957062,
0.052214182913303375,
0.04350600764155388,
0.024432888254523277,
-0.035817451775074005,
0.013518104329705238,
-0.03871503099799156,
0.05967913940548897,
-0.10981131345033646,
-0.09331195056438446
] |
null | null | transformers | # MultiBERTs Seed 2 Checkpoint 1500k (uncased)
Seed 2 intermediate checkpoint 1500k of the MultiBERTs (pretrained BERT) model, pretrained on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multiberts-seed-2). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
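A minimal sketch of such a fine-tuning setup, assuming a two-label sequence classification task (the example texts, labels and single gradient step below are placeholders, not a full training loop):

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-1500k')
model = BertForSequenceClassification.from_pretrained('multiberts-seed-2-1500k', num_labels=2)

# One illustrative batch; a real setup iterates over a labelled dataset.
texts = ["a sentence to classify", "another sentence"]
labels = torch.tensor([0, 1])
batch = tokenizer(texts, padding=True, return_tensors='pt')

outputs = model(**batch, labels=labels)
outputs.loss.backward()  # gradients for a single optimization step
```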
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-1500k')
model = BertModel.from_pretrained("multiberts-seed-2-1500k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
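Continuing from the `output` object above, one common choice of fixed-size sentence feature is the final hidden state of the `[CLS]` token:

```python
# Shape: [batch_size, hidden_size]; the vector for the [CLS] token of each input
cls_features = output.last_hidden_state[:, 0]
```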
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (see the sketch after this list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
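A minimal sketch of this 80/10/10 rule over an already-tokenized sentence (the `vocab` argument and plain string tokens are simplifications; the real pipeline works on token ids and never masks special tokens):

```python
import random

def mask_tokens(tokens, vocab, mask_token="[MASK]", mlm_prob=0.15):
    """Illustrative 80/10/10 masking over a list of wordpiece tokens."""
    masked, labels = [], []
    for tok in tokens:
        if random.random() < mlm_prob:
            labels.append(tok)                       # the model must predict the original token
            r = random.random()
            if r < 0.8:
                masked.append(mask_token)            # 80%: replace with [MASK]
            elif r < 0.9:
                masked.append(random.choice(vocab))  # 10%: replace with a random token
            else:
                masked.append(tok)                   # 10%: keep the token unchanged
        else:
            masked.append(tok)
            labels.append(None)                      # not a prediction target
    return masked, labels
```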
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
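As a sketch of this optimizer and schedule using the standard PyTorch and `transformers` equivalents (`AdamW` with decoupled weight decay stands in for the Adam variant used in pretraining; `model` is the loaded checkpoint):

```python
import torch
from transformers import get_linear_schedule_with_warmup

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10_000,       # linear warmup over the first 10,000 steps
    num_training_steps=2_000_000,  # two million steps in total, after which the rate has decayed linearly to zero
)
```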
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-2"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-2-1500k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-2",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 2 Checkpoint 1500k (uncased)
Seed 2 intermediate checkpoint 1500k of the MultiBERTs (pretrained BERT) model, pretrained on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitations and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 2 Checkpoint 1500k (uncased)\nSeed 2 intermediate checkpoint 1500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 2 Checkpoint 1500k (uncased)\nSeed 2 intermediate checkpoint 1500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 2 Checkpoint 1500k (uncased)\nSeed 2 intermediate checkpoint 1500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08567889034748077,
0.0034331304486840963,
-0.002413335721939802,
0.07048679888248444,
0.08573819696903229,
0.003247684333473444,
0.11454986780881882,
0.05160076916217804,
-0.030906274914741516,
0.022611618041992188,
0.0916847437620163,
0.025416944175958633,
0.04447924345731735,
0.06360253691673279,
0.09535485506057739,
-0.2554834485054016,
0.04854312539100647,
-0.0655580386519432,
0.0428585410118103,
0.07394038140773773,
0.10068390518426895,
-0.07105369865894318,
0.061384137719869614,
0.03823652118444443,
-0.08446097373962402,
-0.017757263034582138,
-0.012897210195660591,
-0.03298544883728027,
0.09990433603525162,
0.07310697436332703,
0.06160559505224228,
0.0028114449232816696,
0.06055085361003876,
-0.09118381142616272,
0.016541965305805206,
0.0439351424574852,
0.000010485760867595673,
0.027593085542321205,
-0.007461069151759148,
0.017546966671943665,
0.103520967066288,
0.0414186529815197,
0.07531435787677765,
0.03526456281542778,
-0.09507596492767334,
-0.10934621095657349,
-0.08021455258131027,
0.11107558757066727,
0.05305880308151245,
0.035431597381830215,
-0.0030840206891298294,
0.06673474609851837,
-0.028844228014349937,
0.07307589054107666,
0.106107696890831,
-0.2503553032875061,
-0.010027623735368252,
0.06845149397850037,
0.04181849956512451,
0.04657195508480072,
0.014976158738136292,
0.02733813039958477,
0.006024681031703949,
0.040737085044384,
0.02706187590956688,
-0.023604605346918106,
0.11689773201942444,
-0.0441335029900074,
-0.14913266897201538,
-0.04517073184251785,
0.11452889442443848,
-0.00825987197458744,
-0.125329852104187,
-0.0948123186826706,
-0.03243632614612579,
0.12019596993923187,
-0.0025781365111470222,
-0.020649459213018417,
-0.004398951772600412,
0.013022108003497124,
0.028634773567318916,
-0.10087281465530396,
-0.08463704586029053,
-0.03236585110425949,
-0.03594065085053444,
0.1325586438179016,
0.046063780784606934,
0.055078376084566116,
-0.03734150528907776,
0.08499930799007416,
-0.12595297396183014,
-0.03923157975077629,
-0.05391993373632431,
-0.08110055327415466,
-0.019554151222109795,
0.008537148125469685,
-0.027119755744934082,
-0.08931620419025421,
-0.05924901366233826,
0.11843737214803696,
0.029700547456741333,
0.0268125981092453,
-0.003818504512310028,
0.0424843430519104,
0.07187778502702713,
0.09284567087888718,
-0.04050954803824425,
0.05404366925358772,
0.03620263561606407,
-0.021009307354688644,
0.05589969828724861,
-0.05108167976140976,
-0.10271012783050537,
0.071880042552948,
-0.0022049499675631523,
0.039095841348171234,
0.027556613087654114,
0.028928712010383606,
-0.01148159895092249,
-0.07248102873563766,
0.16593030095100403,
-0.07539792358875275,
-0.007903420366346836,
-0.016911383718252182,
0.010495973750948906,
0.03703313320875168,
0.034630388021469116,
-0.0034849720541387796,
-0.0474676750600338,
-0.0026542823761701584,
-0.05419611185789108,
-0.023384056985378265,
-0.05508666858077049,
-0.11946141719818115,
0.0004970724694430828,
-0.03947855532169342,
-0.03183164447546005,
-0.14210712909698486,
-0.22024989128112793,
-0.016267316415905952,
0.06542431563138962,
-0.004618425853550434,
-0.01417013444006443,
0.025730401277542114,
0.015986984595656395,
-0.020499490201473236,
0.011315722018480301,
-0.04608406499028206,
0.002094106748700142,
-0.005738822743296623,
-0.03333932161331177,
0.05774252116680145,
-0.04252375662326813,
0.02279101125895977,
-0.06716585159301758,
0.022623669356107712,
-0.2109023779630661,
0.09262951463460922,
-0.03186536580324173,
-0.00013928301632404327,
-0.038027334958314896,
-0.04025135934352875,
0.004327081143856049,
0.045164939016103745,
-0.004721987526863813,
0.11852855980396271,
-0.13927143812179565,
-0.04943083971738815,
0.1833077073097229,
-0.16220508515834808,
-0.002818971872329712,
0.1009378731250763,
-0.04918941482901573,
0.05906687676906586,
0.1301836222410202,
0.09784659743309021,
0.09174368530511856,
-0.0726170539855957,
0.016445737332105637,
0.06366175413131714,
-0.07133574783802032,
0.05347580462694168,
0.09218401461839676,
-0.024912022054195404,
-0.14054222404956818,
0.027734018862247467,
-0.07522560656070709,
-0.0066060470417141914,
-0.027610119432210922,
-0.022951435297727585,
0.005552288144826889,
-0.04184645414352417,
0.030481524765491486,
0.0033553228713572025,
0.019736697897315025,
-0.037186600267887115,
-0.07947365939617157,
0.03517806529998779,
0.07544079422950745,
-0.0670013278722763,
0.04411793500185013,
-0.06948312371969223,
0.061622850596904755,
-0.07782283425331116,
-0.0046142470091581345,
-0.16876855492591858,
-0.014564383774995804,
0.04445655643939972,
-0.05467928946018219,
0.049811165779829025,
0.08965791016817093,
0.004468931816518307,
0.12304890155792236,
-0.04095868021249771,
0.006089287810027599,
-0.009907076135277748,
-0.011612525209784508,
-0.052080750465393066,
-0.12016718834638596,
-0.07777772098779678,
-0.06712566316127777,
0.0933438241481781,
-0.07136105746030807,
0.030824467539787292,
-0.07214432954788208,
-0.02220231294631958,
-0.0069593749940395355,
-0.05904245376586914,
-0.006578519940376282,
0.008665207773447037,
-0.028456125408411026,
-0.04443174973130226,
0.0476691871881485,
0.05173451453447342,
-0.05791264772415161,
0.07836213707923889,
-0.09728371351957321,
-0.06042613834142685,
0.052437424659729004,
0.017307432368397713,
-0.07945312559604645,
0.09102441370487213,
-0.01829768344759941,
-0.012185527011752129,
-0.06289314478635788,
-0.04397806525230408,
0.1901043802499771,
-0.021572772413492203,
0.10250537097454071,
-0.09371677041053772,
0.003994757775217295,
0.029983676970005035,
-0.047833412885665894,
-0.01701764576137066,
0.057934872806072235,
0.05432909354567528,
-0.18321676552295685,
0.014172494411468506,
0.04702131450176239,
0.07611491531133652,
0.10875126719474792,
0.029913267120718956,
-0.02328072115778923,
-0.04935235530138016,
-0.010569553822278976,
0.007295629475265741,
0.055554311722517014,
-0.02735064923763275,
-0.004664129577577114,
0.03155725449323654,
0.058686815202236176,
0.018456093966960907,
-0.08301781117916107,
0.03525274619460106,
0.07062742114067078,
-0.01832195185124874,
-0.04551064223051071,
-0.026478348299860954,
-0.05809204280376434,
0.06405971944332123,
0.052901893854141235,
0.041998445987701416,
0.024059990420937538,
-0.01489286869764328,
-0.1375305950641632,
0.18487218022346497,
-0.11462950706481934,
-0.26260995864868164,
-0.11273778975009918,
-0.04913687705993652,
-0.02223125472664833,
0.0419563390314579,
0.05819875746965408,
-0.02724093198776245,
-0.04105031490325928,
-0.11422222852706909,
0.06637605279684067,
-0.06399156153202057,
-0.029002143070101738,
-0.0065918415784835815,
-0.04889703541994095,
-0.016269154846668243,
-0.12732255458831787,
-0.010295955464243889,
-0.02966598980128765,
-0.0719047337770462,
0.004496649838984013,
-0.031106971204280853,
0.02841513603925705,
0.1350722312927246,
0.030544668436050415,
-0.020209603011608124,
-0.016439884901046753,
0.19741444289684296,
0.011756831780076027,
0.06029105558991432,
0.11418870091438293,
-0.03166201338171959,
0.05583956092596054,
0.05050160735845566,
0.025555264204740524,
-0.0464346781373024,
0.012083180248737335,
-0.016084032133221626,
-0.12089674919843674,
-0.17222531139850616,
-0.07004931569099426,
-0.003429051022976637,
0.006279294844716787,
0.01855483464896679,
0.03432054817676544,
0.021089991554617882,
0.0393468402326107,
-0.02944948710501194,
0.029993992298841476,
-0.011640988290309906,
0.07826822251081467,
0.023101214319467545,
-0.0737292617559433,
0.09053464233875275,
-0.059403594583272934,
0.01852688193321228,
0.11138886213302612,
-0.061515066772699356,
0.18625038862228394,
0.02283335104584694,
0.057720914483070374,
0.09973809123039246,
0.020882755517959595,
0.056705474853515625,
0.08947808295488358,
-0.048946138471364975,
0.007655859459191561,
-0.05984395742416382,
-0.050075970590114594,
-0.03870878741145134,
0.05025061219930649,
0.031036050990223885,
0.020084016025066376,
-0.12002094835042953,
0.028223764151334763,
-0.0013295388780534267,
0.13710781931877136,
0.046746522188186646,
-0.11018602550029755,
-0.12423761188983917,
0.0353315994143486,
-0.045280903577804565,
-0.06406283378601074,
0.028313999995589256,
0.06837226450443268,
-0.15137314796447754,
0.04097786918282509,
-0.006685348227620125,
0.06830675899982452,
-0.08728451281785965,
0.014808489941060543,
-0.048104189336299896,
-0.001894211396574974,
0.0049967654049396515,
0.06962075084447861,
-0.1375001221895218,
0.10597340017557144,
0.021933499723672867,
0.04842527210712433,
-0.0781630128622055,
0.01710214838385582,
-0.011447368189692497,
0.10871395468711853,
0.11693751811981201,
0.04756464809179306,
-0.051283251494169235,
-0.01966271735727787,
-0.047083184123039246,
0.017568286508321762,
0.05773799121379852,
-0.07940680533647537,
0.05751695856451988,
0.007709974888712168,
0.007254779804497957,
-0.02291359007358551,
0.019530508667230606,
-0.13565529882907867,
-0.12099358439445496,
0.06281180679798126,
-0.0831231027841568,
-0.10255804657936096,
-0.0581994354724884,
-0.06175072491168976,
-0.04710777848958969,
0.20686647295951843,
-0.11966845393180847,
-0.0895700603723526,
-0.09875714778900146,
-0.009368721395730972,
0.044949721544981,
-0.06842468678951263,
0.04720791429281235,
-0.03890339657664299,
0.09409286826848984,
-0.05211484432220459,
-0.10859976708889008,
0.033055663108825684,
-0.11354763805866241,
-0.11558715254068375,
-0.044310562312603,
0.10431116819381714,
0.11407274752855301,
0.039350476115942,
0.011425782926380634,
0.010115038603544235,
0.0036461781710386276,
-0.11628055572509766,
0.014999406412243843,
0.13216401636600494,
-0.006794398650527,
0.07159654796123505,
-0.05698452144861221,
0.026509318500757217,
-0.015137730166316032,
-0.0015153773128986359,
0.13280919194221497,
0.18899843096733093,
-0.06614229083061218,
0.17837804555892944,
0.20148614048957825,
-0.10694576799869537,
-0.19546130299568176,
-0.053483907133340836,
0.0024849800392985344,
0.04643729329109192,
0.049900420010089874,
-0.18390774726867676,
0.08692607283592224,
0.032744258642196655,
-0.03347776085138321,
0.012754354625940323,
-0.23845060169696808,
-0.1141464039683342,
0.08493531495332718,
0.05425187945365906,
0.19043266773223877,
-0.07700694352388382,
-0.041701823472976685,
-0.014795731753110886,
-0.043244972825050354,
0.048501111567020416,
-0.03049713559448719,
0.08643946796655655,
0.007242104038596153,
-0.03202841430902481,
0.005019477568566799,
-0.031134728342294693,
0.09771399199962616,
0.038161203265190125,
0.02232450433075428,
-0.07010167837142944,
-0.010992372408509254,
0.10737456381320953,
-0.04036128893494606,
0.09665189683437347,
0.04481562227010727,
0.07798134535551071,
-0.09433789551258087,
-0.058459486812353134,
-0.0737258568406105,
0.04255054518580437,
-0.04242732375860214,
-0.05494358390569687,
-0.06413134932518005,
0.0594804584980011,
0.03804495185613632,
0.011590981855988503,
0.001802852377295494,
-0.035164374858140945,
0.04171124845743179,
0.09623410552740097,
0.08094120770692825,
-0.03590801730751991,
-0.06807376444339752,
-0.04933483153581619,
-0.047284603118896484,
0.06866531819105148,
-0.09722964465618134,
0.025161027908325195,
0.027461180463433266,
0.01254473626613617,
0.09066378325223923,
0.032260965555906296,
-0.13722234964370728,
0.012157872319221497,
0.033927369862794876,
-0.12215279787778854,
-0.11551845818758011,
-0.022385485470294952,
0.03004675731062889,
-0.03869392350316048,
0.04892941564321518,
0.14795373380184174,
-0.03823244944214821,
-0.03239007294178009,
-0.04897581785917282,
0.039487067610025406,
-0.021422214806079865,
0.05079708248376846,
0.06369965523481369,
0.0320524200797081,
-0.07169780880212784,
0.0760975182056427,
0.038970544934272766,
-0.034539349377155304,
0.04028988629579544,
0.0382838100194931,
-0.09210944175720215,
-0.07740087807178497,
-0.05578695237636566,
0.08903340995311737,
-0.023542163893580437,
-0.04598619043827057,
0.0008318666368722916,
-0.080949567258358,
0.06813293695449829,
0.08052703738212585,
0.04719942808151245,
0.035656798630952835,
-0.08539259433746338,
0.01680145226418972,
-0.05232194811105728,
0.03924454748630524,
-0.027044886723160744,
-0.006818439811468124,
-0.05538022518157959,
0.07105065882205963,
0.062046222388744354,
0.09848733246326447,
-0.034179531037807465,
-0.07364560663700104,
-0.08374544233083725,
-0.013455092906951904,
-0.07070177048444748,
-0.02952950820326805,
-0.07622935622930527,
-0.005834211595356464,
0.00206864345818758,
-0.007105495780706406,
0.02206682413816452,
0.036034949123859406,
-0.04408280923962593,
-0.017665456980466843,
-0.03954675421118736,
0.035593025386333466,
-0.06265939772129059,
0.00552708376199007,
0.018536336719989777,
-0.034310609102249146,
0.09318123757839203,
0.035531606525182724,
-0.011705351993441582,
0.04574111849069595,
-0.02528981864452362,
0.03501854091882706,
-0.022139273583889008,
0.0029118554666638374,
-0.023338142782449722,
-0.11105119436979294,
-0.0016146914567798376,
0.004232581704854965,
-0.02640029415488243,
0.009933040477335453,
0.06120234727859497,
-0.07201990485191345,
0.08830387890338898,
0.04522265866398811,
-0.029660865664482117,
-0.07084520161151886,
0.04014306515455246,
-0.01472320593893528,
0.031178468838334084,
0.06917216628789902,
-0.03246843069791794,
0.05408581718802452,
-0.09937173873186111,
-0.027221251279115677,
0.004557873122394085,
-0.0009125713258981705,
-0.016318442299962044,
-0.053532443940639496,
-0.004790842533111572,
0.007920539006590843,
0.169900581240654,
-0.019173327833414078,
0.036078110337257385,
0.014439171180129051,
0.00896698608994484,
0.04591241478919983,
-0.013336146250367165,
0.06882654130458832,
-0.005058374255895615,
-0.026279732584953308,
-0.016475733369588852,
0.038245055824518204,
0.006558466702699661,
0.00012299232184886932,
0.14449921250343323,
0.04610539600253105,
0.08946570008993149,
0.07442059367895126,
0.017973128706216812,
0.01824662834405899,
-0.13019216060638428,
-0.08802549540996552,
0.0018276148475706577,
0.05390998348593712,
-0.019130049273371696,
0.011315187439322472,
0.09272821992635727,
-0.09014277160167694,
0.07285499572753906,
0.05057869851589203,
-0.048767559230327606,
-0.1297047883272171,
-0.193101167678833,
-0.025478778406977654,
-0.030630825087428093,
-0.011655539274215698,
-0.0910273939371109,
0.016931790858507156,
0.08871670067310333,
0.021639689803123474,
-0.009497630409896374,
0.08656971156597137,
-0.10812176764011383,
-0.0283980593085289,
0.04165499284863472,
-0.02620687521994114,
0.018857432529330254,
0.050065021961927414,
0.020938683301210403,
-0.002478634938597679,
0.04843234270811081,
0.041463565081357956,
0.04526146501302719,
0.02657311037182808,
0.049774833023548126,
-0.024121373891830444,
-0.07426562160253525,
-0.03284081444144249,
-0.007059351075440645,
0.05274133384227753,
0.1355816125869751,
0.0248698852956295,
-0.06454364955425262,
0.007332915905863047,
0.11511677503585815,
-0.03136003762483597,
-0.052019234746694565,
-0.11188943684101105,
0.2325512021780014,
0.023579644039273262,
0.00840235035866499,
-0.0015497435815632343,
-0.04784774407744408,
0.0007140953093767166,
0.21313557028770447,
0.22235596179962158,
0.0018354677595198154,
-0.009120448492467403,
0.011221339926123619,
-0.011377288028597832,
0.038675010204315186,
0.1437971293926239,
0.002724533900618553,
0.24872809648513794,
-0.04627367854118347,
0.03897906094789505,
-0.04137172922492027,
-0.04029563069343567,
-0.09508700668811798,
0.07289762049913406,
-0.013254920020699501,
0.009066141210496426,
-0.03277972340583801,
0.07150988280773163,
-0.040804266929626465,
-0.17611216008663177,
0.0003547174856066704,
-0.003908708691596985,
-0.06104131042957306,
0.01200059987604618,
-0.001406978815793991,
0.02022949606180191,
0.08069007843732834,
-0.017179373651742935,
-0.006490942556411028,
0.13205981254577637,
0.01856483519077301,
-0.09686658531427383,
-0.06287968158721924,
0.11480078846216202,
0.018746834248304367,
0.14196087419986725,
0.013554610311985016,
0.0759371742606163,
0.08591070771217346,
0.02229316160082817,
-0.09648638963699341,
0.03655588999390602,
-0.019240209832787514,
-0.027376390993595123,
0.0036910921335220337,
0.1133069396018982,
-0.009330074302852154,
0.06191786006093025,
0.02883257530629635,
-0.09356091916561127,
0.05760428309440613,
0.007474988698959351,
-0.03598378598690033,
-0.0809042826294899,
0.08409427106380463,
-0.08988931775093079,
0.15332871675491333,
0.12153535336256027,
-0.014405805617570877,
-0.043194204568862915,
-0.028769992291927338,
0.019506627693772316,
-0.00206072349101305,
0.0653570145368576,
-0.025209886953234673,
-0.13558119535446167,
0.01989050768315792,
-0.08890020102262497,
0.028503620997071266,
-0.2458677589893341,
-0.09047739952802658,
0.028819510713219643,
-0.017322564497590065,
-0.015081826597452164,
0.05359441787004471,
0.04976028949022293,
0.025911472737789154,
-0.034796133637428284,
0.012200160883367062,
-0.03891415521502495,
0.05993125960230827,
-0.10500919818878174,
-0.0917043536901474
] |
null | null | transformers | # MultiBERTs Seed 2 Checkpoint 1600k (uncased)
Seed 2 intermediate checkpoint 1600k of the MultiBERTs (pretrained BERT) model, pretrained on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multiberts-seed-2). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
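For illustration, the next-sentence-prediction head described above can be queried directly (a sketch; the example sentences are placeholders and the checkpoint name matches the snippet later in this card):

```python
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained("multiberts-seed-2-1600k")
model = BertForNextSentencePrediction.from_pretrained("multiberts-seed-2-1600k")

encoding = tokenizer("The weather was terrible.", "So we stayed inside all day.", return_tensors="pt")
logits = model(**encoding).logits
# index 0: sentence B follows sentence A; index 1: sentence B is a random sentence
print(torch.softmax(logits, dim=-1))
```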
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-2-1600k')
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-2-1600k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
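The `output` object above exposes `output.last_hidden_state` (token-level features, of shape `(batch_size, sequence_length, 768)` for these BERT-base-sized checkpoints) and `output.pooler_output` (a pooled `(batch_size, 768)` vector), assuming a recent version of `transformers` that returns model outputs as objects.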
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
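A minimal sketch of this masking scheme in Python (illustrative only; the `tokens` and `vocab` arguments stand in for the actual WordPiece pipeline and vocabulary):

```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15):
    """Apply the 15% / 80-10-10 masking described above to a list of WordPiece tokens."""
    labels = [None] * len(tokens)  # None = token is not an MLM prediction target
    for i, token in enumerate(tokens):
        if random.random() < mask_prob:     # 15% of tokens are selected
            labels[i] = token               # the original token becomes the target
            r = random.random()
            if r < 0.8:                     # 80%: replace with [MASK]
                tokens[i] = "[MASK]"
            elif r < 0.9:                   # 10%: replace with a random vocabulary token
                tokens[i] = random.choice(vocab)
            # remaining 10%: leave the token unchanged
    return tokens, labels
```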
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
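A rough PyTorch sketch of this optimization setup (hyper-parameters taken from the paragraph above; the original training ran on TPUs with Google's own code, so this is only an approximation, with `AdamW` standing in for Adam with weight decay):

```python
import torch
from transformers import BertConfig, BertForPreTraining, get_linear_schedule_with_warmup

model = BertForPreTraining(BertConfig())  # BERT-base-sized configuration
optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01
)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=2_000_000
)
# inside the training loop (sketch): loss.backward(); optimizer.step(); scheduler.step()
```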
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-2"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-2-1600k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-2",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 2 Checkpoint 1600k (uncased)
Seed 2 intermediate checkpoint 1600k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 2 Checkpoint 1600k (uncased)\nSeed 2 intermediate checkpoint 1600k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 2 Checkpoint 1600k (uncased)\nSeed 2 intermediate checkpoint 1600k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 2 Checkpoint 1600k (uncased)\nSeed 2 intermediate checkpoint 1600k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08495934307575226,
-0.0027231730055063963,
-0.0023379260674118996,
0.0721084326505661,
0.08455507457256317,
0.0031687691807746887,
0.11070722341537476,
0.051081281155347824,
-0.03235810250043869,
0.02225419320166111,
0.09091950953006744,
0.025671321898698807,
0.044856924563646317,
0.06407113373279572,
0.09713862836360931,
-0.2534027099609375,
0.046356718987226486,
-0.06650439649820328,
0.0472685731947422,
0.07440654933452606,
0.10058575868606567,
-0.07134927809238434,
0.062492914497852325,
0.038387052714824677,
-0.08865910768508911,
-0.015021254308521748,
-0.01180771179497242,
-0.0347646027803421,
0.10166959464550018,
0.06978663802146912,
0.06411714851856232,
0.004208941012620926,
0.06279736757278442,
-0.08463862538337708,
0.015613188967108727,
0.04510243237018585,
0.001194056123495102,
0.026231281459331512,
-0.00691736675798893,
0.019071584567427635,
0.11135455965995789,
0.03913906589150429,
0.07608610391616821,
0.03468729183077812,
-0.09557425230741501,
-0.11695551872253418,
-0.08025471866130829,
0.10273537784814835,
0.05121936276555061,
0.03719455376267433,
-0.004283016547560692,
0.06793107092380524,
-0.026337362825870514,
0.07349691540002823,
0.10326945781707764,
-0.2515128552913666,
-0.009613348171114922,
0.06604930013418198,
0.04177313297986984,
0.05086130648851395,
0.01549240667372942,
0.02585572935640812,
0.005933824926614761,
0.041176702827215195,
0.028229694813489914,
-0.024251863360404968,
0.11226090788841248,
-0.04373092204332352,
-0.14819872379302979,
-0.044918522238731384,
0.11878015846014023,
-0.009992724284529686,
-0.1258881390094757,
-0.09164857864379883,
-0.03394211828708649,
0.1193021833896637,
-0.001882629469037056,
-0.020436976104974747,
-0.0038106669671833515,
0.013869870454072952,
0.0243355855345726,
-0.09941896796226501,
-0.08714050054550171,
-0.032634537667036057,
-0.03989196941256523,
0.12573249638080597,
0.046461306512355804,
0.05695394054055214,
-0.036464840173721313,
0.08503137528896332,
-0.12070963531732559,
-0.040031321346759796,
-0.05090947821736336,
-0.0820225402712822,
-0.02094656229019165,
0.010695106349885464,
-0.02534259855747223,
-0.08423370122909546,
-0.06016901880502701,
0.11233155429363251,
0.025166302919387817,
0.029134459793567657,
-0.009883638471364975,
0.04199105501174927,
0.06962123513221741,
0.09284055233001709,
-0.037128787487745285,
0.05393432825803757,
0.036307040601968765,
-0.02532372996211052,
0.057930003851652145,
-0.050860561430454254,
-0.10088646411895752,
0.0733790397644043,
-0.0056693414226174355,
0.03764454275369644,
0.0273662731051445,
0.0304286926984787,
-0.012157376855611801,
-0.0704273208975792,
0.16640391945838928,
-0.07781695574522018,
-0.007140797562897205,
-0.016349464654922485,
0.011131638661026955,
0.03862585127353668,
0.03545815125107765,
-0.006734694819897413,
-0.04600737243890762,
-0.007610036060214043,
-0.05387330800294876,
-0.026374656707048416,
-0.055890925228595734,
-0.11925151199102402,
0.001115012913942337,
-0.03649800270795822,
-0.03303508833050728,
-0.14152680337429047,
-0.2192852795124054,
-0.014296969398856163,
0.06333461403846741,
-0.004185794852674007,
-0.011388415470719337,
0.026059649884700775,
0.01546969823539257,
-0.021510105580091476,
0.011318651959300041,
-0.03949339687824249,
0.0012972485274076462,
-0.00511929951608181,
-0.030601881444454193,
0.059121400117874146,
-0.040761739015579224,
0.021453524008393288,
-0.06869852542877197,
0.02236938662827015,
-0.2093583196401596,
0.08953139185905457,
-0.03309636935591698,
-0.0017008725553750992,
-0.037488941103219986,
-0.04157830402255058,
0.00468798540532589,
0.046188950538635254,
-0.0063573154620826244,
0.11598716676235199,
-0.13739459216594696,
-0.05064496025443077,
0.18441279232501984,
-0.16364216804504395,
-0.0008396953344345093,
0.1023864895105362,
-0.04877876117825508,
0.06022420525550842,
0.13001728057861328,
0.10032854974269867,
0.08325516432523727,
-0.07144137471914291,
0.017233390361070633,
0.0629500076174736,
-0.07296478748321533,
0.054309919476509094,
0.09066805243492126,
-0.024242065846920013,
-0.13995061814785004,
0.02906804531812668,
-0.07167831063270569,
-0.009586323983967304,
-0.026410134509205818,
-0.022380439564585686,
0.004905276000499725,
-0.039006564766168594,
0.0306888148188591,
0.005298297386616468,
0.018044181168079376,
-0.041167423129081726,
-0.08241864293813705,
0.026724623516201973,
0.07459282875061035,
-0.06756766140460968,
0.04546447843313217,
-0.0707612857222557,
0.05773698166012764,
-0.07319638878107071,
-0.004720519296824932,
-0.16742973029613495,
-0.017829526215791702,
0.04552442952990532,
-0.04891250282526016,
0.049326490610837936,
0.09192316234111786,
0.0033082356676459312,
0.12354452162981033,
-0.04021637886762619,
0.006660822778940201,
-0.009564865380525589,
-0.010923827067017555,
-0.049434106796979904,
-0.12222974002361298,
-0.07771795988082886,
-0.06789524853229523,
0.09751157462596893,
-0.06978954374790192,
0.029953353106975555,
-0.06734146922826767,
-0.023628460243344307,
-0.007758552208542824,
-0.059431739151477814,
-0.008529276587069035,
0.007599945180118084,
-0.02797740139067173,
-0.04536641389131546,
0.04714593291282654,
0.05242427811026573,
-0.05764294043183327,
0.07927801460027695,
-0.10034582018852234,
-0.06593355536460876,
0.05571625381708145,
0.012775475159287453,
-0.080447718501091,
0.09725593030452728,
-0.01978016085922718,
-0.013224810361862183,
-0.06123344227671623,
-0.04543212801218033,
0.18992513418197632,
-0.02487826719880104,
0.10127915441989899,
-0.09203843772411346,
0.003986375406384468,
0.03039277158677578,
-0.047183044254779816,
-0.01652340218424797,
0.05932642146945,
0.05021221190690994,
-0.18306012451648712,
0.013691075146198273,
0.050670020282268524,
0.07593123614788055,
0.10666486620903015,
0.028811469674110413,
-0.023830588907003403,
-0.047940462827682495,
-0.012841388583183289,
0.007946138270199299,
0.055214233696460724,
-0.03069712221622467,
-0.005788692273199558,
0.032621558755636215,
0.05819976329803467,
0.01833006925880909,
-0.0838930606842041,
0.03702537715435028,
0.06955018639564514,
-0.01828400045633316,
-0.04166532680392265,
-0.026804231107234955,
-0.05812915787100792,
0.06497447937726974,
0.051084212958812714,
0.03992169350385666,
0.024662895128130913,
-0.015028066001832485,
-0.13767625391483307,
0.184393510222435,
-0.11602991819381714,
-0.2590799927711487,
-0.111984983086586,
-0.053595542907714844,
-0.0186602920293808,
0.04381342604756355,
0.05857931822538376,
-0.029305143281817436,
-0.0420820452272892,
-0.11538862437009811,
0.06450493633747101,
-0.06783207505941391,
-0.02891906350851059,
-0.0057152993977069855,
-0.049137406051158905,
-0.016911987215280533,
-0.12583523988723755,
-0.011646389961242676,
-0.029701102524995804,
-0.07857033610343933,
0.007012221030890942,
-0.03276703506708145,
0.02789754420518875,
0.1349039226770401,
0.03019404411315918,
-0.020884791389107704,
-0.016306962817907333,
0.1868986189365387,
0.012521905824542046,
0.06291632354259491,
0.11567631363868713,
-0.03016790561378002,
0.055575430393218994,
0.04805808886885643,
0.0239417664706707,
-0.04579775407910347,
0.012959248386323452,
-0.01872410625219345,
-0.12413079291582108,
-0.17615926265716553,
-0.0696246474981308,
-0.0032676635310053825,
0.010536156594753265,
0.0163038931787014,
0.03383442386984825,
0.014159021899104118,
0.03952394798398018,
-0.02758290059864521,
0.029770543798804283,
-0.0126054547727108,
0.07886715233325958,
0.029231257736682892,
-0.07572287321090698,
0.09145861119031906,
-0.06051989272236824,
0.018426813185214996,
0.11326699703931808,
-0.06430760771036148,
0.18603268265724182,
0.022506089881062508,
0.06057838350534439,
0.09970999509096146,
0.023518424481153488,
0.05465332791209221,
0.0871332585811615,
-0.044070035219192505,
0.009452758356928825,
-0.06133931875228882,
-0.05044808238744736,
-0.040069229900836945,
0.050732798874378204,
0.02472078800201416,
0.017665676772594452,
-0.11915424466133118,
0.029112718999385834,
-0.00108378566801548,
0.12974032759666443,
0.04725898802280426,
-0.11594977974891663,
-0.12467874586582184,
0.03554064780473709,
-0.04291035979986191,
-0.06319927424192429,
0.028716225177049637,
0.06815005093812943,
-0.15102192759513855,
0.041482195258140564,
-0.004924231208860874,
0.06459283828735352,
-0.08690103143453598,
0.015444908291101456,
-0.04796154424548149,
-0.0016571534797549248,
0.005996369291096926,
0.06948216259479523,
-0.13160228729248047,
0.10793311893939972,
0.021859757602214813,
0.047423895448446274,
-0.07955768704414368,
0.01839100569486618,
-0.01120781246572733,
0.10557103157043457,
0.11819705367088318,
0.04623866081237793,
-0.06427688896656036,
-0.017185760661959648,
-0.04613622650504112,
0.019429761916399002,
0.057926490902900696,
-0.07603543996810913,
0.05733632296323776,
0.008478864096105099,
0.008066273294389248,
-0.022192668169736862,
0.021687887609004974,
-0.13192759454250336,
-0.12079410254955292,
0.061201512813568115,
-0.0813264399766922,
-0.11041755229234695,
-0.058207061141729355,
-0.06058080121874809,
-0.045441657304763794,
0.217763751745224,
-0.11774899065494537,
-0.0903613269329071,
-0.09789417684078217,
-0.007638376206159592,
0.044774506241083145,
-0.0672924742102623,
0.04510816931724548,
-0.037873171269893646,
0.09445914626121521,
-0.05033809691667557,
-0.10944733023643494,
0.034626953303813934,
-0.11250051856040955,
-0.11632685363292694,
-0.04569055512547493,
0.10664412379264832,
0.11403992772102356,
0.03958424925804138,
0.012114387936890125,
0.011968666687607765,
0.0011435914784669876,
-0.11539851874113083,
0.01910572312772274,
0.13152587413787842,
-0.002624150365591049,
0.06914521753787994,
-0.05746178328990936,
0.024265002459287643,
-0.012781023979187012,
-0.002521730959415436,
0.13031217455863953,
0.1882576197385788,
-0.06610776484012604,
0.17858830094337463,
0.2012067288160324,
-0.10770541429519653,
-0.19521138072013855,
-0.05251256003975868,
-0.0007091397419571877,
0.04531242698431015,
0.0527421310544014,
-0.1821574866771698,
0.08894206583499908,
0.0338287316262722,
-0.03407232463359833,
0.011042196303606033,
-0.23446032404899597,
-0.11277267336845398,
0.08427795767784119,
0.05559626594185829,
0.18836042284965515,
-0.07769724726676941,
-0.040577441453933716,
-0.015502694994211197,
-0.04194325953722,
0.044261734932661057,
-0.03667199984192848,
0.08610385656356812,
0.00722309947013855,
-0.027066819369792938,
0.004376479424536228,
-0.03156355395913124,
0.09617480635643005,
0.039408937096595764,
0.02015465870499611,
-0.07072700560092926,
-0.0068116821348667145,
0.11517810821533203,
-0.03769736364483833,
0.09480477124452591,
0.047453247010707855,
0.0778982937335968,
-0.0941953957080841,
-0.05818980559706688,
-0.07336507737636566,
0.04399137198925018,
-0.042498886585235596,
-0.05535421893000603,
-0.0657159835100174,
0.0581064373254776,
0.03707627207040787,
0.009827211499214172,
-0.00420679897069931,
-0.03423753380775452,
0.04174162447452545,
0.10049794614315033,
0.07913286238908768,
-0.03180592879652977,
-0.061796240508556366,
-0.0477907694876194,
-0.048604466021060944,
0.06512659788131714,
-0.0983424186706543,
0.02317519672214985,
0.026781700551509857,
0.010891282930970192,
0.08631429076194763,
0.03317487612366676,
-0.13971009850502014,
0.012582370080053806,
0.034837789833545685,
-0.12503376603126526,
-0.10926520824432373,
-0.019748374819755554,
0.031287435442209244,
-0.03768051043152809,
0.04856111481785774,
0.14698487520217896,
-0.03600439801812172,
-0.031844064593315125,
-0.04906962811946869,
0.041080594062805176,
-0.02049374207854271,
0.0504288487136364,
0.06176912412047386,
0.030157065019011497,
-0.07191330194473267,
0.07559913396835327,
0.04033515229821205,
-0.037103451788425446,
0.040391623973846436,
0.04055017977952957,
-0.09337344765663147,
-0.0773218423128128,
-0.05507061257958412,
0.08589779585599899,
-0.022032147273421288,
-0.045984022319316864,
0.0017576366662979126,
-0.08214689046144485,
0.06662024557590485,
0.07122392952442169,
0.04744217172265053,
0.03722694143652916,
-0.08442147076129913,
0.01765613816678524,
-0.053032539784908295,
0.038575731217861176,
-0.028411639854311943,
-0.005914309993386269,
-0.051355257630348206,
0.07419349253177643,
0.06159107759594917,
0.0991232842206955,
-0.03443582355976105,
-0.07129634916782379,
-0.08087757229804993,
-0.012647921219468117,
-0.0647273063659668,
-0.0285190362483263,
-0.07478323578834534,
-0.009061177261173725,
0.002180323237553239,
-0.0068274736404418945,
0.020649973303079605,
0.03722592443227768,
-0.04326823353767395,
-0.018607132136821747,
-0.03987058624625206,
0.036171771585941315,
-0.06392710655927658,
0.005411121062934399,
0.019548796117305756,
-0.03565753996372223,
0.09225808084011078,
0.03775268793106079,
-0.010448652319610119,
0.04790419340133667,
-0.025425415486097336,
0.03547752648591995,
-0.022390397265553474,
0.004055951256304979,
-0.022612210363149643,
-0.10870379209518433,
-0.002106240950524807,
0.00550461933016777,
-0.028372840955853462,
0.008903200738132,
0.062042832374572754,
-0.07185198366641998,
0.09113974869251251,
0.04417804628610611,
-0.03170572593808174,
-0.07077407091856003,
0.039173465222120285,
-0.013275125995278358,
0.031216857954859734,
0.06678451597690582,
-0.03441957011818886,
0.05159125477075577,
-0.10079145431518555,
-0.028313826769590378,
0.004651181865483522,
-0.002106163650751114,
-0.008997144177556038,
-0.05428754538297653,
-0.003291548229753971,
0.007196433842182159,
0.17176689207553864,
-0.017869461327791214,
0.03651096671819687,
0.015454916283488274,
0.004317726939916611,
0.049270253628492355,
-0.014040661975741386,
0.0678141862154007,
-0.008231023326516151,
-0.026034165173768997,
-0.013740376569330692,
0.03468037769198418,
0.007225297391414642,
0.004852697253227234,
0.14665630459785461,
0.04482896253466606,
0.0887347012758255,
0.07324425876140594,
0.017523273825645447,
0.018349988386034966,
-0.13187217712402344,
-0.08916092664003372,
0.0010184324346482754,
0.05362636595964432,
-0.018143033608794212,
0.011213269084692001,
0.09226832538843155,
-0.08905777335166931,
0.07104389369487762,
0.047787103801965714,
-0.04917067289352417,
-0.12990619242191315,
-0.18900476396083832,
-0.026777461171150208,
-0.02845882810652256,
-0.011508756317198277,
-0.09030577540397644,
0.018625792115926743,
0.08963494747877121,
0.022163819521665573,
-0.01052325963973999,
0.08901573717594147,
-0.10542584210634232,
-0.028087496757507324,
0.03974093496799469,
-0.025481514632701874,
0.015868697315454483,
0.054214105010032654,
0.019078180193901062,
-0.0027432572096586227,
0.05063734948635101,
0.04138203710317612,
0.044824279844760895,
0.027173934504389763,
0.04864291101694107,
-0.024687491357326508,
-0.0723307877779007,
-0.0352475643157959,
-0.006798199377954006,
0.05442124977707863,
0.13067251443862915,
0.025379614904522896,
-0.06581822782754898,
0.008178720250725746,
0.1118687093257904,
-0.031896062195301056,
-0.04882071912288666,
-0.10778594017028809,
0.23556631803512573,
0.022361844778060913,
0.004031828138977289,
-0.00006542075425386429,
-0.047177594155073166,
0.002120884135365486,
0.21216996014118195,
0.22351574897766113,
0.003593450877815485,
-0.009967686608433723,
0.009301723912358284,
-0.011716485023498535,
0.037822745740413666,
0.14527136087417603,
0.002987852320075035,
0.2463972270488739,
-0.04618798941373825,
0.03525892645120621,
-0.04286349564790726,
-0.039104677736759186,
-0.09972341358661652,
0.07366788387298584,
-0.013237008824944496,
0.008279173634946346,
-0.03223983943462372,
0.0700238049030304,
-0.0370824858546257,
-0.1841154545545578,
0.005103426985442638,
-0.0032534580677747726,
-0.059181250631809235,
0.0155475540086627,
0.0021429704502224922,
0.019849702715873718,
0.08180584013462067,
-0.019544117152690887,
-0.004720756784081459,
0.12728188931941986,
0.019650179892778397,
-0.09890229254961014,
-0.05659976601600647,
0.11384905129671097,
0.013373900204896927,
0.1451459527015686,
0.013154765591025352,
0.07975729554891586,
0.085413858294487,
0.022585231810808182,
-0.09645633399486542,
0.041786737740039825,
-0.019151540473103523,
-0.0284812580794096,
0.005163917783647776,
0.11301573365926743,
-0.011338431388139725,
0.05661791190505028,
0.029746588319540024,
-0.08987301588058472,
0.058761924505233765,
0.008179150521755219,
-0.03474961593747139,
-0.07923516631126404,
0.08027027547359467,
-0.0919426903128624,
0.15383192896842957,
0.121602863073349,
-0.01361166313290596,
-0.042299725115299225,
-0.029934199526906013,
0.017305763438344002,
-0.0016522281803190708,
0.06482116878032684,
-0.023712245747447014,
-0.1348491907119751,
0.01791028305888176,
-0.0832824558019638,
0.028589289635419846,
-0.24584755301475525,
-0.09014419466257095,
0.02776448056101799,
-0.017329558730125427,
-0.017657026648521423,
0.0537584163248539,
0.04623013362288475,
0.02468898519873619,
-0.03452771529555321,
0.020091883838176727,
-0.037873513996601105,
0.06044413149356842,
-0.10750384628772736,
-0.09178107231855392
] |
null | null | transformers | # MultiBERTs Seed 2 Checkpoint 160k (uncased)
Seed 2 intermediate checkpoint 160k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multiberts-seed-2). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-2-160k')
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-2-160k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-2"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-2-160k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-2",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 2 Checkpoint 160k (uncased)
Seed 2 intermediate checkpoint 160k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 2 Checkpoint 160k (uncased)\nSeed 2 intermediate checkpoint 160k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 2 Checkpoint 160k (uncased)\nSeed 2 intermediate checkpoint 160k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 2 Checkpoint 160k (uncased)\nSeed 2 intermediate checkpoint 160k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08392451703548431,
0.001316250883974135,
-0.0023572344798594713,
0.06942553073167801,
0.08532767742872238,
0.0021721022203564644,
0.11489277333021164,
0.051211100071668625,
-0.029380377382040024,
0.023880302906036377,
0.09102432429790497,
0.027873363345861435,
0.0438496358692646,
0.06707973778247833,
0.096837118268013,
-0.2582781910896301,
0.04703507199883461,
-0.06455554068088531,
0.05064643174409866,
0.074200838804245,
0.10182366520166397,
-0.07021372765302658,
0.0621347613632679,
0.03878869116306305,
-0.08569827675819397,
-0.015695378184318542,
-0.013280157931149006,
-0.03256801888346672,
0.10024146735668182,
0.07087143510580063,
0.06272009015083313,
0.0032020695507526398,
0.0600484274327755,
-0.08860864490270615,
0.015532379038631916,
0.04529336467385292,
0.00011137174442410469,
0.026681527495384216,
-0.005449753254652023,
0.01683240570127964,
0.1129392683506012,
0.03822102025151253,
0.07532039284706116,
0.03444678708910942,
-0.09460601955652237,
-0.11546914279460907,
-0.08010858297348022,
0.1011217013001442,
0.051118191331624985,
0.03731219097971916,
-0.005088381469249725,
0.07183212786912918,
-0.027319323271512985,
0.07356669008731842,
0.10880845785140991,
-0.2573400139808655,
-0.008950877003371716,
0.06465883553028107,
0.04367360472679138,
0.043776221573352814,
0.01508045382797718,
0.026592828333377838,
0.005832873284816742,
0.04129195213317871,
0.026506252586841583,
-0.023413147777318954,
0.11591807007789612,
-0.04564644396305084,
-0.1495877355337143,
-0.043887875974178314,
0.11776376515626907,
-0.009755300357937813,
-0.12582379579544067,
-0.0953453928232193,
-0.03360453248023987,
0.12344479560852051,
-0.0011938940733671188,
-0.01959698647260666,
-0.0036701825447380543,
0.013803948648273945,
0.026702364906668663,
-0.0984770655632019,
-0.08723098039627075,
-0.03103550896048546,
-0.03893384337425232,
0.1259903907775879,
0.04672059416770935,
0.05437302961945534,
-0.036078739911317825,
0.08660012483596802,
-0.12080679833889008,
-0.04079392924904823,
-0.05216380953788757,
-0.0807688981294632,
-0.018294217064976692,
0.010674908757209778,
-0.02674228884279728,
-0.08778806030750275,
-0.05928044021129608,
0.11099961400032043,
0.030632786452770233,
0.027132712304592133,
-0.008599575608968735,
0.043022703379392624,
0.07246817648410797,
0.09592872858047485,
-0.035506539046764374,
0.05465490370988846,
0.03540361672639847,
-0.024754486978054047,
0.05840470641851425,
-0.05107917636632919,
-0.10293389111757278,
0.07356531172990799,
-0.0015687653794884682,
0.03857146203517914,
0.027707122266292572,
0.029939264059066772,
-0.013968228362500668,
-0.07174748927354813,
0.16584521532058716,
-0.07785417139530182,
-0.006679495796561241,
-0.01566021889448166,
0.011658819392323494,
0.0433637872338295,
0.035504601895809174,
-0.005880151409655809,
-0.04681749641895294,
-0.0086707454174757,
-0.053878769278526306,
-0.023833848536014557,
-0.05611410737037659,
-0.11870864033699036,
0.0006985040381550789,
-0.03996247798204422,
-0.03317372500896454,
-0.14238238334655762,
-0.21655139327049255,
-0.015819653868675232,
0.06334585696458817,
-0.003623360302299261,
-0.010958545841276646,
0.023930348455905914,
0.017418956384062767,
-0.020684078335762024,
0.010937271639704704,
-0.04230706766247749,
0.0007727015763521194,
-0.0048553794622421265,
-0.03248278796672821,
0.05765678733587265,
-0.041728414595127106,
0.022342804819345474,
-0.06978635489940643,
0.022921154275536537,
-0.2072334885597229,
0.09061320126056671,
-0.030843157321214676,
-0.0037824679166078568,
-0.03855793923139572,
-0.040984198451042175,
0.007687188684940338,
0.04589255154132843,
-0.007704121060669422,
0.11625485867261887,
-0.13974407315254211,
-0.05057775229215622,
0.1865089237689972,
-0.16270789504051208,
0.001827474683523178,
0.10201431810855865,
-0.0490591861307621,
0.060094691812992096,
0.13227081298828125,
0.09885979443788528,
0.08545444905757904,
-0.07294382154941559,
0.014334305189549923,
0.06407130509614944,
-0.06879336386919022,
0.0573580265045166,
0.09210604429244995,
-0.024959741160273552,
-0.13771942257881165,
0.028500519692897797,
-0.07471264898777008,
-0.008976495824754238,
-0.026596514508128166,
-0.020765438675880432,
0.005025040358304977,
-0.03874023258686066,
0.03193233907222748,
0.0052377465181052685,
0.018712151795625687,
-0.03925526514649391,
-0.08281604945659637,
0.033160507678985596,
0.07560762763023376,
-0.06949975341558456,
0.04468598589301109,
-0.07111702859401703,
0.059048693627119064,
-0.0751960426568985,
-0.004831705242395401,
-0.1682596504688263,
-0.02023189887404442,
0.04620220512151718,
-0.04876606911420822,
0.049823097884655,
0.0965287834405899,
0.003106694668531418,
0.12427288293838501,
-0.038880355656147,
0.005258192308247089,
-0.009856855496764183,
-0.011763493530452251,
-0.050912924110889435,
-0.12270285189151764,
-0.08032625913619995,
-0.0695149153470993,
0.09884244203567505,
-0.07307612150907516,
0.028799885883927345,
-0.06836822628974915,
-0.019648035988211632,
-0.006383376196026802,
-0.05925474315881729,
-0.006540792062878609,
0.007693166844546795,
-0.029878366738557816,
-0.04611539840698242,
0.04732079058885574,
0.051597610116004944,
-0.06007758527994156,
0.0810966044664383,
-0.10282505303621292,
-0.06252653896808624,
0.05401825159788132,
0.014910541474819183,
-0.07785308361053467,
0.09439725428819656,
-0.01921718753874302,
-0.013172219507396221,
-0.061065517365932465,
-0.043806884437799454,
0.19194158911705017,
-0.024327125400304794,
0.10193288326263428,
-0.09230898320674896,
0.002108628861606121,
0.02736104466021061,
-0.04976858198642731,
-0.017698951065540314,
0.06040513515472412,
0.04714630916714668,
-0.18990516662597656,
0.015221785753965378,
0.05304446071386337,
0.07616549730300903,
0.11270808428525925,
0.028640592470765114,
-0.025422263890504837,
-0.0485026016831398,
-0.009651470929384232,
0.008473795838654041,
0.05341396853327751,
-0.026344817131757736,
-0.006572981830686331,
0.03251273185014725,
0.057508356869220734,
0.017798786982893944,
-0.08366033434867859,
0.037045761942863464,
0.0685981810092926,
-0.019178185611963272,
-0.038837917149066925,
-0.026036400347948074,
-0.05871591717004776,
0.06447659432888031,
0.05106007680296898,
0.040877971798181534,
0.024945348501205444,
-0.01436139177531004,
-0.13769268989562988,
0.18584010004997253,
-0.11561580747365952,
-0.2594337463378906,
-0.10935533046722412,
-0.053010277450084686,
-0.02026401273906231,
0.04397042095661163,
0.05788767337799072,
-0.02750866860151291,
-0.04127983748912811,
-0.11586163192987442,
0.06520146876573563,
-0.06647823750972748,
-0.029338978230953217,
-0.00967908464372158,
-0.04847535490989685,
-0.016336554661393166,
-0.1273089051246643,
-0.011582709848880768,
-0.0288623608648777,
-0.07893282920122147,
0.005921602249145508,
-0.0338873490691185,
0.026229284703731537,
0.13413260877132416,
0.03240973874926567,
-0.02055622637271881,
-0.016474362462759018,
0.1876620501279831,
0.013345936313271523,
0.06004699692130089,
0.11680025607347488,
-0.031286198645830154,
0.055502999573946,
0.04455216974020004,
0.02250988781452179,
-0.04649070277810097,
0.01225706934928894,
-0.017923396080732346,
-0.12266921252012253,
-0.17521581053733826,
-0.07007387280464172,
-0.0030216751620173454,
0.00857117772102356,
0.018340224400162697,
0.03563142195343971,
0.013888497836887836,
0.03895776346325874,
-0.030465098097920418,
0.031504176557064056,
-0.012975465506315231,
0.07880809903144836,
0.025038111954927444,
-0.07510490715503693,
0.09150654077529907,
-0.061340246349573135,
0.017258387058973312,
0.11235380172729492,
-0.06198453530669212,
0.1854865849018097,
0.02328351140022278,
0.058560751378536224,
0.0998140424489975,
0.02197916805744171,
0.055068131536245346,
0.0871305987238884,
-0.046149905771017075,
0.008733530528843403,
-0.062165506184101105,
-0.05162215977907181,
-0.03984209522604942,
0.05207262560725212,
0.031835801899433136,
0.018426500260829926,
-0.11849527806043625,
0.0257781483232975,
-0.0021900238934904337,
0.13132047653198242,
0.050234176218509674,
-0.1157810389995575,
-0.12333472073078156,
0.03715130686759949,
-0.04437173530459404,
-0.06391959637403488,
0.02761188894510269,
0.06576839834451675,
-0.15309564769268036,
0.04331938922405243,
-0.006039098836481571,
0.06558390706777573,
-0.09240415692329407,
0.014927804470062256,
-0.04988761618733406,
-0.0013938387855887413,
0.007316259201616049,
0.07171180844306946,
-0.1368936151266098,
0.10536575317382812,
0.021422425284981728,
0.047416649758815765,
-0.08040563017129898,
0.0181700699031353,
-0.0119338259100914,
0.10607216507196426,
0.11561395227909088,
0.04538087546825409,
-0.05894989147782326,
-0.016836807131767273,
-0.046982504427433014,
0.0195192638784647,
0.059146031737327576,
-0.08009987324476242,
0.058977238833904266,
0.007022296078503132,
0.00811176560819149,
-0.023326359689235687,
0.01806318759918213,
-0.13384142518043518,
-0.12174038589000702,
0.062118351459503174,
-0.08079032599925995,
-0.10480760782957077,
-0.058591023087501526,
-0.06212783604860306,
-0.04328116774559021,
0.21685940027236938,
-0.11918406188488007,
-0.09009332954883575,
-0.09754815697669983,
-0.00792166031897068,
0.043678320944309235,
-0.06743968278169632,
0.044743359088897705,
-0.03827068954706192,
0.09541043639183044,
-0.05053719878196716,
-0.1108371913433075,
0.03519351780414581,
-0.1131933182477951,
-0.11789262294769287,
-0.045320652425289154,
0.10730695724487305,
0.11503533273935318,
0.040000349283218384,
0.013530273921787739,
0.010840087197721004,
0.0010516662150621414,
-0.11436542868614197,
0.01686435379087925,
0.13577178120613098,
-0.0034112688153982162,
0.07136992365121841,
-0.05849035829305649,
0.024542760103940964,
-0.013514302670955658,
-0.0014674980193376541,
0.13238310813903809,
0.18888214230537415,
-0.06545718014240265,
0.18037983775138855,
0.19836603105068207,
-0.10747398436069489,
-0.19279542565345764,
-0.05551664158701897,
0.0026249829679727554,
0.046319980174303055,
0.05466386675834656,
-0.1851625144481659,
0.08701418340206146,
0.03181006759405136,
-0.034350305795669556,
0.017593763768672943,
-0.23640480637550354,
-0.111345075070858,
0.0862644612789154,
0.053925514221191406,
0.18911001086235046,
-0.07917866110801697,
-0.04153008013963699,
-0.017557622864842415,
-0.04045748710632324,
0.04778305068612099,
-0.0368402898311615,
0.08826369047164917,
0.0079741720110178,
-0.027920056134462357,
0.004446476697921753,
-0.029430748894810677,
0.0979870855808258,
0.03707358241081238,
0.02002253755927086,
-0.0709155946969986,
-0.006459638476371765,
0.11249400675296783,
-0.03837955743074417,
0.09634827822446823,
0.042717039585113525,
0.07623423635959625,
-0.09701500833034515,
-0.05939755216240883,
-0.07293890416622162,
0.04580451548099518,
-0.04121851176023483,
-0.05543002486228943,
-0.06406448036432266,
0.056613169610500336,
0.036379870027303696,
0.011697162874042988,
-0.0026153922080993652,
-0.03594624251127243,
0.04204420745372772,
0.09751565754413605,
0.08093538880348206,
-0.026726864278316498,
-0.06603537499904633,
-0.051387183368206024,
-0.047343622893095016,
0.0678848996758461,
-0.09544245898723602,
0.022621512413024902,
0.024910716339945793,
0.010516539216041565,
0.08760851621627808,
0.032157525420188904,
-0.1416013240814209,
0.014066013507544994,
0.03297552093863487,
-0.12486950308084488,
-0.1102847158908844,
-0.0199044831097126,
0.03465283662080765,
-0.03672252222895622,
0.051738932728767395,
0.15024857223033905,
-0.036077845841646194,
-0.032754357904195786,
-0.048609763383865356,
0.03952009230852127,
-0.02187034860253334,
0.04868556559085846,
0.06288997828960419,
0.030025826767086983,
-0.07096061110496521,
0.07434392720460892,
0.03955686464905739,
-0.03378337249159813,
0.04276382178068161,
0.03771873190999031,
-0.09161120653152466,
-0.07763740420341492,
-0.056497763842344284,
0.08980444818735123,
-0.02130303718149662,
-0.04741518944501877,
0.0013027824461460114,
-0.0811569094657898,
0.06695587933063507,
0.07348348945379257,
0.04861490800976753,
0.03947705775499344,
-0.08634376525878906,
0.017210571095347404,
-0.05213524028658867,
0.036141734570264816,
-0.02708052285015583,
-0.00542849488556385,
-0.054641008377075195,
0.0712067186832428,
0.06231457740068436,
0.09998578578233719,
-0.034771084785461426,
-0.07315128296613693,
-0.08208481967449188,
-0.013443280011415482,
-0.06178921461105347,
-0.028134873136878014,
-0.07425325363874435,
-0.00840107724070549,
0.0026977609377354383,
-0.0069166794419288635,
0.022922899574041367,
0.03712686151266098,
-0.043171823024749756,
-0.01846325770020485,
-0.041162844747304916,
0.037511833012104034,
-0.0645807683467865,
0.004804185591638088,
0.018627412617206573,
-0.036489229649305344,
0.09319166094064713,
0.0375017486512661,
-0.011176912114024162,
0.04761224240064621,
-0.03085799515247345,
0.035225044935941696,
-0.020206892862915993,
0.0022198993247002363,
-0.024156052619218826,
-0.10935941338539124,
-0.003498104400932789,
0.005456442013382912,
-0.025521909818053246,
0.008510190062224865,
0.0642823651432991,
-0.07252746820449829,
0.08799530565738678,
0.04435203969478607,
-0.031737424433231354,
-0.07000385224819183,
0.04054488241672516,
-0.016705961897969246,
0.031720977276563644,
0.06854379177093506,
-0.03471262380480766,
0.05344244837760925,
-0.09956268966197968,
-0.02809201553463936,
0.004590451717376709,
-0.002086509019136429,
-0.011165563017129898,
-0.05435032770037651,
-0.0038435962051153183,
0.007649126462638378,
0.17182259261608124,
-0.018614791333675385,
0.03880753368139267,
0.013474627397954464,
0.005730784498155117,
0.051287733018398285,
-0.01377088576555252,
0.06988190114498138,
-0.007907605729997158,
-0.023441916331648827,
-0.015734003856778145,
0.03517192602157593,
0.006359737366437912,
0.0033535733819007874,
0.14342465996742249,
0.04501606523990631,
0.08633247762918472,
0.07577857375144958,
0.016099128872156143,
0.019194098189473152,
-0.13565388321876526,
-0.08947524428367615,
0.003645450808107853,
0.05515747889876366,
-0.017898300662636757,
0.01635783165693283,
0.0927407443523407,
-0.08955205976963043,
0.06959020346403122,
0.049940019845962524,
-0.04820452257990837,
-0.12992283701896667,
-0.19523315131664276,
-0.02749032713472843,
-0.028839988633990288,
-0.011653488501906395,
-0.08912152051925659,
0.020039372146129608,
0.09050785005092621,
0.02170257829129696,
-0.010207066312432289,
0.08903619647026062,
-0.10530009120702744,
-0.030756276100873947,
0.04211784899234772,
-0.026783032342791557,
0.01654590107500553,
0.051404066383838654,
0.020712144672870636,
-0.0029609352350234985,
0.04934816434979439,
0.04177878424525261,
0.04530975595116615,
0.028904445469379425,
0.05013492330908775,
-0.02445242553949356,
-0.07420904189348221,
-0.03362464904785156,
-0.0056326729245483875,
0.053855348378419876,
0.13213670253753662,
0.025730768218636513,
-0.0676082968711853,
0.00717870332300663,
0.11020947992801666,
-0.03172450512647629,
-0.04986446723341942,
-0.10735516250133514,
0.23813509941101074,
0.02127438224852085,
0.005185427609831095,
-0.0019596535712480545,
-0.04671391099691391,
0.003090349957346916,
0.20613284409046173,
0.22197794914245605,
0.0023448423016816378,
-0.008760360069572926,
0.008673053234815598,
-0.011363906785845757,
0.038384683430194855,
0.14511297643184662,
0.002453424036502838,
0.25155237317085266,
-0.047495968639850616,
0.03529661148786545,
-0.04259499907493591,
-0.03845188394188881,
-0.10131433606147766,
0.0717681273818016,
-0.013667616993188858,
0.008872408419847488,
-0.03168342262506485,
0.07052429765462875,
-0.037212833762168884,
-0.18128567934036255,
0.0031750239431858063,
-0.0023092590272426605,
-0.060975927859544754,
0.014155587181448936,
0.003649875521659851,
0.020204756408929825,
0.08268454670906067,
-0.02050667256116867,
-0.005830821115523577,
0.13037222623825073,
0.019278716295957565,
-0.0993599072098732,
-0.054555512964725494,
0.11280427128076553,
0.013265143148601055,
0.1419842541217804,
0.012093859724700451,
0.08005722612142563,
0.084903284907341,
0.022071825340390205,
-0.09438994526863098,
0.04090055450797081,
-0.01902306079864502,
-0.028861720114946365,
0.006351847667247057,
0.1123446673154831,
-0.010595233179628849,
0.05948815122246742,
0.0302019901573658,
-0.09089738130569458,
0.06031418591737747,
0.010342106223106384,
-0.03556538373231888,
-0.0804169774055481,
0.08342849463224411,
-0.0925886482000351,
0.15378791093826294,
0.12063363194465637,
-0.014112354256212711,
-0.045256055891513824,
-0.02965829335153103,
0.019784973934292793,
-0.00171663798391819,
0.06262042373418808,
-0.024441026151180267,
-0.13245990872383118,
0.017777152359485626,
-0.08104638755321503,
0.029660191386938095,
-0.2457059919834137,
-0.09011338651180267,
0.027389025315642357,
-0.018074270337820053,
-0.018943194299936295,
0.05239109694957733,
0.04435310885310173,
0.02484254352748394,
-0.03538357466459274,
0.013182983733713627,
-0.03809591010212898,
0.05928417667746544,
-0.10891859233379364,
-0.09288199245929718
] |
null | null | transformers | # MultiBERTs Seed 2 Checkpoint 1700k (uncased)
Seed 2 intermediate checkpoint 1700k of the MultiBERTs (pretrained BERT) model, pretrained on English text using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multiberts-seed-2). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means the model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
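To make the two objectives concrete, here is a minimal sketch of scoring the NSP objective with this checkpoint. It assumes the checkpoint identifier used in the usage snippet below resolves on the Hub and that the saved weights include the NSP head (checkpoints exported as `BertForPreTraining` do); if the head is missing it is randomly initialized and the score is meaningless.
```python
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-1700k')
model = BertForNextSentencePrediction.from_pretrained('multiberts-seed-2-1700k')

sentence_a = "The children went to the park."
sentence_b = "They played on the swings for an hour."
encoding = tokenizer(sentence_a, sentence_b, return_tensors='pt')

with torch.no_grad():
    logits = model(**encoding).logits

# In transformers, index 0 of the NSP head means "sentence B follows sentence A".
probs = torch.softmax(logits, dim=-1)
print(f"P(B follows A) = {probs[0, 0]:.3f}")
```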
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# Load the tokenizer and the weights of this intermediate checkpoint
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-1700k')
model = BertModel.from_pretrained("multiberts-seed-2-1700k")

# Tokenize the input text and run it through the encoder
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
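The `output` object is a standard `transformers` model output; the token-level features are in its `last_hidden_state` attribute (768-dimensional for this BERT-base-sized model):
```python
features = output.last_hidden_state  # shape: (batch_size, sequence_length, 768)
print(features.shape)
```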
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
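A hedged adaptation of that snippet is sketched below. It assumes the MLM head weights are present in this checkpoint so that it can be loaded into a `fill-mask` pipeline, and it reuses the same checkpoint identifier as the usage snippet above.
```python
from transformers import pipeline

unmasker = pipeline('fill-mask', model='multiberts-seed-2-1700k')

# Compare the completions proposed for otherwise identical gendered prompts.
for prompt in ["The man worked as a [MASK].", "The woman worked as a [MASK]."]:
    print(prompt)
    for prediction in unmasker(prompt)[:5]:
        print(f"  {prediction['token_str']}\t{prediction['score']:.3f}")
```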
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
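As a quick check of this format, the tokenizer loaded in the usage snippet above can encode a sentence pair directly; this is only an illustration, and the decoded string may differ slightly depending on tokenizer settings.
```python
pair = tokenizer("The cat sat on the mat.", "It fell asleep.", return_tensors='pt')
print(tokenizer.decode(pair['input_ids'][0]))
# roughly: [CLS] the cat sat on the mat. [SEP] it fell asleep. [SEP]
```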
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the combined length of the two
"sentences" is less than 512 tokens.
The details of the masking procedure for each sentence are the following (a short sketch of this rule follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
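The sketch below illustrates this 80%/10%/10% rule on an already-tokenized sequence. It is an illustration of the rule as described here, not the original preprocessing code, and the helper name and vocabulary argument are made up for the example.
```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15):
    """Apply the masking rule described above (illustrative only)."""
    labels = [None] * len(tokens)  # None = position not selected for prediction
    for i, token in enumerate(tokens):
        if token in ("[CLS]", "[SEP]") or random.random() >= mask_prob:
            continue
        labels[i] = token                      # the model must predict the original token
        roll = random.random()
        if roll < 0.8:
            tokens[i] = "[MASK]"               # 80%: replace with [MASK]
        elif roll < 0.9:
            tokens[i] = random.choice(vocab)   # 10%: replace with a random token
        # remaining 10%: leave the token unchanged
    return tokens, labels
```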
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
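An approximate PyTorch reconstruction of this optimization setup is sketched below. The original training used the BERT TensorFlow implementation of Adam with weight decay; `torch.optim.AdamW` plus a linear warmup/decay schedule is the closest stock equivalent, so treat this only as a rough sketch.
```python
import torch
from transformers import BertConfig, BertForPreTraining, get_linear_schedule_with_warmup

model = BertForPreTraining(BertConfig())  # randomly initialized BERT-base, for illustration only
optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01
)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=2_000_000
)
# In the training loop: loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
```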
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-2"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-2-1700k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-2",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 2 Checkpoint 1700k (uncased)
Seed 2 intermediate checkpoint 1700k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 2 Checkpoint 1700k (uncased)\nSeed 2 intermediate checkpoint 1700k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 2 Checkpoint 1700k (uncased)\nSeed 2 intermediate checkpoint 1700k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 2 Checkpoint 1700k (uncased)\nSeed 2 intermediate checkpoint 1700k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08383036404848099,
0.00028632464818656445,
-0.002243594964966178,
0.06911929696798325,
0.08177568018436432,
0.0024903910234570503,
0.1108560860157013,
0.050678551197052,
-0.026529263705015182,
0.022761043161153793,
0.09313344955444336,
0.02743438258767128,
0.0447530634701252,
0.06580905616283417,
0.09695890545845032,
-0.2580091655254364,
0.04813039302825928,
-0.06345256417989731,
0.05421849340200424,
0.07532800734043121,
0.10118076205253601,
-0.07132966816425323,
0.06137927249073982,
0.03900505602359772,
-0.08524677902460098,
-0.017031271010637283,
-0.013115417212247849,
-0.03567170351743698,
0.1003926545381546,
0.07077714800834656,
0.061480745673179626,
0.0036722607910633087,
0.06022229045629501,
-0.08769045770168304,
0.015663016587495804,
0.04495272412896156,
0.0002842131070792675,
0.026178784668445587,
-0.00820249691605568,
0.01727333292365074,
0.11525151133537292,
0.04123442620038986,
0.07730986177921295,
0.034958191215991974,
-0.09472912549972534,
-0.11908763647079468,
-0.0801471471786499,
0.10875546932220459,
0.052959710359573364,
0.03601812571287155,
-0.004678242839872837,
0.07252981513738632,
-0.027260059490799904,
0.07368539273738861,
0.10290025174617767,
-0.25633955001831055,
-0.00990117248147726,
0.06682006269693375,
0.04458469897508621,
0.04926971346139908,
0.014420384541153908,
0.025997411459684372,
0.0058682262897491455,
0.04055824875831604,
0.029136214405298233,
-0.022733502089977264,
0.11798059940338135,
-0.04497026279568672,
-0.14990034699440002,
-0.04517451673746109,
0.1151713877916336,
-0.008619489148259163,
-0.12430842220783234,
-0.09658657759428024,
-0.03320476785302162,
0.11829632520675659,
-0.001472393050789833,
-0.021694809198379517,
-0.005665801465511322,
0.014155834913253784,
0.026477482169866562,
-0.09954670071601868,
-0.08701928704977036,
-0.0320538654923439,
-0.037074364721775055,
0.12839826941490173,
0.04702482372522354,
0.055420562624931335,
-0.03746422380208969,
0.0869276225566864,
-0.11727398633956909,
-0.03918716683983803,
-0.05269968882203102,
-0.08263563364744186,
-0.019839366897940636,
0.00953510869294405,
-0.02431635372340679,
-0.08420787006616592,
-0.06205424666404724,
0.11385925114154816,
0.03336166590452194,
0.029024910181760788,
-0.009223985485732555,
0.04353845864534378,
0.07244070619344711,
0.09519137442111969,
-0.036427367478609085,
0.05237357318401337,
0.034604404121637344,
-0.023032180964946747,
0.057942330837249756,
-0.05096203088760376,
-0.10023228824138641,
0.0728510394692421,
-0.0014142440631985664,
0.039311088621616364,
0.02903556078672409,
0.028835685923695564,
-0.010814384557306767,
-0.07246042788028717,
0.16961467266082764,
-0.07506811618804932,
-0.006994817405939102,
-0.01609693095088005,
0.012046188116073608,
0.041639357805252075,
0.03560560196638107,
-0.005472005810588598,
-0.04646892845630646,
-0.007966854609549046,
-0.05556926876306534,
-0.026116272434592247,
-0.054879285395145416,
-0.1166590228676796,
-0.0001666368916630745,
-0.03754838556051254,
-0.0341167114675045,
-0.14188820123672485,
-0.21841341257095337,
-0.015762513503432274,
0.06359059363603592,
-0.003756130114197731,
-0.01173396036028862,
0.025510411709547043,
0.019235605373978615,
-0.02110321633517742,
0.010131709277629852,
-0.04343589395284653,
0.0014701355248689651,
-0.004436925053596497,
-0.03625538572669029,
0.05860898643732071,
-0.039828505367040634,
0.02154291793704033,
-0.06977161765098572,
0.022833144292235374,
-0.21020400524139404,
0.08935968577861786,
-0.032357968389987946,
-0.0020309733226895332,
-0.03901857137680054,
-0.04143662750720978,
0.006261356174945831,
0.045459311455488205,
-0.009603683836758137,
0.11467231065034866,
-0.13538284599781036,
-0.050147734582424164,
0.18931826949119568,
-0.16365066170692444,
-0.0010828934609889984,
0.10321427136659622,
-0.04821671545505524,
0.059062033891677856,
0.1324123591184616,
0.09731477499008179,
0.08723290264606476,
-0.0705944076180458,
0.015434629283845425,
0.06260128319263458,
-0.07137183099985123,
0.053019165992736816,
0.09062086790800095,
-0.02533748932182789,
-0.14372284710407257,
0.02781631425023079,
-0.07455277442932129,
-0.008315774612128735,
-0.026177063584327698,
-0.02244710549712181,
0.005533307790756226,
-0.038624800741672516,
0.028279133141040802,
0.004989132285118103,
0.01856781542301178,
-0.040690839290618896,
-0.08267796039581299,
0.025181621313095093,
0.07670038193464279,
-0.06857788562774658,
0.04525424540042877,
-0.07090733200311661,
0.05652127414941788,
-0.07136502116918564,
-0.003991390112787485,
-0.16595059633255005,
-0.016515525057911873,
0.046469058841466904,
-0.051855720579624176,
0.049096714705228806,
0.08995471894741058,
0.0031487387605011463,
0.12432151287794113,
-0.04004162549972534,
0.004449210129678249,
-0.007827827706933022,
-0.010140066966414452,
-0.0507463663816452,
-0.12307420372962952,
-0.07975871860980988,
-0.06916268169879913,
0.09863877296447754,
-0.07341433316469193,
0.02884649485349655,
-0.07072035223245621,
-0.02036629617214203,
-0.006118413060903549,
-0.05960671603679657,
-0.007713579572737217,
0.007352516986429691,
-0.028260257095098495,
-0.04446743428707123,
0.04679275304079056,
0.051501329988241196,
-0.06133684143424034,
0.08144150674343109,
-0.1040157675743103,
-0.06625421345233917,
0.05389278382062912,
0.015321530401706696,
-0.0790117084980011,
0.0921095758676529,
-0.020378196612000465,
-0.014195694588124752,
-0.06029915064573288,
-0.04497493803501129,
0.18835724890232086,
-0.02397860214114189,
0.10111337900161743,
-0.09291614592075348,
0.002400211989879608,
0.028716368600726128,
-0.04711981117725372,
-0.018744684755802155,
0.05829635635018349,
0.0502401702105999,
-0.19419750571250916,
0.013758186250925064,
0.052153579890728,
0.0773233026266098,
0.11171596497297287,
0.030223581939935684,
-0.0236545130610466,
-0.04827667400240898,
-0.011293325573205948,
0.00667641032487154,
0.05315698683261871,
-0.02670188993215561,
-0.006019270978868008,
0.031490303575992584,
0.058311741799116135,
0.018964704126119614,
-0.0834370106458664,
0.036870215088129044,
0.06889839470386505,
-0.01721658557653427,
-0.03975394740700722,
-0.02458016388118267,
-0.058957938104867935,
0.06408782303333282,
0.05202659219503403,
0.03855695575475693,
0.024120651185512543,
-0.01444863248616457,
-0.13824132084846497,
0.1855059713125229,
-0.1146843284368515,
-0.25908055901527405,
-0.10952691733837128,
-0.0529257208108902,
-0.018739668652415276,
0.04226647689938545,
0.0588846430182457,
-0.028314244002103806,
-0.04129011929035187,
-0.11593258380889893,
0.06161919981241226,
-0.06624188274145126,
-0.028369713574647903,
-0.008437016978859901,
-0.04942604899406433,
-0.019735850393772125,
-0.12689605355262756,
-0.011689390987157822,
-0.03171052783727646,
-0.07563765347003937,
0.0057341111823916435,
-0.03529118746519089,
0.025369815528392792,
0.13699698448181152,
0.033863089978694916,
-0.01989901438355446,
-0.017239922657608986,
0.1868899166584015,
0.0123631302267313,
0.06136832758784294,
0.11312831193208694,
-0.03113994188606739,
0.055380549281835556,
0.044633649289608,
0.02304328791797161,
-0.04650655388832092,
0.012039112858474255,
-0.019024411216378212,
-0.12319182604551315,
-0.17404501140117645,
-0.0712207779288292,
-0.002387446351349354,
0.01004329975694418,
0.021471580490469933,
0.035246074199676514,
0.017795313149690628,
0.039079271256923676,
-0.0308130644261837,
0.027951963245868683,
-0.011136099696159363,
0.07941418886184692,
0.022591739892959595,
-0.07530076801776886,
0.09116384387016296,
-0.06085502728819847,
0.019179170951247215,
0.11186996847391129,
-0.0614413321018219,
0.18773102760314941,
0.025092562660574913,
0.05681229382753372,
0.10083805024623871,
0.02097979187965393,
0.055279381573200226,
0.08512184023857117,
-0.045771680772304535,
0.008685730397701263,
-0.061873480677604675,
-0.051350053399801254,
-0.03895613178610802,
0.053006622940301895,
0.031701311469078064,
0.01743951439857483,
-0.11825647950172424,
0.025906912982463837,
-0.0009130972903221846,
0.13361887633800507,
0.046414781361818314,
-0.11496680974960327,
-0.12434275448322296,
0.03723045438528061,
-0.04505107179284096,
-0.06359323114156723,
0.029828857630491257,
0.06394103169441223,
-0.15197819471359253,
0.04353844374418259,
-0.00554640032351017,
0.06575208157300949,
-0.08952774107456207,
0.014200937934219837,
-0.05060713738203049,
-0.0031092679128050804,
0.006712271366268396,
0.06878231465816498,
-0.1279543936252594,
0.11028371006250381,
0.02092110365629196,
0.048063721507787704,
-0.08103840798139572,
0.016561996191740036,
-0.011482343077659607,
0.10812769830226898,
0.11768589913845062,
0.046341150999069214,
-0.06134149059653282,
-0.015183315612375736,
-0.04628262296319008,
0.018900640308856964,
0.056474149227142334,
-0.07892297208309174,
0.05714888125658035,
0.008368260227143764,
0.007122417911887169,
-0.02277868613600731,
0.015291355550289154,
-0.13250407576560974,
-0.11952975392341614,
0.06161300092935562,
-0.07893471419811249,
-0.10636107623577118,
-0.05900026112794876,
-0.060987621545791626,
-0.044184356927871704,
0.21475571393966675,
-0.11924704909324646,
-0.0888042002916336,
-0.09623928368091583,
-0.00901515781879425,
0.04488558694720268,
-0.06811537593603134,
0.04576247185468674,
-0.03778786212205887,
0.094184510409832,
-0.05058646202087402,
-0.10854154825210571,
0.036194756627082825,
-0.1139698326587677,
-0.11664105951786041,
-0.045006074011325836,
0.10723371058702469,
0.11599605530500412,
0.038757890462875366,
0.013447223231196404,
0.010988105088472366,
0.0010305196046829224,
-0.11638748645782471,
0.018304435536265373,
0.1348046362400055,
-0.006140638142824173,
0.07131683081388474,
-0.05565929412841797,
0.02005165070295334,
-0.014656931161880493,
-0.0028785839676856995,
0.13184882700443268,
0.19057369232177734,
-0.06699976325035095,
0.1799285113811493,
0.20473718643188477,
-0.10673562437295914,
-0.1915702223777771,
-0.05482562631368637,
-0.0020555993542075157,
0.04474497213959694,
0.051887594163417816,
-0.18273240327835083,
0.08706069737672806,
0.03373395651578903,
-0.03544441983103752,
0.012745708227157593,
-0.23815323412418365,
-0.11276444792747498,
0.09064365923404694,
0.054549574851989746,
0.19108861684799194,
-0.0758332908153534,
-0.040975868701934814,
-0.01465022936463356,
-0.038694605231285095,
0.048198360949754715,
-0.034520551562309265,
0.08691418170928955,
0.006635425612330437,
-0.03242730349302292,
0.004521550610661507,
-0.031514689326286316,
0.09626968204975128,
0.03826074302196503,
0.020228059962391853,
-0.069485142827034,
-0.004729010164737701,
0.10954605042934418,
-0.03856451064348221,
0.09603230655193329,
0.04477579519152641,
0.07767724990844727,
-0.09470594674348831,
-0.05945766717195511,
-0.07457146048545837,
0.04359213635325432,
-0.04085739329457283,
-0.055613912642002106,
-0.06530596315860748,
0.05926820635795593,
0.03851598873734474,
0.010415341705083847,
-0.0029815472662448883,
-0.036114081740379333,
0.04402919486165047,
0.0972621813416481,
0.07920091599225998,
-0.03252212330698967,
-0.06521543115377426,
-0.051237255334854126,
-0.04905585199594498,
0.06886279582977295,
-0.09492914378643036,
0.02208010107278824,
0.024881068617105484,
0.011099119670689106,
0.0878964215517044,
0.03213882818818092,
-0.1408180296421051,
0.012336084619164467,
0.0344267413020134,
-0.12384961545467377,
-0.11230503022670746,
-0.019429806619882584,
0.03595365956425667,
-0.03581941872835159,
0.04993177577853203,
0.14669620990753174,
-0.036585718393325806,
-0.03237384930253029,
-0.048526935279369354,
0.04094572365283966,
-0.020513765513896942,
0.046172238886356354,
0.06182381138205528,
0.03059527464210987,
-0.07040509581565857,
0.07455643266439438,
0.04092967137694359,
-0.03885505720973015,
0.04085434600710869,
0.04185165837407112,
-0.09249570220708847,
-0.07778674364089966,
-0.0609331876039505,
0.08489956706762314,
-0.024118999019265175,
-0.04636528342962265,
0.0035601798444986343,
-0.08105607330799103,
0.06861427426338196,
0.07336355000734329,
0.047481004148721695,
0.037957362830638885,
-0.08693071454763412,
0.017567019909620285,
-0.052245303988456726,
0.03664515167474747,
-0.0289811659604311,
-0.006123261526226997,
-0.052736178040504456,
0.07057631760835648,
0.06420446187257767,
0.09845540672540665,
-0.03424942493438721,
-0.07379485666751862,
-0.08176875114440918,
-0.012186691164970398,
-0.06382045149803162,
-0.030583947896957397,
-0.07857844233512878,
-0.007346606347709894,
0.0015746597200632095,
-0.006686205044388771,
0.021109461784362793,
0.03611551970243454,
-0.04318051412701607,
-0.017293570563197136,
-0.0387299619615078,
0.03767215460538864,
-0.0642303079366684,
0.0051944805309176445,
0.01953340321779251,
-0.03603000566363335,
0.09390570223331451,
0.03702457994222641,
-0.010923771187663078,
0.04694648087024689,
-0.024144385010004044,
0.03548859432339668,
-0.02162480726838112,
0.0025887680239975452,
-0.02377970889210701,
-0.11118832975625992,
-0.0031856142450124025,
0.004566667601466179,
-0.02770785614848137,
0.010268021374940872,
0.06298913806676865,
-0.07276372611522675,
0.08724091947078705,
0.04620401933789253,
-0.03263187035918236,
-0.07075712084770203,
0.039708420634269714,
-0.01582593284547329,
0.029479805380105972,
0.06734413653612137,
-0.03404781222343445,
0.051599979400634766,
-0.09962132573127747,
-0.027616802603006363,
0.0036356444470584393,
-0.002026904374361038,
-0.008996201679110527,
-0.052258677780628204,
-0.004734368994832039,
0.006696674041450024,
0.17171961069107056,
-0.022932708263397217,
0.03403482958674431,
0.01622975803911686,
0.004439646378159523,
0.04573025554418564,
-0.014752337709069252,
0.07096169888973236,
-0.0072203828021883965,
-0.025532305240631104,
-0.013953239656984806,
0.03456863388419151,
0.006350010633468628,
0.0052458904683589935,
0.14652107656002045,
0.044789545238018036,
0.09280367195606232,
0.07483121007680893,
0.016999300569295883,
0.018598711118102074,
-0.1378692388534546,
-0.0912649855017662,
0.0024490877985954285,
0.055249202996492386,
-0.01696276105940342,
0.009783655405044556,
0.09550002217292786,
-0.08879492431879044,
0.06877098232507706,
0.0503314770758152,
-0.047742217779159546,
-0.13023017346858978,
-0.19206559658050537,
-0.026056962087750435,
-0.028113989159464836,
-0.011653544381260872,
-0.08881059288978577,
0.01915794238448143,
0.08231712877750397,
0.022671010345220566,
-0.009518017992377281,
0.08953546732664108,
-0.10546942055225372,
-0.029455358162522316,
0.04206336662173271,
-0.024982189759612083,
0.016644196584820747,
0.05048168823122978,
0.020797301083803177,
-0.004710827022790909,
0.04835017770528793,
0.03987937793135643,
0.044098637998104095,
0.025571253150701523,
0.051058992743492126,
-0.024112937971949577,
-0.07330937683582306,
-0.03448363393545151,
-0.006363118067383766,
0.05589798837900162,
0.131024569272995,
0.026469621807336807,
-0.066681869328022,
0.007697706110775471,
0.11067042499780655,
-0.03170231357216835,
-0.049042005091905594,
-0.10783055424690247,
0.24286556243896484,
0.02368621900677681,
0.0036530685611069202,
0.00008754990994930267,
-0.044573377817869186,
0.0019401460886001587,
0.20733517408370972,
0.2234470695257187,
0.005139820743352175,
-0.008496410213410854,
0.008672311902046204,
-0.011814223602414131,
0.03675917908549309,
0.1466631293296814,
0.0024453923106193542,
0.2515904903411865,
-0.04570779204368591,
0.03843603655695915,
-0.04249051213264465,
-0.03984259068965912,
-0.09808710217475891,
0.07330060005187988,
-0.012032805010676384,
0.008463109843432903,
-0.032582610845565796,
0.07215411216020584,
-0.03843721002340317,
-0.1750737577676773,
0.002830022946000099,
-0.0021043410524725914,
-0.060863327234983444,
0.014990498311817646,
0.0026041241362690926,
0.01924515888094902,
0.0822117030620575,
-0.018381334841251373,
-0.007139709312468767,
0.1272096037864685,
0.019919339567422867,
-0.09797447174787521,
-0.06033976376056671,
0.11508391052484512,
0.021981380879878998,
0.1465473175048828,
0.012668907642364502,
0.07858290523290634,
0.08588701486587524,
0.021868526935577393,
-0.09756246209144592,
0.03949020802974701,
-0.019081000238656998,
-0.02551223337650299,
0.004933539312332869,
0.11133689433336258,
-0.010710865259170532,
0.06167905032634735,
0.030332857742905617,
-0.09305022656917572,
0.058228179812431335,
0.006640180945396423,
-0.03522577881813049,
-0.07909418642520905,
0.08342956006526947,
-0.0895412266254425,
0.15573962032794952,
0.1202610582113266,
-0.01530779805034399,
-0.04618077725172043,
-0.027915632352232933,
0.019095148891210556,
-0.0015284582041203976,
0.06479810923337936,
-0.024017777293920517,
-0.1345842182636261,
0.01887495256960392,
-0.0826619490981102,
0.030675005167722702,
-0.24405348300933838,
-0.09109359979629517,
0.027959518134593964,
-0.01781129091978073,
-0.018629301339387894,
0.052930597215890884,
0.04434254765510559,
0.024633493274450302,
-0.03485577553510666,
0.015678279101848602,
-0.03853420913219452,
0.057689398527145386,
-0.110318124294281,
-0.09282384812831879
] |
null | null | transformers | # MultiBERTs Seed 2 Checkpoint 1800k (uncased)
Seed 2 intermediate checkpoint 1800k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multberts-seed-2). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
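The MLM objective can be probed directly with the fill-mask pipeline. The snippet below is a minimal sketch, assuming the short checkpoint name resolves on the hub (the full id, `MultiBertGunjanPatrick/multiberts-seed-2-1800k`, may be needed) and that the masked-LM head weights load from this pretraining checkpoint.
```python
from transformers import pipeline

# Minimal sketch of the MLM objective: predict the token behind [MASK].
# The checkpoint name is an assumption; the full hub id may be required.
unmasker = pipeline('fill-mask', model='multiberts-seed-2-1800k')
print(unmasker("Paris is the [MASK] of France."))
```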
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# Load the tokenizer and encoder weights for this checkpoint. Depending on how the
# checkpoint is hosted, the full hub id (MultiBertGunjanPatrick/multiberts-seed-2-1800k)
# may be required instead of the short name.
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-1800k')
model = BertModel.from_pretrained("multiberts-seed-2-1800k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')  # PyTorch tensors
output = model(**encoded_input)                       # contextual hidden states
```
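The `output` object returned above holds the contextual features. A minimal continuation, assuming a recent `transformers` version where the model returns a structured output object, could look like this:
```python
# Continuing the snippet above: token-level features for downstream use.
# Shapes assume the standard BERT-base configuration (hidden size 768).
last_hidden_state = output.last_hidden_state          # (batch_size, seq_len, 768)
sentence_embedding = last_hidden_state.mean(dim=1)    # one simple pooling choice
print(last_hidden_state.shape, sentence_embedding.shape)
```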
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
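For illustration, the tokenizer produces exactly this layout when given a pair of sentences. The sketch below assumes the short checkpoint name resolves; any uncased BERT WordPiece tokenizer would show the same structure.
```python
from transformers import BertTokenizer

# Sketch: encode a sentence pair and inspect the [CLS] A [SEP] B [SEP] layout.
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-1800k')
encoded = tokenizer("The cat sat on the mat.", "It looked very comfortable.")
print(tokenizer.convert_ids_to_tokens(encoded['input_ids']))
# e.g. ['[CLS]', 'the', 'cat', ..., '[SEP]', 'it', 'looked', ..., '[SEP]']
```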
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
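A rough PyTorch sketch of this 80%/10%/10% rule is shown below. It is not the original preprocessing code (which also avoids masking special tokens and enforces that the random token differs from the original); it simply mirrors the dynamic-masking collators in `transformers`.
```python
import torch

def mask_tokens(inputs, tokenizer, mlm_probability=0.15):
    """Illustrative 80/10/10 masking; `inputs` is a LongTensor of token ids."""
    labels = inputs.clone()
    # Select 15% of positions as prediction targets.
    masked_indices = torch.bernoulli(torch.full(labels.shape, mlm_probability)).bool()
    labels[~masked_indices] = -100  # loss is only computed on masked positions

    # 80% of the selected positions: replace with [MASK].
    replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices
    inputs[replaced] = tokenizer.mask_token_id

    # 10%: replace with a random vocabulary token (half of the remaining 20%).
    random_ids = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~replaced
    inputs[random_ids] = torch.randint(len(tokenizer), labels.shape, dtype=torch.long)[random_ids]

    # The remaining 10% are left unchanged.
    return inputs, labels
```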
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
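As a rough sketch of this optimization setup (not the original training script; it reuses the `model` object from the usage snippet above and uses `AdamW` as the decoupled-weight-decay variant of Adam):
```python
import torch
from transformers import get_linear_schedule_with_warmup

# Hyperparameters quoted above: lr 1e-4, betas (0.9, 0.999), weight decay 0.01,
# 10,000 warmup steps, two million training steps in total.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4,
                              betas=(0.9, 0.999), weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(optimizer,
                                            num_warmup_steps=10_000,
                                            num_training_steps=2_000_000)
```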
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-2"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-2-1800k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-2",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 2 Checkpoint 1800k (uncased)
Seed 2 intermediate checkpoint 1800k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 2 Checkpoint 1800k (uncased)\nSeed 2 intermediate checkpoint 1800k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 2 Checkpoint 1800k (uncased)\nSeed 2 intermediate checkpoint 1800k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 2 Checkpoint 1800k (uncased)\nSeed 2 intermediate checkpoint 1800k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.0830303505063057,
0.004881452769041061,
-0.002330603078007698,
0.06781569868326187,
0.0827595666050911,
0.0025856136344373226,
0.11491359025239944,
0.0507354773581028,
-0.028597809374332428,
0.022354964166879654,
0.0911203920841217,
0.025522828102111816,
0.04327860474586487,
0.06478568911552429,
0.09754276275634766,
-0.2552131414413452,
0.04751499369740486,
-0.06333914399147034,
0.050894320011138916,
0.07507623732089996,
0.10082036256790161,
-0.0705365389585495,
0.06097175553441048,
0.03985369950532913,
-0.08302227407693863,
-0.017520973458886147,
-0.013317661359906197,
-0.03584403544664383,
0.10047265142202377,
0.06909092515707016,
0.0636950358748436,
0.0020553525537252426,
0.059900373220443726,
-0.0893520787358284,
0.015449264086782932,
0.04512276500463486,
0.0012141265906393528,
0.026202388107776642,
-0.008478494361042976,
0.01435798779129982,
0.10806864500045776,
0.04125881940126419,
0.0781589075922966,
0.03531888127326965,
-0.09567452222108841,
-0.11291588097810745,
-0.07932467758655548,
0.10366099327802658,
0.05337679386138916,
0.0402112752199173,
-0.0047254059463739395,
0.07344785332679749,
-0.02800258994102478,
0.07182474434375763,
0.10576945543289185,
-0.25325536727905273,
-0.008527476340532303,
0.0656280443072319,
0.041153617203235626,
0.04626041650772095,
0.014223815873265266,
0.025318283587694168,
0.005210496485233307,
0.040683820843696594,
0.02732260897755623,
-0.02282046526670456,
0.11783517897129059,
-0.04378075897693634,
-0.149367094039917,
-0.04459480941295624,
0.11587952077388763,
-0.00837898813188076,
-0.12531569600105286,
-0.09526055306196213,
-0.0325668528676033,
0.11510741710662842,
-0.0018245261162519455,
-0.01914072409272194,
-0.0037569492124021053,
0.013239869847893715,
0.023798856884241104,
-0.09883028268814087,
-0.08703012764453888,
-0.0301668643951416,
-0.03792130947113037,
0.12938879430294037,
0.045931316912174225,
0.053747471421957016,
-0.03711654990911484,
0.08630108833312988,
-0.11981241405010223,
-0.03977072611451149,
-0.0520424023270607,
-0.08109743893146515,
-0.020845115184783936,
0.009575705975294113,
-0.028896737843751907,
-0.08791975677013397,
-0.0624474361538887,
0.11618132144212723,
0.03250686451792717,
0.027993038296699524,
-0.0065237656235694885,
0.04312002658843994,
0.07279174774885178,
0.09617453813552856,
-0.036891642957925797,
0.05620736628770828,
0.03276979178190231,
-0.02462514117360115,
0.058205392211675644,
-0.05143734812736511,
-0.10174626857042313,
0.07600921392440796,
-0.0010367818176746368,
0.037781327962875366,
0.0277610644698143,
0.028466196730732918,
-0.013431429862976074,
-0.07399627566337585,
0.1608283966779709,
-0.07646473497152328,
-0.0081239715218544,
-0.016583088785409927,
0.011709576472640038,
0.04055029898881912,
0.03509659320116043,
-0.003279560711234808,
-0.04661089554429054,
-0.004878193140029907,
-0.054232921451330185,
-0.024622637778520584,
-0.05606736242771149,
-0.11484509706497192,
0.00008777901530265808,
-0.04046143591403961,
-0.03313903883099556,
-0.14123667776584625,
-0.216359943151474,
-0.015736717730760574,
0.06675486266613007,
-0.003233856987208128,
-0.012592333368957043,
0.02558799460530281,
0.018092775717377663,
-0.021233977749943733,
0.010111781768500805,
-0.046211544424295425,
0.001400127075612545,
-0.005804536864161491,
-0.03663623332977295,
0.056600421667099,
-0.04099283367395401,
0.023037761449813843,
-0.07047830522060394,
0.021940739825367928,
-0.2112080454826355,
0.08829662948846817,
-0.030348889529705048,
-0.00006290897727012634,
-0.038175277411937714,
-0.04085345193743706,
0.005168622359633446,
0.04485829547047615,
-0.008830324746668339,
0.11528746038675308,
-0.13469116389751434,
-0.05030269920825958,
0.18183287978172302,
-0.16109271347522736,
-0.00004842132329940796,
0.10061154514551163,
-0.049765996634960175,
0.05879351124167442,
0.13084575533866882,
0.10089628398418427,
0.0850706696510315,
-0.07026909291744232,
0.01548451092094183,
0.06101096794009209,
-0.06991150975227356,
0.05603228509426117,
0.09179168939590454,
-0.027091793715953827,
-0.13802459836006165,
0.027092114090919495,
-0.07607418298721313,
-0.00826876237988472,
-0.025428012013435364,
-0.023289036005735397,
0.006229540333151817,
-0.03917599841952324,
0.029929518699645996,
0.005049343220889568,
0.018751898780465126,
-0.03977881744503975,
-0.08297689259052277,
0.02876085601747036,
0.07626375555992126,
-0.0678832083940506,
0.04566220939159393,
-0.07184791564941406,
0.05554455518722534,
-0.07315494865179062,
-0.005741868168115616,
-0.16761362552642822,
-0.020081553608179092,
0.044829148799180984,
-0.05234459042549133,
0.0517997145652771,
0.09019635617733002,
0.002701584715396166,
0.12121604382991791,
-0.03928540647029877,
0.0042005013674497604,
-0.011538423597812653,
-0.01048690639436245,
-0.04878772795200348,
-0.12200057506561279,
-0.07889589667320251,
-0.06805826723575592,
0.10306523740291595,
-0.0700870156288147,
0.02915046736598015,
-0.07005160301923752,
-0.020852727815508842,
-0.006342636421322823,
-0.06031505763530731,
-0.007673522457480431,
0.007377663627266884,
-0.02961503341794014,
-0.04478013888001442,
0.04734888672828674,
0.0530330128967762,
-0.0608694925904274,
0.08157894015312195,
-0.10254115611314774,
-0.06544309109449387,
0.05448690056800842,
0.012009091675281525,
-0.07904980331659317,
0.09473311901092529,
-0.01944500021636486,
-0.013342421501874924,
-0.06058916822075844,
-0.0430162250995636,
0.1955912709236145,
-0.022533610463142395,
0.10137614607810974,
-0.09178617596626282,
0.0017276867292821407,
0.02824178710579872,
-0.048769086599349976,
-0.01777968928217888,
0.05727469176054001,
0.05144285410642624,
-0.18588225543498993,
0.013933304697275162,
0.053642526268959045,
0.07595998048782349,
0.11037373542785645,
0.02916913479566574,
-0.024397650733590126,
-0.04787350818514824,
-0.012297477573156357,
0.0070893894881010056,
0.05444052070379257,
-0.025221772491931915,
-0.006093372590839863,
0.031654514372348785,
0.059041909873485565,
0.018351363018155098,
-0.08325394988059998,
0.03653901442885399,
0.06962323188781738,
-0.016743849962949753,
-0.03795831650495529,
-0.027765899896621704,
-0.05871816724538803,
0.0630401074886322,
0.05425068363547325,
0.03899490833282471,
0.02548922970890999,
-0.014911131002008915,
-0.138380229473114,
0.1853238344192505,
-0.11449295282363892,
-0.25950536131858826,
-0.11054530739784241,
-0.05709519237279892,
-0.020689744502305984,
0.043039411306381226,
0.05742412805557251,
-0.02932947315275669,
-0.04177653044462204,
-0.11568669974803925,
0.06072806566953659,
-0.0661771148443222,
-0.030330833047628403,
-0.0074287280440330505,
-0.04885610193014145,
-0.017536969855427742,
-0.12639203667640686,
-0.011865058913826942,
-0.030665535479784012,
-0.07550031691789627,
0.006512284278869629,
-0.03324676677584648,
0.02665841393172741,
0.1394454538822174,
0.03310889005661011,
-0.019620295614004135,
-0.015818722546100616,
0.18579038977622986,
0.010506393387913704,
0.05999259650707245,
0.11279468983411789,
-0.03224300220608711,
0.05457734316587448,
0.047031015157699585,
0.023733271285891533,
-0.0454106405377388,
0.012977412901818752,
-0.016224455088377,
-0.12198682874441147,
-0.1761534959077835,
-0.0729738101363182,
-0.0018083490431308746,
0.006636145059019327,
0.02061912603676319,
0.03574974089860916,
0.022685758769512177,
0.03898705169558525,
-0.02923736348748207,
0.026214074343442917,
-0.012640442699193954,
0.08061927556991577,
0.023499302566051483,
-0.07385627925395966,
0.09073380380868912,
-0.06019431725144386,
0.016067706048488617,
0.11142237484455109,
-0.0625494047999382,
0.18937858939170837,
0.025605924427509308,
0.060442566871643066,
0.1004023402929306,
0.01877420023083687,
0.05570782721042633,
0.08402230590581894,
-0.045310020446777344,
0.008022451773285866,
-0.06149205565452576,
-0.05087180808186531,
-0.03732427582144737,
0.05310434848070145,
0.02977517992258072,
0.01894432306289673,
-0.11814966052770615,
0.02403174713253975,
-0.0009680490475147963,
0.13632698357105255,
0.04898449033498764,
-0.11490857601165771,
-0.12217647582292557,
0.03629797697067261,
-0.044542793184518814,
-0.06268921494483948,
0.029747430235147476,
0.058939605951309204,
-0.15301477909088135,
0.043264176696538925,
-0.0069280266761779785,
0.06602062284946442,
-0.08929011225700378,
0.014964875765144825,
-0.046940878033638,
-0.005442187190055847,
0.00662448164075613,
0.07026508450508118,
-0.1318570375442505,
0.1079663410782814,
0.020470622926950455,
0.04670330137014389,
-0.07909739017486572,
0.016522355377674103,
-0.010672503150999546,
0.10658510774374008,
0.11669628322124481,
0.04618766903877258,
-0.05435466021299362,
-0.014046364463865757,
-0.046649958938360214,
0.018638961017131805,
0.05832206457853317,
-0.08030831068754196,
0.058608587831258774,
0.008059464395046234,
0.007212779484689236,
-0.02250107377767563,
0.016571659594774246,
-0.13144776225090027,
-0.1208963394165039,
0.06294596940279007,
-0.07911528646945953,
-0.10246402770280838,
-0.05880863964557648,
-0.06227255240082741,
-0.046191394329071045,
0.2161603718996048,
-0.1137428730726242,
-0.08848422765731812,
-0.09682135283946991,
-0.011910099536180496,
0.04405117779970169,
-0.06706744432449341,
0.04703076183795929,
-0.03757432475686073,
0.09627191722393036,
-0.04968629777431488,
-0.10885535180568695,
0.03482348844408989,
-0.11242860555648804,
-0.11491601169109344,
-0.045268524438142776,
0.10686223953962326,
0.11461194604635239,
0.03988318517804146,
0.011360072530806065,
0.01292472705245018,
-0.0005963388830423355,
-0.11535835266113281,
0.01646621711552143,
0.13262426853179932,
-0.0038257427513599396,
0.07300369441509247,
-0.057880863547325134,
0.02410557121038437,
-0.016633138060569763,
-0.002488423138856888,
0.1344602108001709,
0.1878712773323059,
-0.0660950243473053,
0.18003541231155396,
0.20224615931510925,
-0.1066243126988411,
-0.19240719079971313,
-0.053552065044641495,
-0.0014008665457367897,
0.044672589749097824,
0.05313378572463989,
-0.18501922488212585,
0.08860071003437042,
0.03146041929721832,
-0.034969866275787354,
0.011905912309885025,
-0.2366527020931244,
-0.11194579303264618,
0.08960644155740738,
0.0538627952337265,
0.1883576512336731,
-0.0766611322760582,
-0.040985994040966034,
-0.015803253278136253,
-0.0421634316444397,
0.04816626012325287,
-0.03568299114704132,
0.08731906116008759,
0.008384637534618378,
-0.02871653437614441,
0.004087214358150959,
-0.031151220202445984,
0.0960800051689148,
0.03974916785955429,
0.019168362021446228,
-0.07019881904125214,
-0.008062558248639107,
0.11089807748794556,
-0.03938744589686394,
0.09647586196660995,
0.04715471714735031,
0.07638727128505707,
-0.09507542848587036,
-0.05918239429593086,
-0.0730886235833168,
0.04431229829788208,
-0.041038818657398224,
-0.05694695934653282,
-0.06548498570919037,
0.059275612235069275,
0.038097865879535675,
0.011568931862711906,
-0.0004198644310235977,
-0.035825200378894806,
0.0420246347784996,
0.09335123002529144,
0.08043481409549713,
-0.0340706966817379,
-0.06959900259971619,
-0.0513223297894001,
-0.04796035960316658,
0.06646668910980225,
-0.09522183239459991,
0.02267652004957199,
0.026057494804263115,
0.011619225144386292,
0.08928936719894409,
0.03277145326137543,
-0.14097684621810913,
0.0120382159948349,
0.0336151123046875,
-0.1236506849527359,
-0.1079128235578537,
-0.0216507650911808,
0.03796882927417755,
-0.036182381212711334,
0.051334209740161896,
0.1474495232105255,
-0.036471787840127945,
-0.03230869397521019,
-0.04722628742456436,
0.03919433802366257,
-0.020224086940288544,
0.044372446835041046,
0.06310754269361496,
0.03024967573583126,
-0.07055720686912537,
0.07600396126508713,
0.04095371812582016,
-0.0374906063079834,
0.04196824133396149,
0.04097576439380646,
-0.09264913201332092,
-0.0778423398733139,
-0.05882449075579643,
0.08932962268590927,
-0.023606983944773674,
-0.04584921896457672,
0.0003790333867073059,
-0.08138905465602875,
0.06822732090950012,
0.0729549303650856,
0.04841173440217972,
0.03726107254624367,
-0.08733196556568146,
0.01595368981361389,
-0.052361562848091125,
0.0363430380821228,
-0.029200730845332146,
-0.005098965018987656,
-0.05387338995933533,
0.06845396757125854,
0.06495629996061325,
0.09940827637910843,
-0.03393435478210449,
-0.0758679062128067,
-0.08057811856269836,
-0.011317091062664986,
-0.060764119029045105,
-0.031746719032526016,
-0.07494056224822998,
-0.00810154341161251,
0.0015279450453817844,
-0.005821097642183304,
0.02046343870460987,
0.03609524667263031,
-0.04308526590466499,
-0.0184374637901783,
-0.03985234349966049,
0.0356377437710762,
-0.062247127294540405,
0.0062613943591713905,
0.01876126416027546,
-0.03547698259353638,
0.09276542067527771,
0.03619181364774704,
-0.01123180240392685,
0.04646936431527138,
-0.024294275790452957,
0.03440259024500847,
-0.02277219481766224,
0.002136706840246916,
-0.024670153856277466,
-0.11310015618801117,
-0.004411113914102316,
0.004669573158025742,
-0.026258859783411026,
0.008751858957111835,
0.061309099197387695,
-0.07291735708713531,
0.0866459459066391,
0.04592325538396835,
-0.032696522772312164,
-0.07147335261106491,
0.03840072825551033,
-0.01649533025920391,
0.030189745128154755,
0.06898197531700134,
-0.032251711934804916,
0.05418098717927933,
-0.09881040453910828,
-0.02729443833231926,
0.0036866609007120132,
0.00018868595361709595,
-0.012406406924128532,
-0.05373836308717728,
-0.006094078533351421,
0.006535356864333153,
0.17099297046661377,
-0.02157815545797348,
0.033563751727342606,
0.016171200200915337,
0.005179246887564659,
0.04703223332762718,
-0.013381846249103546,
0.07111884653568268,
-0.005840315483510494,
-0.025692090392112732,
-0.014676116406917572,
0.034398630261421204,
0.006104808300733566,
0.004115071147680283,
0.14439362287521362,
0.04766807332634926,
0.09425589442253113,
0.0747583732008934,
0.01679348386824131,
0.018777498975396156,
-0.1342594176530838,
-0.08583252876996994,
0.0031273793429136276,
0.054297469556331635,
-0.015306752175092697,
0.015457671135663986,
0.09471987932920456,
-0.08859116584062576,
0.07058387994766235,
0.049488723278045654,
-0.04750858619809151,
-0.1283734142780304,
-0.18873780965805054,
-0.0262187197804451,
-0.029681289568543434,
-0.012138869613409042,
-0.08930232375860214,
0.018246807157993317,
0.09041760861873627,
0.022436048835515976,
-0.009631228633224964,
0.0916101336479187,
-0.1096765473484993,
-0.03109550289809704,
0.04156653583049774,
-0.026219366118311882,
0.017386045306921005,
0.04904015362262726,
0.02159162610769272,
-0.005288153886795044,
0.049313198775053024,
0.041446883231401443,
0.04559657722711563,
0.02829107828438282,
0.05071607977151871,
-0.023225698620080948,
-0.07437916100025177,
-0.03297577425837517,
-0.007368343882262707,
0.05525359511375427,
0.13044466078281403,
0.024170009419322014,
-0.0662047266960144,
0.007574584800750017,
0.11221984773874283,
-0.03220914304256439,
-0.05136249586939812,
-0.10876704752445221,
0.23893241584300995,
0.023203838616609573,
0.0033582730684429407,
-0.0016499352641403675,
-0.046553321182727814,
0.002544347196817398,
0.20867814123630524,
0.22282953560352325,
0.0026456923224031925,
-0.00902387872338295,
0.011098932474851608,
-0.01155900303274393,
0.03712647780776024,
0.14464199542999268,
0.0028638411313295364,
0.25227004289627075,
-0.04754834622144699,
0.03740397468209267,
-0.04257282242178917,
-0.039167072623968124,
-0.09992848336696625,
0.07424572855234146,
-0.01208418793976307,
0.008735794574022293,
-0.03267265856266022,
0.07242006063461304,
-0.03839908912777901,
-0.17314967513084412,
0.0017955033108592033,
-0.005957632791250944,
-0.06176072359085083,
0.014981781132519245,
0.0029266998171806335,
0.017666125670075417,
0.08295530080795288,
-0.01846577227115631,
-0.008202515542507172,
0.12785837054252625,
0.02030397392809391,
-0.09756524115800858,
-0.0587916374206543,
0.11541537195444107,
0.017513172701001167,
0.14567716419696808,
0.012537205591797829,
0.0778447836637497,
0.08617394417524338,
0.021843519061803818,
-0.09599711745977402,
0.04035705700516701,
-0.019472267478704453,
-0.02738805301487446,
0.00561594869941473,
0.11063635349273682,
-0.010310065001249313,
0.060925014317035675,
0.030178159475326538,
-0.09295565634965897,
0.05926559120416641,
0.00792279839515686,
-0.034285567700862885,
-0.07949445396661758,
0.08261193335056305,
-0.09172898530960083,
0.15487676858901978,
0.11915495246648788,
-0.014628845266997814,
-0.045997075736522675,
-0.028063837438821793,
0.018394481390714645,
-0.0023973151110112667,
0.06191196292638779,
-0.02357669174671173,
-0.1342744678258896,
0.0198197178542614,
-0.08303748071193695,
0.029271816834807396,
-0.25108543038368225,
-0.09083640575408936,
0.029850367456674576,
-0.017440272495150566,
-0.017631452530622482,
0.051521748304367065,
0.04638165980577469,
0.025051450356841087,
-0.03576887398958206,
0.020901013165712357,
-0.03744721785187721,
0.05847616493701935,
-0.10967788100242615,
-0.09337379038333893
] |
null | null | transformers | # MultiBERTs Seed 2 Checkpoint 180k (uncased)
Seed 2 intermediate checkpoint 180k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multberts-seed-2). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
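The NSP objective can be probed with the next-sentence-prediction head. The sketch below is illustrative only; it assumes the short checkpoint name resolves on the hub and that the NSP head weights load from this pretraining checkpoint.
```python
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

# Illustrative sketch of the NSP objective; the checkpoint name is an assumption.
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-180k')
model = BertForNextSentencePrediction.from_pretrained('multiberts-seed-2-180k')

prompt = "The storm knocked out the power."
next_sentence = "Candles were the only light left in the house."
encoding = tokenizer(prompt, next_sentence, return_tensors='pt')
logits = model(**encoding).logits
print(torch.softmax(logits, dim=-1))  # index 0 ~ "is the next sentence", index 1 ~ "is random"
```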
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
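For example, a minimal sketch of attaching a sequence-classification head for fine-tuning (the label count is arbitrary and the checkpoint name is an assumption):
```python
from transformers import BertTokenizer, BertForSequenceClassification

# Sketch only: wrap the pretrained encoder with a freshly initialised
# classification head, ready to be fine-tuned on a labelled dataset.
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-180k')
model = BertForSequenceClassification.from_pretrained('multiberts-seed-2-180k', num_labels=2)

inputs = tokenizer("This checkpoint is ready for fine-tuning.", return_tensors='pt')
logits = model(**inputs).logits  # untrained head: logits are not meaningful yet
```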
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-180k')
model = BertModel.from_pretrained("multiberts-seed-2-180k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-2"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-2-180k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-2",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 2 Checkpoint 180k (uncased)
Seed 2 intermediate checkpoint 180k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
Each MultiBERTs model is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
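As a concrete illustration of these two objectives, the transformers library exposes both heads through `BertForPreTraining`. The sketch below uses the `bert-base-uncased` weights as a stand-in rather than this checkpoint.
```python
from transformers import BertTokenizer, BertForPreTraining

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForPreTraining.from_pretrained("bert-base-uncased")

inputs = tokenizer("The cat sat on the [MASK].", "It then fell asleep.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.prediction_logits.shape)        # MLM head: (1, sequence_length, vocab_size)
print(outputs.seq_relationship_logits.shape)  # NSP head: (1, 2)
```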
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
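The snippet below is a minimal sketch; the checkpoint identifier 'multiberts-seed-2-180k' mirrors the naming used for this repository and may need to be replaced by the full Hub id.
```python
from transformers import BertTokenizer, BertModel

# 'multiberts-seed-2-180k' is assumed to resolve to this checkpoint on the Hub
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-180k')
model = BertModel.from_pretrained('multiberts-seed-2-180k')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```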
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitations and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form: '[CLS] Sentence A [SEP] Sentence B [SEP]'.
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text, usually longer than a single sentence. The only constraint is that the two
"sentences" have a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 2 Checkpoint 180k (uncased)\nSeed 2 intermediate checkpoint 180k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 2 Checkpoint 180k (uncased)\nSeed 2 intermediate checkpoint 180k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 2 Checkpoint 180k (uncased)\nSeed 2 intermediate checkpoint 180k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08232530206441879,
0.0002886964939534664,
-0.0023594836238771677,
0.0679272785782814,
0.08124280720949173,
0.001845063641667366,
0.1147432029247284,
0.05073990672826767,
-0.031051862984895706,
0.02443338744342327,
0.09182000160217285,
0.03044380620121956,
0.04263215512037277,
0.067634217441082,
0.09700610488653183,
-0.259683758020401,
0.04916462302207947,
-0.06337535381317139,
0.04915179684758186,
0.07558482140302658,
0.10127545893192291,
-0.06970079243183136,
0.06160636991262436,
0.03960908204317093,
-0.08230216801166534,
-0.018974117934703827,
-0.013612142764031887,
-0.03329048678278923,
0.10127542167901993,
0.07172977179288864,
0.06225094199180603,
0.00222809799015522,
0.059168584644794464,
-0.09014887362718582,
0.015929054468870163,
0.045747630298137665,
0.00002551451325416565,
0.027380909770727158,
-0.007153289392590523,
0.014210965484380722,
0.1114763617515564,
0.037734970450401306,
0.07654707133769989,
0.03470679745078087,
-0.09539838135242462,
-0.11525745689868927,
-0.0799863338470459,
0.10460707545280457,
0.05257813632488251,
0.039634667336940765,
-0.0057325949892401695,
0.07297992706298828,
-0.028137823566794395,
0.07189664244651794,
0.11010099947452545,
-0.2574172914028168,
-0.009340811520814896,
0.06120046228170395,
0.04381720721721649,
0.04292970895767212,
0.013936644420027733,
0.027112917974591255,
0.005771081894636154,
0.04115082323551178,
0.029517576098442078,
-0.023085277527570724,
0.12150434404611588,
-0.046017955988645554,
-0.15084680914878845,
-0.044207893311977386,
0.11417709290981293,
-0.007289702072739601,
-0.12505364418029785,
-0.09870044887065887,
-0.03374781459569931,
0.11980926245450974,
-0.0015125973150134087,
-0.019433187320828438,
-0.004806853365153074,
0.012951887212693691,
0.024825528264045715,
-0.09926815330982208,
-0.08785143494606018,
-0.03085317462682724,
-0.039584286510944366,
0.13255181908607483,
0.04652462899684906,
0.05272567644715309,
-0.036431774497032166,
0.08678578585386276,
-0.11704050004482269,
-0.039891719818115234,
-0.052869945764541626,
-0.08257216215133667,
-0.02021409384906292,
0.011005877517163754,
-0.02752372995018959,
-0.08679470419883728,
-0.06084759905934334,
0.11643625795841217,
0.03225334733724594,
0.02752181887626648,
-0.006871767342090607,
0.042868103832006454,
0.07324105501174927,
0.09680461883544922,
-0.03747066482901573,
0.056814879179000854,
0.03271039202809334,
-0.023747604340314865,
0.05903702974319458,
-0.05159689858555794,
-0.10099489241838455,
0.07284428179264069,
-0.00028251856565475464,
0.038584642112255096,
0.027475135400891304,
0.030208121985197067,
-0.01333758607506752,
-0.07380187511444092,
0.16625016927719116,
-0.07568348944187164,
-0.008088475093245506,
-0.014628482051193714,
0.012204693630337715,
0.046551138162612915,
0.03402139991521835,
-0.0051545086316764355,
-0.045681118965148926,
-0.005696016363799572,
-0.05388942360877991,
-0.024440880864858627,
-0.05473300814628601,
-0.11587240546941757,
0.0013786889612674713,
-0.03967320919036865,
-0.032844994217157364,
-0.1420176923274994,
-0.2124202996492386,
-0.016045760363340378,
0.06504547595977783,
-0.0032844808883965015,
-0.010359722189605236,
0.025249887257814407,
0.019043998792767525,
-0.019817354157567024,
0.009869204834103584,
-0.04300311207771301,
0.0008377861231565475,
-0.004541456699371338,
-0.03670330345630646,
0.057485319674015045,
-0.04184362664818764,
0.022651419043540955,
-0.07053010165691376,
0.023133888840675354,
-0.20930734276771545,
0.08883504569530487,
-0.03025123104453087,
-0.00384585652500391,
-0.03993621841073036,
-0.04085424542427063,
0.0044645629823207855,
0.04578157514333725,
-0.0085744084790349,
0.11416414380073547,
-0.1391448825597763,
-0.05069054290652275,
0.18440528213977814,
-0.161691814661026,
-0.00012526661157608032,
0.10177814960479736,
-0.04959724098443985,
0.05931229516863823,
0.13109728693962097,
0.09927638620138168,
0.08247055858373642,
-0.07296019047498703,
0.014930044300854206,
0.0615655742585659,
-0.07078219950199127,
0.05886803939938545,
0.09268492460250854,
-0.02609020471572876,
-0.13570383191108704,
0.02692437171936035,
-0.07556276023387909,
-0.009291652590036392,
-0.025575954467058182,
-0.02171236276626587,
0.00640479288995266,
-0.038089219480752945,
0.0320296585559845,
0.004252744372934103,
0.017901241779327393,
-0.03858650475740433,
-0.08396436274051666,
0.030329857021570206,
0.0764300674200058,
-0.07021394371986389,
0.04523439332842827,
-0.06994771957397461,
0.058102142065763474,
-0.07264644652605057,
-0.004308716394007206,
-0.16787627339363098,
-0.021815255284309387,
0.044669486582279205,
-0.054181188344955444,
0.05136207491159439,
0.09324642270803452,
0.0032738070003688335,
0.12244178354740143,
-0.03865639120340347,
0.004447251558303833,
-0.009400730952620506,
-0.010937181301414967,
-0.05067518353462219,
-0.12276674807071686,
-0.08058281242847443,
-0.06827622652053833,
0.10278406739234924,
-0.07189562916755676,
0.027805596590042114,
-0.07122839242219925,
-0.019072847440838814,
-0.006563222035765648,
-0.058284543454647064,
-0.006349041126668453,
0.006850674748420715,
-0.029531056061387062,
-0.04461533576250076,
0.048014700412750244,
0.05111441761255264,
-0.0619516521692276,
0.08251785486936569,
-0.10548167675733566,
-0.06525631994009018,
0.05401793122291565,
0.012253167107701302,
-0.07850965857505798,
0.09148934483528137,
-0.01956474967300892,
-0.013383774086833,
-0.06167774274945259,
-0.043967776000499725,
0.19241295754909515,
-0.02119506523013115,
0.10069866478443146,
-0.09176400303840637,
0.0018273515161126852,
0.027323557063937187,
-0.050067655742168427,
-0.017752954736351967,
0.060420576483011246,
0.047079622745513916,
-0.19201061129570007,
0.01424834132194519,
0.05723920464515686,
0.07679291814565659,
0.11175695806741714,
0.028691675513982773,
-0.02579210326075554,
-0.048443011939525604,
-0.010333403944969177,
0.007093043066561222,
0.05310116708278656,
-0.02130766585469246,
-0.006331025157123804,
0.032867737114429474,
0.059288181364536285,
0.01724350079894066,
-0.0832352340221405,
0.0358712337911129,
0.06859345734119415,
-0.018027929589152336,
-0.03811529278755188,
-0.025393985211849213,
-0.05887720733880997,
0.06362660229206085,
0.05394300818443298,
0.03872446343302727,
0.025678005069494247,
-0.014058488421142101,
-0.13736620545387268,
0.18615104258060455,
-0.1129738986492157,
-0.25887352228164673,
-0.10878849774599075,
-0.052973367273807526,
-0.0222941804677248,
0.04227627068758011,
0.058243222534656525,
-0.027057893574237823,
-0.04057711362838745,
-0.11482104659080505,
0.0667550340294838,
-0.06581071764230728,
-0.03070194646716118,
-0.01145193912088871,
-0.049369633197784424,
-0.018136152997612953,
-0.12719020247459412,
-0.012009778991341591,
-0.03107234463095665,
-0.07580786198377609,
0.006268151104450226,
-0.03556917607784271,
0.028125744313001633,
0.13666126132011414,
0.03443228825926781,
-0.02078195847570896,
-0.01646190509200096,
0.1865823119878769,
0.010497476905584335,
0.060097403824329376,
0.11523062735795975,
-0.03207963705062866,
0.05541066825389862,
0.04387065768241882,
0.02366778813302517,
-0.045771755278110504,
0.01259244978427887,
-0.01529794279485941,
-0.12222247570753098,
-0.17469793558120728,
-0.07177494466304779,
-0.0018331161700189114,
0.0072374106384813786,
0.020999006927013397,
0.03534988313913345,
0.020874690264463425,
0.039278652518987656,
-0.03131283074617386,
0.027422187849879265,
-0.010901611298322678,
0.08006537705659866,
0.023949410766363144,
-0.07316306233406067,
0.09223990142345428,
-0.060766011476516724,
0.01595328189432621,
0.1110684722661972,
-0.060469891875982285,
0.18827420473098755,
0.02460026554763317,
0.062146980315446854,
0.100765660405159,
0.019803296774625778,
0.055444858968257904,
0.08707219362258911,
-0.045406773686409,
0.008341089822351933,
-0.06301848590373993,
-0.0518525093793869,
-0.03982612118124962,
0.05304469168186188,
0.03105830028653145,
0.016844261437654495,
-0.11920241266489029,
0.023104041814804077,
-0.001021778560243547,
0.13747458159923553,
0.050461091101169586,
-0.11469541490077972,
-0.12274372577667236,
0.03668759763240814,
-0.04462829977273941,
-0.06243220716714859,
0.02902345359325409,
0.058782920241355896,
-0.15348249673843384,
0.04301212355494499,
-0.00686808954924345,
0.06699686497449875,
-0.09357244521379471,
0.015353120863437653,
-0.05150967836380005,
-0.0036938339471817017,
0.006414839532226324,
0.07150138169527054,
-0.13410571217536926,
0.10931690037250519,
0.020424991846084595,
0.04727550595998764,
-0.08033038675785065,
0.015551824122667313,
-0.010996791534125805,
0.10634574294090271,
0.11727797985076904,
0.044644780457019806,
-0.05840098485350609,
-0.01698172092437744,
-0.04677389934659004,
0.01916700229048729,
0.05944298207759857,
-0.08099208027124405,
0.05843845754861832,
0.006815016269683838,
0.007130630314350128,
-0.022734981030225754,
0.020036879926919937,
-0.1324824094772339,
-0.1199539452791214,
0.06260804831981659,
-0.0814904123544693,
-0.10057171434164047,
-0.05811930075287819,
-0.062389686703681946,
-0.045476190745830536,
0.2120627611875534,
-0.11263488233089447,
-0.08865901827812195,
-0.0972132682800293,
-0.01346784457564354,
0.044815413653850555,
-0.06866437196731567,
0.044195763766765594,
-0.0388045571744442,
0.09553039073944092,
-0.0504690483212471,
-0.11062982678413391,
0.03568876534700394,
-0.11235728859901428,
-0.11730504035949707,
-0.04417281970381737,
0.10787547379732132,
0.11562920361757278,
0.03986696898937225,
0.013596323318779469,
0.0116794528439641,
-0.0003130398690700531,
-0.11535349488258362,
0.01720992662012577,
0.13331469893455505,
-0.004025347530841827,
0.06962133944034576,
-0.05916176736354828,
0.026579365134239197,
-0.016768384724855423,
-0.0004587192088365555,
0.13400015234947205,
0.18702059984207153,
-0.06575480103492737,
0.17867378890514374,
0.20272204279899597,
-0.1050214022397995,
-0.19227349758148193,
-0.05712287873029709,
-0.000040119513869285583,
0.04516514763236046,
0.05301245301961899,
-0.18530195951461792,
0.08656182885169983,
0.035038065165281296,
-0.034852541983127594,
0.014113087207078934,
-0.23558449745178223,
-0.11120408773422241,
0.08938466012477875,
0.05230598896741867,
0.18722832202911377,
-0.07790762186050415,
-0.04049266129732132,
-0.01726461946964264,
-0.040913425385951996,
0.05191085860133171,
-0.03897540271282196,
0.08896172046661377,
0.008584048599004745,
-0.028565941378474236,
0.003386680968105793,
-0.0300927571952343,
0.09705886989831924,
0.03825206309556961,
0.01937924325466156,
-0.06991910934448242,
-0.006768921390175819,
0.10683633387088776,
-0.03989405184984207,
0.09788304567337036,
0.04436106234788895,
0.07536109536886215,
-0.09472842514514923,
-0.05996910482645035,
-0.07479110360145569,
0.04516012221574783,
-0.04126597195863724,
-0.056533146649599075,
-0.06443670392036438,
0.05866381525993347,
0.0370185412466526,
0.012049457989633083,
0.0017315596342086792,
-0.03661181777715683,
0.041751354932785034,
0.0930192694067955,
0.08100929111242294,
-0.02897839993238449,
-0.07096629589796066,
-0.05260157585144043,
-0.04834750294685364,
0.06813955307006836,
-0.09630706906318665,
0.023023217916488647,
0.02555047906935215,
0.010691867209970951,
0.08926890045404434,
0.03275822475552559,
-0.14113816618919373,
0.013613780029118061,
0.03147951513528824,
-0.12273228168487549,
-0.11231181025505066,
-0.01985882967710495,
0.03300025314092636,
-0.03522614389657974,
0.05382393300533295,
0.1496441662311554,
-0.03585914894938469,
-0.03246661648154259,
-0.04737120866775513,
0.03830276057124138,
-0.021979350596666336,
0.04604479297995567,
0.06322605907917023,
0.030071960762143135,
-0.07057909667491913,
0.07314561307430267,
0.03919879347085953,
-0.03352711722254753,
0.042450036853551865,
0.03863164409995079,
-0.09197480231523514,
-0.07864772528409958,
-0.05963527038693428,
0.08721200376749039,
-0.025056181475520134,
-0.04689997062087059,
0.0034419018775224686,
-0.08170084655284882,
0.0665072575211525,
0.07392466068267822,
0.04890020191669464,
0.038601309061050415,
-0.08638238906860352,
0.016428928822278976,
-0.0510176345705986,
0.03528750315308571,
-0.027943098917603493,
-0.005649615079164505,
-0.0527893602848053,
0.06937745213508606,
0.06426285207271576,
0.10028722137212753,
-0.03489202633500099,
-0.07529629021883011,
-0.08209923654794693,
-0.012699969112873077,
-0.06567724794149399,
-0.0321674607694149,
-0.0764438658952713,
-0.008468639105558395,
0.0015509645454585552,
-0.005924087017774582,
0.02352346107363701,
0.03520195186138153,
-0.04280238598585129,
-0.0181264765560627,
-0.040195055305957794,
0.036878552287817,
-0.06556405872106552,
0.0054769618436694145,
0.01824970543384552,
-0.03637191280722618,
0.09310001134872437,
0.03727084398269653,
-0.010480949655175209,
0.04758873209357262,
-0.027259938418865204,
0.034967001527547836,
-0.02225496992468834,
0.0011925294529646635,
-0.02396557293832302,
-0.1134386956691742,
-0.0059921396896243095,
0.0041949562728405,
-0.025616642087697983,
0.008457029238343239,
0.05997709929943085,
-0.0728766918182373,
0.08739650249481201,
0.045179009437561035,
-0.03389337658882141,
-0.07004031538963318,
0.039988692849874496,
-0.018360761925578117,
0.031326960772275925,
0.07019153982400894,
-0.03191728517413139,
0.055387966334819794,
-0.09814415872097015,
-0.02713434025645256,
0.004764476791024208,
-0.0018594823777675629,
-0.011074738577008247,
-0.05455836281180382,
-0.0051268041133880615,
0.006210791878402233,
0.1684836745262146,
-0.023090630769729614,
0.037505343556404114,
0.014294566586613655,
0.00568869523704052,
0.04783513396978378,
-0.012980401515960693,
0.07022702693939209,
-0.006109601818025112,
-0.02431090548634529,
-0.015920745208859444,
0.03482974320650101,
0.0047425273805856705,
0.005849678069353104,
0.14500495791435242,
0.046972211450338364,
0.09089802205562592,
0.07565721124410629,
0.017503686249256134,
0.019805870950222015,
-0.13865862786769867,
-0.08929318934679031,
0.00563416164368391,
0.05414220690727234,
-0.015995310619473457,
0.015515610575675964,
0.09448603540658951,
-0.08847461640834808,
0.06945827603340149,
0.04989555850625038,
-0.04777190461754799,
-0.12873420119285583,
-0.19416651129722595,
-0.027026113122701645,
-0.027533093467354774,
-0.012733187526464462,
-0.08830417692661285,
0.018640555441379547,
0.09083724021911621,
0.023164210841059685,
-0.009780509397387505,
0.09092728793621063,
-0.10537981986999512,
-0.031506940722465515,
0.04241711273789406,
-0.025550412014126778,
0.017615044489502907,
0.050953615456819534,
0.02227245643734932,
-0.004060246050357819,
0.04718247801065445,
0.041633959859609604,
0.0453668087720871,
0.02795257605612278,
0.05090859532356262,
-0.02550930343568325,
-0.07551851868629456,
-0.0333140529692173,
-0.004909220151603222,
0.054970934987068176,
0.13352420926094055,
0.025736819952726364,
-0.06689218431711197,
0.006828111596405506,
0.1107720211148262,
-0.031256500631570816,
-0.051203008741140366,
-0.10870849341154099,
0.2420671284198761,
0.02063712105154991,
0.005029108840972185,
-0.002465202007442713,
-0.045650314539670944,
0.0032070353627204895,
0.2080804854631424,
0.22290043532848358,
0.0016151935560628772,
-0.008214114233851433,
0.010240217670798302,
-0.011006508953869343,
0.03817281872034073,
0.1445455551147461,
0.0027765613049268723,
0.255103200674057,
-0.04829064756631851,
0.03720821440219879,
-0.04323779046535492,
-0.039031319320201874,
-0.10075205564498901,
0.07330681383609772,
-0.011929847300052643,
0.008516771718859673,
-0.03218208998441696,
0.07246306538581848,
-0.0385514572262764,
-0.1761264055967331,
-0.0008071335032582283,
-0.0023718546144664288,
-0.06214269623160362,
0.014099061489105225,
0.0016204481944441795,
0.019773142412304878,
0.08480415493249893,
-0.01912059262394905,
-0.007839176803827286,
0.1323278546333313,
0.019799675792455673,
-0.09698565304279327,
-0.055686093866825104,
0.11481788754463196,
0.02040790021419525,
0.14162570238113403,
0.012872420251369476,
0.07868204265832901,
0.08601508289575577,
0.02198905497789383,
-0.09442098438739777,
0.04163780435919762,
-0.018347224220633507,
-0.02765076979994774,
0.006556578446179628,
0.1103222519159317,
-0.010044142603874207,
0.05933776870369911,
0.031326115131378174,
-0.09486425668001175,
0.06096828356385231,
0.01059228926897049,
-0.035519666969776154,
-0.07891708612442017,
0.08530725538730621,
-0.09132471680641174,
0.1551288515329361,
0.12164421379566193,
-0.013878712430596352,
-0.04634048044681549,
-0.028809668496251106,
0.018720848485827446,
-0.0016444548964500427,
0.06049495190382004,
-0.024986524134874344,
-0.1321503072977066,
0.018384454771876335,
-0.08142969012260437,
0.028444265946745872,
-0.2473875880241394,
-0.09037650376558304,
0.029440592974424362,
-0.017755471169948578,
-0.016677167266607285,
0.049855247139930725,
0.043334197252988815,
0.024353036656975746,
-0.03554341942071915,
0.017242610454559326,
-0.03768552467226982,
0.05733916535973549,
-0.10890717804431915,
-0.09244012087583542
] |
null | null | transformers | # MultiBERTs Seed 2 Checkpoint 1900k (uncased)
Seed 2 intermediate checkpoint 1900k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multberts-seed-2). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
Each MultiBERTs model is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# Load the tokenizer and the weights of this intermediate checkpoint
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-1900k')
model = BertModel.from_pretrained("multiberts-seed-2-1900k")

# Tokenize an input text and run a forward pass to obtain its features
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
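The `output` object exposes the per-token hidden states, which are the features referred to above; one simple way to use them (attribute names follow the transformers `BertModel` output) is:
```python
token_features = output.last_hidden_state        # shape: (batch_size, sequence_length, 768)
sentence_embedding = token_features.mean(dim=1)  # naive mean-pooled sentence representation
```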
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text, usually longer than a single sentence. The only constraint is that the two
"sentences" have a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-2"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-2-1900k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-2",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 2 Checkpoint 1900k (uncased)
Seed 2 intermediate checkpoint 1900k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
Each MultiBERTs model is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitations and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form: '[CLS] Sentence A [SEP] Sentence B [SEP]'.
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text, usually longer than a single sentence. The only constraint is that the two
"sentences" have a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 2 Checkpoint 1900k (uncased)\nSeed 2 intermediate checkpoint 1900k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 2 Checkpoint 1900k (uncased)\nSeed 2 intermediate checkpoint 1900k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 2 Checkpoint 1900k (uncased)\nSeed 2 intermediate checkpoint 1900k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08227716386318207,
0.0017093723872676492,
-0.0022821505554020405,
0.06640876084566116,
0.0807860866189003,
0.003746630623936653,
0.11621171236038208,
0.04958784207701683,
-0.03032805025577545,
0.022795822471380234,
0.09294357895851135,
0.02555011585354805,
0.04403308406472206,
0.0634206086397171,
0.09446264058351517,
-0.2580140233039856,
0.04765743389725685,
-0.0626501813530922,
0.05384451895952225,
0.07545589655637741,
0.10066655278205872,
-0.07172377407550812,
0.060109831392765045,
0.03966912627220154,
-0.08573742210865021,
-0.017321323975920677,
-0.012885124422609806,
-0.03650148957967758,
0.10092559456825256,
0.0711955651640892,
0.06382996588945389,
0.0020501185208559036,
0.06077501177787781,
-0.0875222235918045,
0.015718111768364906,
0.04525034874677658,
-0.0003660852089524269,
0.027308784425258636,
-0.009870423004031181,
0.014016762375831604,
0.11410333216190338,
0.04093996062874794,
0.07794661819934845,
0.03357231989502907,
-0.09594736248254776,
-0.11589713394641876,
-0.08066828548908234,
0.10787700116634369,
0.052411869168281555,
0.03741693124175072,
-0.0038051707670092583,
0.07392341643571854,
-0.029200386255979538,
0.07279045879840851,
0.10704222321510315,
-0.25272059440612793,
-0.010311906225979328,
0.06261719763278961,
0.04432857409119606,
0.04413507878780365,
0.011292249895632267,
0.025770969688892365,
0.0037849582731723785,
0.04068099707365036,
0.030263788998126984,
-0.02464074268937111,
0.11230014264583588,
-0.04369601979851723,
-0.15020692348480225,
-0.043939873576164246,
0.1146342009305954,
-0.00973920151591301,
-0.12530487775802612,
-0.09768608957529068,
-0.033538300544023514,
0.11817795783281326,
-0.003812147304415703,
-0.019002260640263557,
-0.002693241462111473,
0.012290512211620808,
0.02519499510526657,
-0.10065468400716782,
-0.08732713013887405,
-0.030544176697731018,
-0.03972095251083374,
0.13525262475013733,
0.045709893107414246,
0.05434512346982956,
-0.0382925346493721,
0.08620493113994598,
-0.11719756573438644,
-0.039960917085409164,
-0.05175505205988884,
-0.08164222538471222,
-0.019419455900788307,
0.009502777829766273,
-0.02828933298587799,
-0.08799711614847183,
-0.0627107322216034,
0.11427463591098785,
0.03187063708901405,
0.02743782475590706,
-0.005227759480476379,
0.043441787362098694,
0.07373335212469101,
0.0959249958395958,
-0.036737293004989624,
0.049413491040468216,
0.03437125310301781,
-0.02418946474790573,
0.05885984003543854,
-0.05128944665193558,
-0.10050167143344879,
0.07452841848134995,
0.0009019412100315094,
0.037215691059827805,
0.02705398015677929,
0.028773745521903038,
-0.010805809870362282,
-0.07108297944068909,
0.16210034489631653,
-0.07726181298494339,
-0.007002542726695538,
-0.015930891036987305,
0.011773724108934402,
0.043162159621715546,
0.03419055417180061,
-0.004276623018085957,
-0.045574504882097244,
-0.0061005232855677605,
-0.05622390657663345,
-0.023097623139619827,
-0.054414816200733185,
-0.116787850856781,
0.00007129181176424026,
-0.03832591325044632,
-0.03328336775302887,
-0.14006198942661285,
-0.21770592033863068,
-0.01631087064743042,
0.06535326689481735,
-0.0038708746433258057,
-0.009998468682169914,
0.02716141752898693,
0.018581314012408257,
-0.020122485235333443,
0.011371777392923832,
-0.04322566092014313,
0.0010341126471757889,
-0.004766106605529785,
-0.037772029638290405,
0.05704810470342636,
-0.04316777363419533,
0.022027546539902687,
-0.0688612088561058,
0.022252507507801056,
-0.21282079815864563,
0.08850795775651932,
-0.02913157269358635,
-0.0016536656767129898,
-0.03741808608174324,
-0.0402325764298439,
0.00515943206846714,
0.04588092118501663,
-0.010516749694943428,
0.11595156788825989,
-0.13392943143844604,
-0.051420919597148895,
0.1872730404138565,
-0.1610945165157318,
0.0005929209291934967,
0.09816062450408936,
-0.05049077793955803,
0.060297124087810516,
0.13236001133918762,
0.09883031249046326,
0.08062326908111572,
-0.07441007345914841,
0.016898520290851593,
0.06258174777030945,
-0.0704725980758667,
0.05640517175197601,
0.09093746542930603,
-0.02648647129535675,
-0.14063383638858795,
0.027165893465280533,
-0.07547831535339355,
-0.008714666590094566,
-0.026059499010443687,
-0.021518263965845108,
0.00862816534936428,
-0.038517020642757416,
0.03291533514857292,
0.0050675650127232075,
0.01906384527683258,
-0.0384882390499115,
-0.08205782622098923,
0.03492385521531105,
0.07523928582668304,
-0.06937763839960098,
0.04423760622739792,
-0.07006239891052246,
0.05544797331094742,
-0.07157956063747406,
-0.005837476812303066,
-0.1680576652288437,
-0.020169055089354515,
0.04478514939546585,
-0.05375472456216812,
0.05198607221245766,
0.09052349627017975,
0.003423559945076704,
0.12433996051549911,
-0.0400552824139595,
0.006390735972672701,
-0.009893789887428284,
-0.010350968688726425,
-0.0521736778318882,
-0.12336909770965576,
-0.07961355149745941,
-0.06812290847301483,
0.10637399554252625,
-0.0743890181183815,
0.029052402824163437,
-0.07017196714878082,
-0.019820820540189743,
-0.006334355100989342,
-0.059180207550525665,
-0.007367042824625969,
0.007274210453033447,
-0.028457259759306908,
-0.04419858753681183,
0.046526506543159485,
0.052961912006139755,
-0.06091175600886345,
0.08169039338827133,
-0.10420694947242737,
-0.06622140854597092,
0.05443752557039261,
0.013771021738648415,
-0.0812036544084549,
0.08812606334686279,
-0.019414585083723068,
-0.013192995451390743,
-0.0616215355694294,
-0.04615103453397751,
0.19492846727371216,
-0.0226203091442585,
0.10307438671588898,
-0.0932731181383133,
0.0000054415431804955006,
0.028006302192807198,
-0.04873917996883392,
-0.017386343330144882,
0.05476064234972,
0.049861472100019455,
-0.19287270307540894,
0.01530594751238823,
0.05110575258731842,
0.07467594742774963,
0.11447152495384216,
0.02894536778330803,
-0.02479076385498047,
-0.04919218644499779,
-0.013724143616855145,
0.005102095194160938,
0.057247210294008255,
-0.02901872619986534,
-0.007607471197843552,
0.031620901077985764,
0.0558638721704483,
0.018349727615714073,
-0.08344564586877823,
0.03717077523469925,
0.06821843981742859,
-0.016310151666402817,
-0.039178453385829926,
-0.027635658159852028,
-0.05747087299823761,
0.06462772935628891,
0.05310188606381416,
0.03951624408364296,
0.02375754714012146,
-0.014495918527245522,
-0.13935978710651398,
0.18574082851409912,
-0.11465328931808472,
-0.2577252686023712,
-0.11038097739219666,
-0.05607607960700989,
-0.021041804924607277,
0.0425686240196228,
0.05762440711259842,
-0.023305432870984077,
-0.04058511182665825,
-0.11680522561073303,
0.06051689013838768,
-0.06306479871273041,
-0.029204372316598892,
-0.009400583803653717,
-0.049713149666786194,
-0.018999624997377396,
-0.12801305949687958,
-0.011050928384065628,
-0.0323304757475853,
-0.07163302600383759,
0.005784087814390659,
-0.03297930583357811,
0.026086844503879547,
0.13849633932113647,
0.03378474339842796,
-0.020128153264522552,
-0.01767595484852791,
0.18825635313987732,
0.009644197300076485,
0.061342623084783554,
0.11137460172176361,
-0.03252236545085907,
0.055354129523038864,
0.044022202491760254,
0.02472526766359806,
-0.04457404837012291,
0.010903069749474525,
-0.018918586894869804,
-0.12211425602436066,
-0.17310425639152527,
-0.07204854488372803,
-0.0028626034036278725,
0.0020833383314311504,
0.020645178854465485,
0.034580014646053314,
0.0256218072026968,
0.038035932928323746,
-0.0298524871468544,
0.03078552708029747,
-0.009346328675746918,
0.08016270399093628,
0.02247834950685501,
-0.07362928986549377,
0.09063859283924103,
-0.06031530350446701,
0.016230415552854538,
0.1108926385641098,
-0.0655699372291565,
0.18830208480358124,
0.024955466389656067,
0.05458589643239975,
0.10046380013227463,
0.016818009316921234,
0.05668208375573158,
0.08685863018035889,
-0.04591045528650284,
0.008669992908835411,
-0.06150706112384796,
-0.052292175590991974,
-0.03833689168095589,
0.05324849113821983,
0.031827084720134735,
0.0170658677816391,
-0.11687424033880234,
0.01760704070329666,
-0.002117055468261242,
0.13992401957511902,
0.04841884598135948,
-0.11511168628931046,
-0.12207303941249847,
0.03560374304652214,
-0.047261662781238556,
-0.06283186376094818,
0.026890235021710396,
0.06249937787652016,
-0.15416108071804047,
0.045972079038619995,
-0.006041986867785454,
0.0649624764919281,
-0.08850788325071335,
0.013546738773584366,
-0.04846318066120148,
-0.0034593548625707626,
0.006470499560236931,
0.06809716671705246,
-0.1246904656291008,
0.10992881655693054,
0.022330567240715027,
0.04786773771047592,
-0.08092032372951508,
0.016499008983373642,
-0.009400819428265095,
0.10567940026521683,
0.11537577211856842,
0.045541130006313324,
-0.0504595972597599,
-0.014845856465399265,
-0.04604779928922653,
0.016674300655722618,
0.059622131288051605,
-0.07906485348939896,
0.060345280915498734,
0.007199857849627733,
0.006609919480979443,
-0.022725263610482216,
0.015922416001558304,
-0.12974558770656586,
-0.12125910818576813,
0.061783164739608765,
-0.0820675939321518,
-0.10348190367221832,
-0.0590193048119545,
-0.0613965168595314,
-0.04354745149612427,
0.20873674750328064,
-0.11808641254901886,
-0.08834247291088104,
-0.09829719364643097,
-0.011506617069244385,
0.04480666667222977,
-0.066990427672863,
0.04691963642835617,
-0.03780679404735565,
0.09727799892425537,
-0.04971139878034592,
-0.1087145060300827,
0.03390571102499962,
-0.11206702888011932,
-0.11743861436843872,
-0.045384347438812256,
0.1081283837556839,
0.1159568503499031,
0.039199769496917725,
0.014198467135429382,
0.01310278195887804,
0.0006223935633897781,
-0.11670448631048203,
0.01617409847676754,
0.1364264339208603,
-0.0005306452512741089,
0.07269230484962463,
-0.0591808557510376,
0.02478671818971634,
-0.014356886968016624,
0.000321095809340477,
0.13430707156658173,
0.18962256610393524,
-0.06652992218732834,
0.1812126338481903,
0.20468728244304657,
-0.10667897760868073,
-0.19411829113960266,
-0.055078186094760895,
0.0003390917554497719,
0.04492619261145592,
0.05263809487223625,
-0.18409480154514313,
0.0879998654127121,
0.03656953200697899,
-0.033820778131484985,
0.01156340166926384,
-0.23726344108581543,
-0.11214745789766312,
0.08841152489185333,
0.054757505655288696,
0.18687346577644348,
-0.07763257622718811,
-0.04126249998807907,
-0.016275372356176376,
-0.03782627731561661,
0.04849451780319214,
-0.041576892137527466,
0.08807551860809326,
0.008243069052696228,
-0.031581416726112366,
0.0037415456026792526,
-0.030649222433567047,
0.09741786122322083,
0.036208949983119965,
0.0205357875674963,
-0.06987245380878448,
-0.010688535869121552,
0.11688071489334106,
-0.03975632041692734,
0.09577085077762604,
0.045410141348838806,
0.07572987675666809,
-0.09297448396682739,
-0.059199802577495575,
-0.07379530370235443,
0.04560265317559242,
-0.04128885269165039,
-0.05533723905682564,
-0.06641101092100143,
0.05762564390897751,
0.037798501551151276,
0.00961318425834179,
-0.00036095641553401947,
-0.03699471428990364,
0.04372351989150047,
0.09714392572641373,
0.0820842981338501,
-0.03287569433450699,
-0.06940869987010956,
-0.051469363272190094,
-0.04816703870892525,
0.06637056916952133,
-0.09293735027313232,
0.022505884990096092,
0.02436722069978714,
0.013869178481400013,
0.09148160368204117,
0.03207963705062866,
-0.13988107442855835,
0.013063507154583931,
0.03332531452178955,
-0.12339471280574799,
-0.1100144162774086,
-0.02070445567369461,
0.0348905473947525,
-0.038822125643491745,
0.04959431663155556,
0.14728881418704987,
-0.03521459549665451,
-0.03257433697581291,
-0.04832788556814194,
0.03816808760166168,
-0.021096330136060715,
0.045866481959819794,
0.062440868467092514,
0.030243201181292534,
-0.0714220404624939,
0.07323087751865387,
0.04003102332353592,
-0.03262675926089287,
0.041142433881759644,
0.0420430563390255,
-0.09136803448200226,
-0.07714319229125977,
-0.06118469685316086,
0.09019815921783447,
-0.023971980437636375,
-0.04473006725311279,
0.0023760609328746796,
-0.08242551982402802,
0.06684473156929016,
0.07940767705440521,
0.046766247600317,
0.03907398134469986,
-0.08730556070804596,
0.017381826415657997,
-0.052120648324489594,
0.035260479897260666,
-0.029653845354914665,
-0.006020119413733482,
-0.051406823098659515,
0.07359085977077484,
0.06532826274633408,
0.09805134683847427,
-0.035254862159490585,
-0.07579924911260605,
-0.0813889130949974,
-0.01285308413207531,
-0.06549892574548721,
-0.03094700537621975,
-0.07906949520111084,
-0.008214699104428291,
0.0017268205992877483,
-0.006023863330483437,
0.020813124254345894,
0.034031011164188385,
-0.0433826744556427,
-0.016458161175251007,
-0.038788873702287674,
0.03633034974336624,
-0.06335467100143433,
0.006382027640938759,
0.01853295788168907,
-0.03711593523621559,
0.09302644431591034,
0.038142140954732895,
-0.009642316028475761,
0.04776306822896004,
-0.01997741125524044,
0.033986903727054596,
-0.022356536239385605,
0.0024139201268553734,
-0.024310151115059853,
-0.11095690727233887,
-0.00512710539624095,
0.004425769671797752,
-0.02475932240486145,
0.010771205648779869,
0.06104744225740433,
-0.07200747728347778,
0.0872076153755188,
0.045460402965545654,
-0.03399024158716202,
-0.071687251329422,
0.03879501670598984,
-0.013662293553352356,
0.030011318624019623,
0.06901675462722778,
-0.03202606365084648,
0.05340363830327988,
-0.09912723302841187,
-0.02755223587155342,
0.0029279275331646204,
-0.0012829042971134186,
-0.008867891505360603,
-0.05482778698205948,
-0.005159256048500538,
0.00836705882102251,
0.17615105211734772,
-0.01984328031539917,
0.03253911808133125,
0.015603922307491302,
0.005599921569228172,
0.05007919296622276,
-0.013407465070486069,
0.07177670300006866,
-0.004959668964147568,
-0.026133274659514427,
-0.01719791069626808,
0.03568098321557045,
0.006163978949189186,
0.002292873337864876,
0.148184671998024,
0.04684564471244812,
0.09191238880157471,
0.07593023031949997,
0.016428476199507713,
0.01739705726504326,
-0.1328604519367218,
-0.0905405580997467,
0.005247499793767929,
0.054267093539237976,
-0.017518090084195137,
0.01768755167722702,
0.0951073095202446,
-0.09208442270755768,
0.07074229419231415,
0.05244021117687225,
-0.048584505915641785,
-0.12891022861003876,
-0.18684902787208557,
-0.025378748774528503,
-0.02682509832084179,
-0.012725570239126682,
-0.08950598537921906,
0.018457483500242233,
0.08440013229846954,
0.021950727328658104,
-0.00849614292383194,
0.0883685052394867,
-0.10872962325811386,
-0.032773032784461975,
0.04299437999725342,
-0.02358892373740673,
0.017200427129864693,
0.0463264137506485,
0.019729284569621086,
-0.005136091262102127,
0.04742665961384773,
0.041695643216371536,
0.04486853629350662,
0.028256230056285858,
0.049059152603149414,
-0.02464689500629902,
-0.07441803067922592,
-0.03172895684838295,
-0.006538339890539646,
0.05575735494494438,
0.12598155438899994,
0.024706922471523285,
-0.0663788765668869,
0.006776909809559584,
0.11193305253982544,
-0.03283262997865677,
-0.05323527380824089,
-0.1085272878408432,
0.2395750731229782,
0.0238967165350914,
0.004191592801362276,
-0.002094491384923458,
-0.04721660912036896,
0.0034392215311527252,
0.2100660353899002,
0.22426685690879822,
0.003441575448960066,
-0.00788703840225935,
0.00946806464344263,
-0.011673208326101303,
0.0383036844432354,
0.145440936088562,
0.0017501898109912872,
0.25807905197143555,
-0.046193502843379974,
0.03957854211330414,
-0.04220547154545784,
-0.039103858172893524,
-0.0992511510848999,
0.0779682919383049,
-0.012382950633764267,
0.006097725126892328,
-0.03312668949365616,
0.07173647731542587,
-0.038018401712179184,
-0.17851103842258453,
0.003834277391433716,
-0.004708799533545971,
-0.06251153349876404,
0.01542296539992094,
-0.000639796257019043,
0.01936551369726658,
0.08297131955623627,
-0.016185790300369263,
-0.008067168295383453,
0.13073976337909698,
0.019941870123147964,
-0.09971917420625687,
-0.06009073555469513,
0.11802271753549576,
0.014656458050012589,
0.1459158957004547,
0.011132043786346912,
0.08006404340267181,
0.08518116921186447,
0.02185523509979248,
-0.09403099119663239,
0.040726304054260254,
-0.018571536988019943,
-0.030407126992940903,
0.006171840708702803,
0.11067567020654678,
-0.009974896907806396,
0.05830255150794983,
0.02935490384697914,
-0.09292042255401611,
0.06023600697517395,
0.007906798273324966,
-0.03868193179368973,
-0.07909558713436127,
0.08045075088739395,
-0.09046950936317444,
0.15542420744895935,
0.12065386772155762,
-0.014760109595954418,
-0.044272467494010925,
-0.026547124609351158,
0.017502805218100548,
0.00036459462717175484,
0.06004846841096878,
-0.023987559601664543,
-0.1362190842628479,
0.01850198395550251,
-0.08617449551820755,
0.028995081782341003,
-0.24516409635543823,
-0.09129069745540619,
0.02948453091084957,
-0.017650475725531578,
-0.016075000166893005,
0.051370520144701004,
0.04548260197043419,
0.0247145127505064,
-0.03519856184720993,
0.017564907670021057,
-0.03833676129579544,
0.059522178024053574,
-0.10867740958929062,
-0.09194716811180115
] |
null | null | transformers | # MultiBERTs Seed 2 Checkpoint 2000k (uncased)
Seed 2 intermediate checkpoint 2000k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multiberts-seed-2). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
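As a hedged sketch of what using the raw model for both pretraining objectives can look like, the snippet below loads the checkpoint with `BertForPreTraining` and reads out the MLM and NSP heads directly; the repository id is the one this checkpoint is published under, and the example sentences are arbitrary.
```python
import torch
from transformers import BertTokenizer, BertForPreTraining

name = "MultiBertGunjanPatrick/multiberts-seed-2-2000k"
tokenizer = BertTokenizer.from_pretrained(name)
model = BertForPreTraining.from_pretrained(name)

# A masked sentence pair: the MLM head fills in [MASK], the NSP head scores the pair.
inputs = tokenizer("The capital of France is [MASK].", "It lies on the Seine.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
predicted_id = int(outputs.prediction_logits[0, mask_pos].argmax())
print(tokenizer.convert_ids_to_tokens(predicted_id))   # MLM guess for the masked word
print(outputs.seq_relationship_logits.softmax(-1))     # NSP scores: [is next, is random]
```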
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# Full repository id of this intermediate checkpoint on the Hugging Face Hub.
checkpoint = "MultiBertGunjanPatrick/multiberts-seed-2-2000k"
tokenizer = BertTokenizer.from_pretrained(checkpoint)
model = BertModel.from_pretrained(checkpoint)

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
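The token-level features are then available as `output.last_hidden_state` (a tensor of shape `[batch_size, sequence_length, 768]` for this BERT-base-sized checkpoint), and `output.pooler_output` holds a pooled representation of the `[CLS]` token.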
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of the bias of this particular
checkpoint, please try it out with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
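A minimal adaptation of that snippet to this checkpoint might look like the sketch below; it assumes the `fill-mask` pipeline can load the MLM head from this pretraining checkpoint (the unused NSP weights are simply dropped), and the prompts are only illustrative.
```python
from transformers import pipeline

# Bias probe in the spirit of the bert-base-uncased card.
unmasker = pipeline("fill-mask", model="MultiBertGunjanPatrick/multiberts-seed-2-2000k")
print(unmasker("The man worked as a [MASK]."))
print(unmasker("The woman worked as a [MASK]."))
```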
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
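In practice the tokenizer builds this format automatically when given a sentence pair; a small sketch, assuming the WordPiece tokenizer shipped with this checkpoint (the example sentences are arbitrary):
```python
from transformers import BertTokenizer

# The pair-encoding path produces exactly the [CLS] ... [SEP] ... [SEP] layout above.
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-2-2000k")
pair = tokenizer("The cat sat on the mat.", "Then it fell asleep.")
print(tokenizer.decode(pair["input_ids"]))
# -> [CLS] the cat sat on the mat. [SEP] then it fell asleep. [SEP]
```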
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a short sketch of these rules is given after the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
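A minimal sketch of these 80/10/10 rules, written purely for illustration (the `mask_tokens` helper below is hypothetical and does not reproduce the original training pipeline):
```python
import random

def mask_tokens(token_ids, mask_token_id, vocab_size, mlm_probability=0.15):
    """Apply the 15% / 80-10-10 masking rules to a list of token ids."""
    inputs, labels = list(token_ids), [-100] * len(token_ids)  # -100: position ignored by the MLM loss
    for i, token_id in enumerate(token_ids):
        if random.random() < mlm_probability:       # 15% of tokens are selected
            labels[i] = token_id                    # the model must predict the original token here
            roll = random.random()
            if roll < 0.8:                          # 80% of selected tokens -> [MASK]
                inputs[i] = mask_token_id
            elif roll < 0.9:                        # 10% -> a random token
                inputs[i] = random.randrange(vocab_size)
            # remaining 10%: the token is left as is
    return inputs, labels
```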
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
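Expressed with current PyTorch/`transformers` helpers, the hyperparameters above correspond roughly to the following sketch; the original training used TPU tooling, and `AdamW` is assumed here as the decoupled-weight-decay variant of Adam:
```python
import torch
from transformers import BertConfig, BertForPreTraining, get_linear_schedule_with_warmup

model = BertForPreTraining(BertConfig())           # randomly initialised BERT-base architecture
optimizer = torch.optim.AdamW(                     # Adam with decoupled weight decay
    model.parameters(),
    lr=1e-4,                                       # peak learning rate
    betas=(0.9, 0.999),                            # beta_1, beta_2
    weight_decay=0.01,
)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10_000,                       # linear warmup ...
    num_training_steps=2_000_000,                  # ... then linear decay over two million steps
)
```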
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-2"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-2-2000k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-2",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 2 Checkpoint 2000k (uncased)
Seed 2 intermediate checkpoint 2000k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 2 Checkpoint 2000k (uncased)\nSeed 2 intermediate checkpoint 2000k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 2 Checkpoint 2000k (uncased)\nSeed 2 intermediate checkpoint 2000k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 2 Checkpoint 2000k (uncased)\nSeed 2 intermediate checkpoint 2000k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.0825861245393753,
0.008369985036551952,
-0.0022379120346158743,
0.06696707755327225,
0.08136771619319916,
0.0031437785364687443,
0.11691185086965561,
0.050221432000398636,
-0.032314542680978775,
0.022469215095043182,
0.09490957856178284,
0.025009777396917343,
0.04214545339345932,
0.06444068253040314,
0.09525889158248901,
-0.2594873607158661,
0.05059616267681122,
-0.06405734270811081,
0.053740255534648895,
0.07494845986366272,
0.1020343154668808,
-0.07134398072957993,
0.06034868210554123,
0.039227619767189026,
-0.08659090846776962,
-0.018627768382430077,
-0.014117577113211155,
-0.03400560840964317,
0.09948859363794327,
0.07064592838287354,
0.0644659772515297,
0.0035151075571775436,
0.05972069874405861,
-0.08873453736305237,
0.01602187007665634,
0.0444633886218071,
0.0006664376705884933,
0.027353845536708832,
-0.0081072598695755,
0.015384554862976074,
0.11091049760580063,
0.040004946291446686,
0.07774972915649414,
0.032674629241228104,
-0.09432590007781982,
-0.11361183971166611,
-0.08191107213497162,
0.10609234869480133,
0.05308555066585541,
0.037655267864465714,
-0.004266854375600815,
0.0751156136393547,
-0.028492316603660583,
0.07305888086557388,
0.10586869716644287,
-0.25522851943969727,
-0.010094047524034977,
0.06536237895488739,
0.043715737760066986,
0.04347115755081177,
0.012650084681808949,
0.026190074160695076,
0.0038651302456855774,
0.03988499194383621,
0.02734345942735672,
-0.02367573231458664,
0.10988417267799377,
-0.04426318407058716,
-0.15013407170772552,
-0.04412335902452469,
0.11867251247167587,
-0.008214427158236504,
-0.12520352005958557,
-0.09760066866874695,
-0.03178706392645836,
0.12043005228042603,
-0.002368604764342308,
-0.018858149647712708,
-0.002934496384114027,
0.011991797015070915,
0.02109123207628727,
-0.10050314664840698,
-0.08695369958877563,
-0.03116416558623314,
-0.03972246125340462,
0.1347416192293167,
0.045780401676893234,
0.0530393123626709,
-0.035342052578926086,
0.08580739796161652,
-0.12092725187540054,
-0.03852458670735359,
-0.055324211716651917,
-0.08081595599651337,
-0.021741170436143875,
0.010770968161523342,
-0.03155659884214401,
-0.08847939223051071,
-0.05909590423107147,
0.11516772210597992,
0.03694259002804756,
0.02718207985162735,
-0.004207291640341282,
0.04326799511909485,
0.07300705462694168,
0.10008686780929565,
-0.0375780388712883,
0.05240587890148163,
0.03563285246491432,
-0.025092441588640213,
0.05881049484014511,
-0.051347047090530396,
-0.10319721698760986,
0.07609869539737701,
0.00045025162398815155,
0.03723197430372238,
0.027470126748085022,
0.031077886000275612,
-0.010961096733808517,
-0.07250253856182098,
0.15992958843708038,
-0.07793019711971283,
-0.007322678342461586,
-0.015534688718616962,
0.010213956236839294,
0.04183117300271988,
0.0345795601606369,
-0.002444812096655369,
-0.045272208750247955,
-0.008557305671274662,
-0.05421818792819977,
-0.023406313732266426,
-0.05667904019355774,
-0.11775560677051544,
0.0012317653745412827,
-0.03569349646568298,
-0.03224226459860802,
-0.1393454372882843,
-0.21649016439914703,
-0.017587672919034958,
0.06637442111968994,
-0.0014603999443352222,
-0.010994765907526016,
0.02352326363325119,
0.016851989552378654,
-0.020614981651306152,
0.010918855667114258,
-0.042066097259521484,
0.00291373860090971,
-0.0052401404827833176,
-0.038301385939121246,
0.054015882313251495,
-0.04694633558392525,
0.023156868293881416,
-0.0687820240855217,
0.02232120931148529,
-0.20170052349567413,
0.08947108685970306,
-0.030002359300851822,
-0.004023412242531776,
-0.03803647682070732,
-0.040708377957344055,
0.005843291059136391,
0.04556870087981224,
-0.008438097313046455,
0.11307543516159058,
-0.137980118393898,
-0.04972870647907257,
0.18283969163894653,
-0.1607799232006073,
-0.0011437870562076569,
0.09720578789710999,
-0.05196474492549896,
0.06098446249961853,
0.13221067190170288,
0.09781147539615631,
0.08035245537757874,
-0.06925292313098907,
0.015476304106414318,
0.061573728919029236,
-0.06803017109632492,
0.056120190769433975,
0.09359577298164368,
-0.024981273338198662,
-0.135751873254776,
0.028717409819364548,
-0.08025458455085754,
-0.009668039157986641,
-0.025732751935720444,
-0.02182389236986637,
0.006874511018395424,
-0.03888429328799248,
0.02902614325284958,
0.004953407682478428,
0.019620364531874657,
-0.03706125169992447,
-0.08243059366941452,
0.03625230863690376,
0.07678332924842834,
-0.07078962028026581,
0.04423413425683975,
-0.06922069936990738,
0.055248990654945374,
-0.07440412044525146,
-0.005841999314725399,
-0.16862419247627258,
-0.02387670800089836,
0.04578692466020584,
-0.054243870079517365,
0.052183039486408234,
0.09002914279699326,
0.002537005115300417,
0.12169024348258972,
-0.03891689330339432,
0.0060098255053162575,
-0.010467225685715675,
-0.009862356819212437,
-0.050499409437179565,
-0.12089253962039948,
-0.08028236776590347,
-0.06927596032619476,
0.10665397346019745,
-0.07263312488794327,
0.02841912768781185,
-0.06951112300157547,
-0.020727943629026413,
-0.008129909634590149,
-0.060358017683029175,
-0.005902966484427452,
0.007324879989027977,
-0.030160335823893547,
-0.04635491222143173,
0.047169677913188934,
0.05335725098848343,
-0.06409703195095062,
0.07808209955692291,
-0.1011456549167633,
-0.0641169399023056,
0.054004184901714325,
0.019502239301800728,
-0.07812227308750153,
0.09054403007030487,
-0.019248921424150467,
-0.012547880411148071,
-0.05901443958282471,
-0.041382431983947754,
0.1968197226524353,
-0.0224253349006176,
0.10217165946960449,
-0.09389568120241165,
0.0010198847157880664,
0.025987640023231506,
-0.04909716546535492,
-0.01784443110227585,
0.057115763425827026,
0.04798305034637451,
-0.19384196400642395,
0.015092022716999054,
0.049459367990493774,
0.0725453570485115,
0.11058005690574646,
0.028020266443490982,
-0.02457820251584053,
-0.04934368282556534,
-0.00922737643122673,
0.006188257597386837,
0.0558265820145607,
-0.025301557034254074,
-0.00709546497091651,
0.032901257276535034,
0.05670085921883583,
0.019064679741859436,
-0.08438776433467865,
0.03697720915079117,
0.06676769256591797,
-0.017121706157922745,
-0.04048039764165878,
-0.02684861794114113,
-0.05865175649523735,
0.06363794952630997,
0.054303623735904694,
0.040222786366939545,
0.02587813511490822,
-0.015499649569392204,
-0.13810575008392334,
0.18708285689353943,
-0.11599753797054291,
-0.25949332118034363,
-0.10760386288166046,
-0.05558009445667267,
-0.025373773649334908,
0.042022764682769775,
0.05710219591856003,
-0.023571684956550598,
-0.04127398133277893,
-0.11533402651548386,
0.059805989265441895,
-0.06659099459648132,
-0.02957173064351082,
-0.008748149499297142,
-0.0493735745549202,
-0.018566347658634186,
-0.12752282619476318,
-0.010170063003897667,
-0.03202054649591446,
-0.07516693323850632,
0.004785526543855667,
-0.034090008586645126,
0.027223777025938034,
0.1373610496520996,
0.033739060163497925,
-0.018302781507372856,
-0.01652410998940468,
0.18679356575012207,
0.010605478659272194,
0.059577394276857376,
0.11058388650417328,
-0.03462105989456177,
0.054510485380887985,
0.045360855758190155,
0.024047795683145523,
-0.04463639855384827,
0.01065039075911045,
-0.019895456731319427,
-0.12164859473705292,
-0.17276562750339508,
-0.07230988889932632,
-0.0033713788725435734,
-0.0010635475628077984,
0.01924850419163704,
0.036513350903987885,
0.022390644997358322,
0.03891289606690407,
-0.030300531536340714,
0.02639390155673027,
-0.0088106170296669,
0.08107179403305054,
0.021193012595176697,
-0.07382459938526154,
0.0911601334810257,
-0.060574304312467575,
0.017529930919408798,
0.11039628833532333,
-0.06371874362230301,
0.18940840661525726,
0.025875581428408623,
0.055526215583086014,
0.10224846005439758,
0.01870274543762207,
0.05646096542477608,
0.08635859936475754,
-0.04752637818455696,
0.0091309929266572,
-0.06223083287477493,
-0.05174386128783226,
-0.03742736577987671,
0.05418063700199127,
0.03088030219078064,
0.017393551766872406,
-0.11640222370624542,
0.017580609768629074,
-0.0027588922530412674,
0.136807382106781,
0.049047552049160004,
-0.11708620190620422,
-0.12238612025976181,
0.036277830600738525,
-0.0453607439994812,
-0.06414416432380676,
0.026001224294304848,
0.060276687145233154,
-0.153680682182312,
0.043425053358078,
-0.00586201436817646,
0.06583871692419052,
-0.09092730283737183,
0.012957792729139328,
-0.04835132509469986,
-0.0022344356402754784,
0.00818677805364132,
0.07129864394664764,
-0.1287689208984375,
0.11045441031455994,
0.020922768861055374,
0.0475962869822979,
-0.08032209426164627,
0.017406199127435684,
-0.01112313475459814,
0.10530521720647812,
0.11541959643363953,
0.04532976448535919,
-0.054613206535577774,
-0.01379114855080843,
-0.04519253596663475,
0.01894228905439377,
0.06121516227722168,
-0.07828761637210846,
0.06015629321336746,
0.00780089944601059,
0.007861324585974216,
-0.020841337740421295,
0.018135156482458115,
-0.13160927593708038,
-0.12374952435493469,
0.06518648564815521,
-0.07795799523591995,
-0.09962209314107895,
-0.0597015842795372,
-0.062107402831315994,
-0.04473792016506195,
0.216745063662529,
-0.12067731469869614,
-0.08846701681613922,
-0.0991201102733612,
-0.007174335420131683,
0.047185927629470825,
-0.06729234755039215,
0.04767797887325287,
-0.04012130945920944,
0.09664550423622131,
-0.049165986478328705,
-0.10940825939178467,
0.0328715518116951,
-0.11121577769517899,
-0.1164875328540802,
-0.04430022090673447,
0.10601416230201721,
0.11356060951948166,
0.03855687752366066,
0.013756691478192806,
0.014593971893191338,
-0.00032419897615909576,
-0.11602428555488586,
0.012321436777710915,
0.13916251063346863,
-0.0037483610212802887,
0.07103116810321808,
-0.06172281503677368,
0.0291299931704998,
-0.013746654614806175,
0.000006778165698051453,
0.13691098988056183,
0.18791505694389343,
-0.06684626638889313,
0.18061771988868713,
0.1998465210199356,
-0.10692308098077774,
-0.19332826137542725,
-0.05708451569080353,
0.0027541508898139,
0.046256549656391144,
0.050240423530340195,
-0.1862446665763855,
0.08679819852113724,
0.034397076815366745,
-0.03472057729959488,
0.00822385773062706,
-0.23511426150798798,
-0.11032824218273163,
0.08810898661613464,
0.053761228919029236,
0.1866265833377838,
-0.0779770016670227,
-0.04131481796503067,
-0.017017776146531105,
-0.032150499522686005,
0.04943035542964935,
-0.03725593909621239,
0.08957210183143616,
0.008896414190530777,
-0.029505232349038124,
0.0032131318002939224,
-0.030352672562003136,
0.09826374053955078,
0.03675782307982445,
0.01966944709420204,
-0.06913229823112488,
-0.011282222345471382,
0.11224371939897537,
-0.03992182016372681,
0.0970538854598999,
0.046932563185691833,
0.0775400847196579,
-0.09267558157444,
-0.05883466452360153,
-0.07280100882053375,
0.04445155709981918,
-0.04028000682592392,
-0.054835040122270584,
-0.06543044745922089,
0.05872050300240517,
0.03882119059562683,
0.011469945311546326,
0.001756938174366951,
-0.03605235740542412,
0.03946780785918236,
0.091826893389225,
0.08055029809474945,
-0.029016349464654922,
-0.06744522601366043,
-0.0527021549642086,
-0.04784823954105377,
0.06548921763896942,
-0.09526213258504868,
0.021683616563677788,
0.024502113461494446,
0.01367096696048975,
0.09053806960582733,
0.03257308155298233,
-0.14019399881362915,
0.012370584532618523,
0.0314403660595417,
-0.12229157984256744,
-0.10866513103246689,
-0.019848935306072235,
0.037046220153570175,
-0.0385577455163002,
0.05117035657167435,
0.14880147576332092,
-0.035810962319374084,
-0.03234099596738815,
-0.048217687755823135,
0.03794857859611511,
-0.02069498971104622,
0.04509834200143814,
0.06291428208351135,
0.02907240390777588,
-0.07190179824829102,
0.07415701448917389,
0.04151594638824463,
-0.02990681678056717,
0.042397793382406235,
0.036981403827667236,
-0.09203474968671799,
-0.07757455855607986,
-0.058747611939907074,
0.08934067189693451,
-0.021385541185736656,
-0.04702513664960861,
0.002364007756114006,
-0.08127656579017639,
0.06600013375282288,
0.0762559324502945,
0.04863961040973663,
0.039305731654167175,
-0.08727581799030304,
0.015539131127297878,
-0.051161028444767,
0.03588381037116051,
-0.027128146961331367,
-0.005971476435661316,
-0.05318517982959747,
0.06706562638282776,
0.06490495800971985,
0.096206896007061,
-0.03448603302240372,
-0.07664133608341217,
-0.08159024268388748,
-0.0132726039737463,
-0.06062149256467819,
-0.030745789408683777,
-0.07497353851795197,
-0.00789882242679596,
0.0027615393046289682,
-0.007737409323453903,
0.020824749022722244,
0.03591752052307129,
-0.04420296102762222,
-0.018366454169154167,
-0.041466400027275085,
0.03686105087399483,
-0.06239515170454979,
0.004897194914519787,
0.017151394858956337,
-0.03653208166360855,
0.09259520471096039,
0.0382358655333519,
-0.010570438578724861,
0.047795750200748444,
-0.029814086854457855,
0.03354104980826378,
-0.02147238701581955,
0.0037296991795301437,
-0.024003643542528152,
-0.11189771443605423,
-0.00524194585159421,
0.00709957629442215,
-0.02410859614610672,
0.009620088152587414,
0.0643894150853157,
-0.0708964541554451,
0.0818459689617157,
0.043035708367824554,
-0.031289417296648026,
-0.070338673889637,
0.039583154022693634,
-0.015846779569983482,
0.03424753621220589,
0.06941759586334229,
-0.03173072263598442,
0.05486365407705307,
-0.09776157140731812,
-0.02612115815281868,
0.0026497733779251575,
-0.0026636160910129547,
-0.015304101631045341,
-0.056344933807849884,
-0.004749645479023457,
0.008592532016336918,
0.1767849624156952,
-0.02168447896838188,
0.034599266946315765,
0.013683428056538105,
0.00895230658352375,
0.053545448929071426,
-0.012899037450551987,
0.07326522469520569,
-0.0031335344538092613,
-0.023836830630898476,
-0.015469763427972794,
0.036167439073324203,
0.0044518131762743,
-0.0006714481860399246,
0.14268381893634796,
0.049852464348077774,
0.09210880845785141,
0.07629353553056717,
0.01699172519147396,
0.018638255074620247,
-0.12598925828933716,
-0.08584021031856537,
0.005662538111209869,
0.05436587333679199,
-0.018846629187464714,
0.019965797662734985,
0.09140091389417648,
-0.09148469567298889,
0.06951432675123215,
0.0536038801074028,
-0.047348953783512115,
-0.12769818305969238,
-0.19206243753433228,
-0.029257595539093018,
-0.02711261250078678,
-0.012404452078044415,
-0.09001018106937408,
0.017257463186979294,
0.08577117323875427,
0.022020390257239342,
-0.00778230931609869,
0.08671258389949799,
-0.10868732631206512,
-0.032025475054979324,
0.04098311811685562,
-0.025827649980783463,
0.018028076738119125,
0.04780808463692665,
0.019266638904809952,
-0.002137713134288788,
0.048792481422424316,
0.0432933010160923,
0.045736946165561676,
0.03171832859516144,
0.04907526448369026,
-0.024939384311437607,
-0.07455715537071228,
-0.032794903963804245,
-0.0058172158896923065,
0.05564846098423004,
0.128325954079628,
0.024701813235878944,
-0.06897789239883423,
0.006965939886868,
0.10840155184268951,
-0.03186182305216789,
-0.05366078019142151,
-0.10867118835449219,
0.23969589173793793,
0.020642172545194626,
0.0032587298192083836,
-0.002179078757762909,
-0.04691320285201073,
0.0027113109827041626,
0.2105402946472168,
0.22294244170188904,
0.0013450714759528637,
-0.009227508679032326,
0.010622583329677582,
-0.011417476460337639,
0.03597995638847351,
0.1439526379108429,
0.001276448369026184,
0.25294578075408936,
-0.04452242702245712,
0.036336056888103485,
-0.04255764186382294,
-0.03889851272106171,
-0.09889468550682068,
0.07675870507955551,
-0.01263340376317501,
0.005076693370938301,
-0.0341176800429821,
0.0719924345612526,
-0.036899492144584656,
-0.1776505559682846,
0.001392054371535778,
-0.0045493110083043575,
-0.06430977582931519,
0.015318579971790314,
0.0026248032227158546,
0.019593682140111923,
0.08280842006206512,
-0.01841217651963234,
-0.006469723302870989,
0.12865495681762695,
0.02020445093512535,
-0.0986255332827568,
-0.0564228817820549,
0.1173669844865799,
0.013026281259953976,
0.1447935551404953,
0.012626424431800842,
0.08332518488168716,
0.08596977591514587,
0.021075842902064323,
-0.09481529891490936,
0.04128889739513397,
-0.01875557005405426,
-0.030777720734477043,
0.008109372109174728,
0.11285541951656342,
-0.010839702561497688,
0.060181230306625366,
0.02940719574689865,
-0.09023328870534897,
0.061664242297410965,
0.00820733979344368,
-0.03960595652461052,
-0.08004308491945267,
0.08244955539703369,
-0.09145514667034149,
0.15565544366836548,
0.120577871799469,
-0.014313199557363987,
-0.04412643611431122,
-0.028126219287514687,
0.02084435150027275,
-0.0012729750014841557,
0.061450835317373276,
-0.02322900854051113,
-0.1348511278629303,
0.018574755638837814,
-0.08883559703826904,
0.02816036157310009,
-0.24606281518936157,
-0.09041199088096619,
0.027440227568149567,
-0.01823221519589424,
-0.014885324984788895,
0.051755569875240326,
0.04606184363365173,
0.021778995171189308,
-0.03571803867816925,
0.01724691316485405,
-0.037249401211738586,
0.05632447823882103,
-0.10722111165523529,
-0.0918465256690979
] |
null | null | transformers | # MultiBERTs Seed 2 Checkpoint 200k (uncased)
Seed 2 intermediate checkpoint 200k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multiberts-seed-2). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
Each MultiBERTs model is a transformer model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
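For instance, a minimal sketch of loading this checkpoint for sequence-classification fine-tuning might look as follows (this snippet is not part of the original card; the full Hub repository id, the label count and the example sentence are assumptions chosen only for illustration):

```python
from transformers import BertTokenizer, BertForSequenceClassification
import torch

model_name = "MultiBertGunjanPatrick/multiberts-seed-2-200k"  # assumed full Hub id
tokenizer = BertTokenizer.from_pretrained(model_name)
# A fresh, randomly initialized classification head is added on top of the pretrained encoder.
model = BertForSequenceClassification.from_pretrained(model_name, num_labels=2)

batch = tokenizer(["This checkpoint is easy to fine-tune."], return_tensors="pt")
with torch.no_grad():
    logits = model(**batch).logits  # shape (1, 2); only meaningful after fine-tuning
```

The classification head is untrained at this point, so the logits only become useful once the model has been fine-tuned on labeled data.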
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-2-200k')
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-2-200k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
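As a rough illustration of the kind of probe that snippet performs, something along these lines could be used (this is an adaptation, not the exact snippet from the bert-base-uncased card; the prompts and the use of the fill-mask pipeline with this checkpoint's full Hub id are assumptions for illustration):

```python
from transformers import pipeline

# Load the masked-language-modeling head of this checkpoint in a fill-mask pipeline.
unmasker = pipeline("fill-mask", model="MultiBertGunjanPatrick/multiberts-seed-2-200k")

# Compare the top predictions for otherwise identical prompts.
for prompt in ["The man worked as a [MASK].", "The woman worked as a [MASK.]".replace(".]", "].")]:
    print(prompt, [p["token_str"] for p in unmasker(prompt)])
```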
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
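A small, simplified sketch of this 80/10/10 masking rule is shown below (this is not the original preprocessing code; the `[MASK]` id, vocabulary size and special-token handling are simplified assumptions):

```python
import random

def mask_tokens(token_ids, mask_id=103, vocab_size=30000, mlm_prob=0.15):
    """Illustrative 80/10/10 masking, following the description above."""
    labels = [-100] * len(token_ids)      # -100 marks positions ignored by the MLM loss
    masked = list(token_ids)
    for i, tok in enumerate(token_ids):
        if random.random() < mlm_prob:    # 15% of the tokens are selected
            labels[i] = tok               # the model must predict the original token here
            r = random.random()
            if r < 0.8:                   # 80%: replace with [MASK]
                masked[i] = mask_id
            elif r < 0.9:                 # 10%: replace with a random token
                masked[i] = random.randrange(vocab_size)
            # remaining 10%: keep the token unchanged
    return masked, labels
```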
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
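For reference, an approximate PyTorch equivalent of this optimizer and schedule could be set up as below (the original pretraining used TPU-specific tooling; `BertForPreTraining` with a default config is only a stand-in here, and `AdamW` approximates the Adam-with-weight-decay setup described above):

```python
import torch
from transformers import BertConfig, BertForPreTraining, get_linear_schedule_with_warmup

model = BertForPreTraining(BertConfig())   # stand-in for the actual pretraining model

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1e-4,
    betas=(0.9, 0.999),
    weight_decay=0.01,
)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10_000,               # linear warmup for the first 10,000 steps
    num_training_steps=2_000_000,          # two million steps in total, decayed linearly after warmup
)
```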
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-2"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-2-200k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-2",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 2 Checkpoint 200k (uncased)
Seed 2 intermediate checkpoint 200k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
Each MultiBERTs model is a transformer model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
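A minimal example, mirroring the snippet shown earlier in this card (the full Hub repository id is assumed):

```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-2-200k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-2-200k")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="pt")
output = model(**encoded_input)  # output.last_hidden_state holds the token features
```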
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 2 Checkpoint 200k (uncased)\nSeed 2 intermediate checkpoint 200k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 2 Checkpoint 200k (uncased)\nSeed 2 intermediate checkpoint 200k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 2 Checkpoint 200k (uncased)\nSeed 2 intermediate checkpoint 200k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08359486609697342,
0.00597046222537756,
-0.0023389162961393595,
0.06779128313064575,
0.0831550732254982,
0.0027657062746584415,
0.1181100383400917,
0.05170527473092079,
-0.02614179626107216,
0.02492430806159973,
0.09134061634540558,
0.0288098081946373,
0.04332713782787323,
0.06839365512132645,
0.09745042771100998,
-0.25986209511756897,
0.04953420162200928,
-0.06374356895685196,
0.0495743528008461,
0.07499276101589203,
0.1006084531545639,
-0.07025782763957977,
0.060966212302446365,
0.03759167343378067,
-0.08599042147397995,
-0.018795819953083992,
-0.012267671525478363,
-0.03380917012691498,
0.10052904486656189,
0.06992293149232864,
0.061125703155994415,
0.0021463073790073395,
0.060797348618507385,
-0.0888383537530899,
0.01577729359269142,
0.04474087059497833,
0.0002544522285461426,
0.02695222944021225,
-0.006974702700972557,
0.01660442166030407,
0.11133195459842682,
0.03918054327368736,
0.07623859494924545,
0.0347493551671505,
-0.09411492943763733,
-0.11516131460666656,
-0.08008787781000137,
0.10562200099229813,
0.05407023802399635,
0.03840285539627075,
-0.005520280450582504,
0.07299644500017166,
-0.027313226833939552,
0.07121209800243378,
0.10606232285499573,
-0.25888946652412415,
-0.00918639451265335,
0.06437679380178452,
0.04425342381000519,
0.04333728551864624,
0.01402261946350336,
0.026238596066832542,
0.005219276994466782,
0.04184790328145027,
0.026731960475444794,
-0.023625332862138748,
0.11797511577606201,
-0.04667222872376442,
-0.15016257762908936,
-0.044056035578250885,
0.1187102198600769,
-0.007067890837788582,
-0.12622502446174622,
-0.09722476452589035,
-0.03331174701452255,
0.12319126725196838,
-0.0015554418787360191,
-0.01994786597788334,
-0.003793205600231886,
0.013880659826099873,
0.02402626723051071,
-0.09884949028491974,
-0.08709104359149933,
-0.030673641711473465,
-0.03855780512094498,
0.13160103559494019,
0.04614435136318207,
0.0531495176255703,
-0.03634517267346382,
0.08716323971748352,
-0.11718664318323135,
-0.03958902508020401,
-0.052870191633701324,
-0.08186942338943481,
-0.018846003338694572,
0.011012913659214973,
-0.02777264639735222,
-0.08767496794462204,
-0.05994366854429245,
0.11342562735080719,
0.03839109092950821,
0.027252372354269028,
-0.005006604827940464,
0.04388388991355896,
0.0725441426038742,
0.09645066410303116,
-0.03685828298330307,
0.05519166216254234,
0.0340292826294899,
-0.02340758964419365,
0.05938359722495079,
-0.05162223055958748,
-0.10369166731834412,
0.07430854439735413,
0.0008368203416466713,
0.03779257461428642,
0.02738753892481327,
0.028757065534591675,
-0.013798322528600693,
-0.07303258776664734,
0.1588810831308365,
-0.07725077122449875,
-0.006147683598101139,
-0.016109049320220947,
0.011371959000825882,
0.04382215440273285,
0.03298019617795944,
-0.005308862775564194,
-0.04648955911397934,
-0.007347722537815571,
-0.05525493621826172,
-0.023664042353630066,
-0.05448199808597565,
-0.11648104339838028,
0.00147975143045187,
-0.03812257945537567,
-0.033052291721105576,
-0.14147362112998962,
-0.2115001082420349,
-0.016531776636838913,
0.06541987508535385,
-0.0036128005012869835,
-0.012221056036651134,
0.024133779108524323,
0.017118318006396294,
-0.02017849124968052,
0.010867218486964703,
-0.04700961709022522,
0.0013603689149022102,
-0.004306456074118614,
-0.03694391995668411,
0.05763600766658783,
-0.04180995002388954,
0.02197531796991825,
-0.06934431195259094,
0.02360781468451023,
-0.20740696787834167,
0.08996591717004776,
-0.029462717473506927,
-0.004562246613204479,
-0.03895702585577965,
-0.04134112596511841,
0.005623526871204376,
0.045444801449775696,
-0.008284637704491615,
0.11675135791301727,
-0.1421671211719513,
-0.050793811678886414,
0.1889638602733612,
-0.1610671430826187,
0.0002771243453025818,
0.1000838428735733,
-0.049333613365888596,
0.061604347079992294,
0.13234826922416687,
0.09896108508110046,
0.08305110037326813,
-0.07176446169614792,
0.013284501619637012,
0.06315536051988602,
-0.06873662769794464,
0.05621590465307236,
0.09212895482778549,
-0.025966377928853035,
-0.1368115395307541,
0.02711339294910431,
-0.07435918599367142,
-0.010081791318953037,
-0.025975508615374565,
-0.021744174882769585,
0.0055319443345069885,
-0.03778813034296036,
0.03295819088816643,
0.003753608325496316,
0.01793689839541912,
-0.038524024188518524,
-0.08247948437929153,
0.033170364797115326,
0.07634326815605164,
-0.07007799297571182,
0.04385241121053696,
-0.07111095637083054,
0.05868072062730789,
-0.07518614083528519,
-0.004280639812350273,
-0.16728124022483826,
-0.022279007360339165,
0.04637046158313751,
-0.05400323495268822,
0.05148062855005264,
0.09451377391815186,
0.0022015722934156656,
0.12381526082754135,
-0.038509562611579895,
0.0042021917179226875,
-0.010862817987799644,
-0.011305105872452259,
-0.052175603806972504,
-0.12136572599411011,
-0.081180639564991,
-0.06827419251203537,
0.1023215800523758,
-0.07364223897457123,
0.02885536290705204,
-0.06925518810749054,
-0.01939750462770462,
-0.006747368723154068,
-0.059201329946517944,
-0.006254393607378006,
0.006968692876398563,
-0.028992991894483566,
-0.04657638818025589,
0.047288477420806885,
0.05187346786260605,
-0.06344251334667206,
0.07929547131061554,
-0.10360947996377945,
-0.062000080943107605,
0.054188765585422516,
0.01809919998049736,
-0.07734720408916473,
0.09323727339506149,
-0.01945255510509014,
-0.01315460167825222,
-0.06171776354312897,
-0.04345144331455231,
0.19148363173007965,
-0.02118772640824318,
0.10035018622875214,
-0.09204070270061493,
0.002376025076955557,
0.02788184955716133,
-0.05032963305711746,
-0.017652461305260658,
0.05807491019368172,
0.04946037009358406,
-0.19238749146461487,
0.014540690928697586,
0.054138801991939545,
0.0772758275270462,
0.10992883145809174,
0.028424544259905815,
-0.025171000510454178,
-0.049601390957832336,
-0.009497465565800667,
0.007028588559478521,
0.05514451861381531,
-0.02188996598124504,
-0.005851130001246929,
0.031977809965610504,
0.058361977338790894,
0.016640815883874893,
-0.08441294729709625,
0.03556204214692116,
0.06763526797294617,
-0.018445244058966637,
-0.03934643417596817,
-0.025721218436956406,
-0.05899570509791374,
0.06489919126033783,
0.053645819425582886,
0.03962522745132446,
0.02624041959643364,
-0.014558738097548485,
-0.13830292224884033,
0.18652433156967163,
-0.11517792195081711,
-0.25934338569641113,
-0.10962797701358795,
-0.050737038254737854,
-0.022264309227466583,
0.04354007542133331,
0.05751001834869385,
-0.027078421786427498,
-0.04003112018108368,
-0.1152767688035965,
0.062265198677778244,
-0.06524442136287689,
-0.030530638992786407,
-0.010904381051659584,
-0.048950016498565674,
-0.01707690954208374,
-0.12714770436286926,
-0.012154610827565193,
-0.02962951362133026,
-0.07667635381221771,
0.006072619929909706,
-0.03394666314125061,
0.026463180780410767,
0.13806232810020447,
0.03391110524535179,
-0.02036016806960106,
-0.015678023919463158,
0.1894644796848297,
0.011747436597943306,
0.06054861098527908,
0.11319682747125626,
-0.03200376033782959,
0.05597265809774399,
0.04621751606464386,
0.023885181173682213,
-0.04733622074127197,
0.012193924747407436,
-0.01765104942023754,
-0.12112721800804138,
-0.17457349598407745,
-0.07170586287975311,
-0.0022857394069433212,
0.005536140874028206,
0.020642122253775597,
0.03653818741440773,
0.01985464245080948,
0.039468854665756226,
-0.030613580718636513,
0.029158633202314377,
-0.011582896113395691,
0.08025309443473816,
0.022173721343278885,
-0.07383085042238235,
0.09143557399511337,
-0.06045547500252724,
0.016586245968937874,
0.11083345115184784,
-0.060988422483205795,
0.18966272473335266,
0.025901786983013153,
0.056232281029224396,
0.10038945823907852,
0.021010011434555054,
0.05540889501571655,
0.08663921803236008,
-0.04405991733074188,
0.00926828384399414,
-0.06237487122416496,
-0.051562901586294174,
-0.03833390027284622,
0.05385815352201462,
0.0323144793510437,
0.018264587968587875,
-0.11987245827913284,
0.023145871236920357,
-0.0026486366987228394,
0.13642951846122742,
0.05191487818956375,
-0.11478342115879059,
-0.12310216575860977,
0.03818349912762642,
-0.046001460403203964,
-0.06424452364444733,
0.02830618992447853,
0.06550043821334839,
-0.15484929084777832,
0.04214739426970482,
-0.006523808464407921,
0.06604617834091187,
-0.0923682451248169,
0.01407815981656313,
-0.05073385313153267,
-0.0037158839404582977,
0.0076044825837016106,
0.0714532732963562,
-0.1313868910074234,
0.10797391831874847,
0.020782630890607834,
0.0480111688375473,
-0.07979223132133484,
0.01625318080186844,
-0.011224696412682533,
0.1052289754152298,
0.11653034389019012,
0.04576350376009941,
-0.05995985120534897,
-0.017828956246376038,
-0.04755685478448868,
0.01962094195187092,
0.060162194073200226,
-0.08155815303325653,
0.05891621485352516,
0.006833445280790329,
0.006948302034288645,
-0.022841665893793106,
0.01804373785853386,
-0.13240650296211243,
-0.1205257773399353,
0.06316839903593063,
-0.07994966953992844,
-0.09798873215913773,
-0.0589156299829483,
-0.06246696040034294,
-0.03857389837503433,
0.21306663751602173,
-0.11839105188846588,
-0.08927197009325027,
-0.09672530740499496,
-0.011017758399248123,
0.046053849160671234,
-0.06797299534082413,
0.04524899646639824,
-0.03711594268679619,
0.09641734510660172,
-0.04896848648786545,
-0.1104661077260971,
0.03455331176519394,
-0.111615851521492,
-0.11722446978092194,
-0.04512100666761398,
0.1080276295542717,
0.11571040749549866,
0.03985792398452759,
0.0142442025244236,
0.012054407969117165,
0.002438712865114212,
-0.11540257185697556,
0.01689661480486393,
0.1367437243461609,
-0.0012295488268136978,
0.06960748881101608,
-0.0608513206243515,
0.029825326055288315,
-0.016677146777510643,
-0.0005685053765773773,
0.1345248818397522,
0.18757998943328857,
-0.06688643991947174,
0.17886453866958618,
0.20013365149497986,
-0.10693613439798355,
-0.19201762974262238,
-0.05601310729980469,
0.0007077818736433983,
0.04626762121915817,
0.05127548798918724,
-0.18631228804588318,
0.08758294582366943,
0.03402432054281235,
-0.0344768688082695,
0.017179038375616074,
-0.24011291563510895,
-0.11126773059368134,
0.08712606132030487,
0.053294409066438675,
0.19048532843589783,
-0.07795572280883789,
-0.041642144322395325,
-0.01747427135705948,
-0.037447839975357056,
0.05026128888130188,
-0.036443546414375305,
0.08945268392562866,
0.008441774174571037,
-0.028560971841216087,
0.004121643491089344,
-0.02899285964667797,
0.0986107587814331,
0.03887895494699478,
0.019188927486538887,
-0.07015739381313324,
-0.008844193071126938,
0.1080230176448822,
-0.03945270553231239,
0.09724777191877365,
0.043993838131427765,
0.0752914547920227,
-0.09486894309520721,
-0.0588422454893589,
-0.07362464815378189,
0.04402349889278412,
-0.041025348007678986,
-0.05569338798522949,
-0.0637984573841095,
0.05792182683944702,
0.03784603253006935,
0.011761230416595936,
-0.0033656656742095947,
-0.03529040887951851,
0.04353402927517891,
0.0911576896905899,
0.08170194923877716,
-0.029819481074810028,
-0.07260710746049881,
-0.05197456479072571,
-0.04805096983909607,
0.06829886138439178,
-0.0938226580619812,
0.021941116079688072,
0.025810454040765762,
0.011029492132365704,
0.08970320224761963,
0.032987020909786224,
-0.13978244364261627,
0.013563690707087517,
0.03219524025917053,
-0.12263935804367065,
-0.11546765267848969,
-0.0199739970266819,
0.036131467670202255,
-0.03813821077346802,
0.05235534533858299,
0.14924213290214539,
-0.03673440217971802,
-0.0335826650261879,
-0.04828908294439316,
0.03832696005702019,
-0.022737719118595123,
0.0469432957470417,
0.06355053186416626,
0.030619090422987938,
-0.07026240229606628,
0.07396966218948364,
0.03925897181034088,
-0.03453853353857994,
0.0436214879155159,
0.03774362802505493,
-0.09210014343261719,
-0.07798317074775696,
-0.058065734803676605,
0.08766785264015198,
-0.020627109333872795,
-0.045980826020240784,
0.0033619198948144913,
-0.08184631913900375,
0.0662534162402153,
0.07455354183912277,
0.047703180462121964,
0.039843104779720306,
-0.08830113708972931,
0.015408402308821678,
-0.05173703283071518,
0.03454872593283653,
-0.025037730112671852,
-0.006425149738788605,
-0.055381812155246735,
0.06974273175001144,
0.06348039954900742,
0.10042692720890045,
-0.03443188592791557,
-0.0763096734881401,
-0.0825180932879448,
-0.013662569224834442,
-0.06518958508968353,
-0.030078649520874023,
-0.0748792290687561,
-0.008803535252809525,
0.0018472587689757347,
-0.006316272541880608,
0.023666443303227425,
0.03502281755208969,
-0.04431693255901337,
-0.01859595626592636,
-0.040745362639427185,
0.036943525075912476,
-0.0634782612323761,
0.0054149329662323,
0.018458131700754166,
-0.03748011216521263,
0.09276548027992249,
0.03831091523170471,
-0.011328669264912605,
0.046743325889110565,
-0.02379981428384781,
0.03451283276081085,
-0.020458851009607315,
0.0019027581438422203,
-0.02395538240671158,
-0.11182433366775513,
-0.0046190363354980946,
0.004511581733822823,
-0.024051319807767868,
0.008924270048737526,
0.06199272722005844,
-0.07171459496021271,
0.08491741120815277,
0.04316204786300659,
-0.03372371941804886,
-0.07133553177118301,
0.03971328213810921,
-0.015565155074000359,
0.032257530838251114,
0.0702730268239975,
-0.03372550755739212,
0.055064525455236435,
-0.09878779947757721,
-0.02728571742773056,
0.004036117810755968,
-0.0016445517539978027,
-0.010740593075752258,
-0.05498842895030975,
-0.004699060693383217,
0.00751627329736948,
0.16797122359275818,
-0.022839367389678955,
0.036041464656591415,
0.01366962306201458,
0.0054448265582323074,
0.04826181009411812,
-0.01238294504582882,
0.0713503360748291,
-0.00463159941136837,
-0.02448611706495285,
-0.016406826674938202,
0.035132549703121185,
0.004928279668092728,
0.002335658296942711,
0.14414258301258087,
0.04605814814567566,
0.09029948711395264,
0.07662902772426605,
0.016583142802119255,
0.01922261156141758,
-0.13818176090717316,
-0.08900970965623856,
0.004768427461385727,
0.055233120918273926,
-0.017328383401036263,
0.01945563405752182,
0.09174316376447678,
-0.08988878130912781,
0.06945105642080307,
0.05018553510308266,
-0.04871217906475067,
-0.1294448971748352,
-0.19477751851081848,
-0.027597686275839806,
-0.02870715595781803,
-0.012690603733062744,
-0.0884762853384018,
0.017713967710733414,
0.0883488655090332,
0.021697910502552986,
-0.010154876857995987,
0.08800514787435532,
-0.10607387870550156,
-0.031545281410217285,
0.04240815341472626,
-0.025503400713205338,
0.016937192529439926,
0.04961790516972542,
0.021484380587935448,
-0.00476636178791523,
0.04880613833665848,
0.042518243193626404,
0.04617860168218613,
0.028761882334947586,
0.04985757917165756,
-0.026445778086781502,
-0.07439249008893967,
-0.033092718571424484,
-0.0048429109156131744,
0.05411180481314659,
0.1303826868534088,
0.02593902498483658,
-0.06958279013633728,
0.0071420432068407536,
0.11105038970708847,
-0.03283325210213661,
-0.05255318433046341,
-0.10796847194433212,
0.24352514743804932,
0.020013678818941116,
0.004060083068907261,
-0.0025401455350220203,
-0.04661009460687637,
0.0018034223467111588,
0.20612066984176636,
0.22237303853034973,
0.002338566817343235,
-0.009122938849031925,
0.009504182264208794,
-0.011561073362827301,
0.036712706089019775,
0.14650380611419678,
0.0018592700362205505,
0.25600191950798035,
-0.047850124537944794,
0.039646461606025696,
-0.042378563433885574,
-0.03849880397319794,
-0.10134011507034302,
0.07487697899341583,
-0.014226829633116722,
0.008339251391589642,
-0.033085282891988754,
0.07250131666660309,
-0.03783570975065231,
-0.1738801896572113,
0.0015340084210038185,
-0.0020986744202673435,
-0.06261923164129257,
0.0152428038418293,
-0.000697573646903038,
0.019347665831446648,
0.08474034816026688,
-0.01971082016825676,
-0.006845755968242884,
0.13224413990974426,
0.019534653052687645,
-0.09874147921800613,
-0.056338220834732056,
0.1153809130191803,
0.015087883919477463,
0.1434781849384308,
0.012759240344166756,
0.08006598055362701,
0.08528990298509598,
0.02237621136009693,
-0.09422524273395538,
0.040132973343133926,
-0.018760787323117256,
-0.026375994086265564,
0.007178398780524731,
0.11115317046642303,
-0.010700685903429985,
0.05903279781341553,
0.030815543606877327,
-0.09296669065952301,
0.05900369957089424,
0.007061999291181564,
-0.03511577844619751,
-0.08087616413831711,
0.08418543636798859,
-0.09224997460842133,
0.15522071719169617,
0.12035651504993439,
-0.015789739787578583,
-0.04590842127799988,
-0.027615774422883987,
0.019287724047899246,
-0.0007741325534880161,
0.0596003532409668,
-0.023611806333065033,
-0.1328638792037964,
0.017724158242344856,
-0.08530982583761215,
0.028686773031949997,
-0.24738427996635437,
-0.09176650643348694,
0.02899697609245777,
-0.01772310584783554,
-0.01745389774441719,
0.052094966173172,
0.04387736693024635,
0.023467933759093285,
-0.03650832176208496,
0.014251296408474445,
-0.038124505430459976,
0.058805517852306366,
-0.10821962356567383,
-0.09229637682437897
] |
null | null | transformers | # MultiBERTs Seed 2 Checkpoint 20k (uncased)
Seed 2 intermediate checkpoint 20k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multiberts-seed-2). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
Each MultiBERTs model is a transformer model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-2-20k')
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-2-20k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-2"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-2-20k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-2",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 2 Checkpoint 20k (uncased)
Seed 2 intermediate checkpoint 20k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
Each MultiBERTs model is a transformer model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
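A minimal example, analogous to the snippet earlier in this card but for the 20k checkpoint (the full Hub repository id is assumed):

```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-2-20k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-2-20k")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="pt")
features = model(**encoded_input).last_hidden_state  # (batch, seq_len, hidden_size)
```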
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (see the sketch after this list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
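A minimal Python sketch of the 80/10/10 scheme listed above (illustrative only, not the original pretraining code; the helper name and arguments are made up for the example):
```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15):
    """Apply the MLM corruption described above to a list of WordPiece tokens."""
    corrupted, labels = [], []
    for tok in tokens:
        if random.random() < mask_prob:
            labels.append(tok)                          # the model must recover this token
            roll = random.random()
            if roll < 0.8:
                corrupted.append("[MASK]")              # 80%: replace with [MASK]
            elif roll < 0.9:
                corrupted.append(random.choice(vocab))  # 10%: random vocabulary token
            else:
                corrupted.append(tok)                   # 10%: keep the original token
        else:
            corrupted.append(tok)
            labels.append(None)                         # ignored by the MLM loss
    return corrupted, labels
```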
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 2 Checkpoint 20k (uncased)\nSeed 2 intermediate checkpoint 20k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 2 Checkpoint 20k (uncased)\nSeed 2 intermediate checkpoint 20k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 2 Checkpoint 20k (uncased)\nSeed 2 intermediate checkpoint 20k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08334492146968842,
0.007330266758799553,
-0.002305431757122278,
0.06486259400844574,
0.08168799430131912,
0.0010878946632146835,
0.1218663826584816,
0.05224287137389183,
-0.02745489776134491,
0.02804703079164028,
0.09259293973445892,
0.03013325110077858,
0.04249032959342003,
0.07089442759752274,
0.09633976221084595,
-0.2605816721916199,
0.04885634034872055,
-0.06389699876308441,
0.05395951122045517,
0.07555469870567322,
0.10124798119068146,
-0.07117687165737152,
0.0605892576277256,
0.03857724368572235,
-0.08805029094219208,
-0.017829393967986107,
-0.012387801893055439,
-0.034811243414878845,
0.10039888322353363,
0.07001356780529022,
0.06085573881864548,
0.002661045640707016,
0.05861986055970192,
-0.0888742208480835,
0.01551471371203661,
0.04602308198809624,
-0.002204808872193098,
0.027733977884054184,
-0.005258694291114807,
0.013948550447821617,
0.11626434326171875,
0.033464785665273666,
0.07743774354457855,
0.03414164111018181,
-0.09424299001693726,
-0.11812682449817657,
-0.08040118962526321,
0.1060594916343689,
0.053705230355262756,
0.03882567211985588,
-0.006510527804493904,
0.07472185790538788,
-0.028465429320931435,
0.07431063801050186,
0.11142998933792114,
-0.25859954953193665,
-0.009506277740001678,
0.06321711093187332,
0.04491174593567848,
0.045595571398735046,
0.014407649636268616,
0.026163825765252113,
0.004778511822223663,
0.040348928421735764,
0.024997517466545105,
-0.0238172747194767,
0.1217745691537857,
-0.04527148976922035,
-0.1508708894252777,
-0.04495807737112045,
0.12363626062870026,
-0.007346039637923241,
-0.12421272695064545,
-0.10071152448654175,
-0.03447185456752777,
0.12327427417039871,
-0.0034084105864167213,
-0.01967630162835121,
-0.002355193253606558,
0.01274697296321392,
0.022902553901076317,
-0.09521307051181793,
-0.08661580830812454,
-0.029998712241649628,
-0.036904092878103256,
0.13090267777442932,
0.046566881239414215,
0.051424190402030945,
-0.039210833609104156,
0.08565592765808105,
-0.12281351536512375,
-0.03968586400151253,
-0.0515190064907074,
-0.07886828482151031,
-0.019095050171017647,
0.009683414362370968,
-0.030267996713519096,
-0.09256280213594437,
-0.06066687032580376,
0.11771231144666672,
0.03665308281779289,
0.028910499066114426,
-0.008214114233851433,
0.043946705758571625,
0.07363691926002502,
0.09654632955789566,
-0.036984317004680634,
0.053582631051540375,
0.034077029675245285,
-0.027008090168237686,
0.06224916875362396,
-0.0522952601313591,
-0.10297977924346924,
0.07624982297420502,
0.001048307865858078,
0.03635244071483612,
0.02733456715941429,
0.02987641468644142,
-0.01238754577934742,
-0.07285227626562119,
0.15844498574733734,
-0.07784837484359741,
-0.006142652593553066,
-0.015058272518217564,
0.011491343379020691,
0.042795613408088684,
0.03412049263715744,
-0.005484281573444605,
-0.04494251310825348,
-0.008362337946891785,
-0.05658638849854469,
-0.02437562122941017,
-0.05344773083925247,
-0.11697017401456833,
0.001873415894806385,
-0.03652080148458481,
-0.03192460164427757,
-0.13762253522872925,
-0.21515551209449768,
-0.01764252968132496,
0.06470313668251038,
-0.0025837812572717667,
-0.01205356139689684,
0.02297983691096306,
0.016680428758263588,
-0.019540267065167427,
0.011503602378070354,
-0.045418623834848404,
0.0016238451935350895,
-0.004659038037061691,
-0.035782668739557266,
0.056880511343479156,
-0.04188916087150574,
0.022384047508239746,
-0.06778646260499954,
0.024548424407839775,
-0.2094101905822754,
0.09033215045928955,
-0.03079451620578766,
-0.004215121269226074,
-0.03827192634344101,
-0.04068917781114578,
0.009319392964243889,
0.04560411721467972,
-0.009110121056437492,
0.11421236395835876,
-0.14090704917907715,
-0.05416449159383774,
0.19312229752540588,
-0.16119514405727386,
0.00334341824054718,
0.09806349873542786,
-0.049573663622140884,
0.061091188341379166,
0.13239063322544098,
0.10253144055604935,
0.08011110872030258,
-0.06978089362382889,
0.012904872186481953,
0.062322117388248444,
-0.0684543177485466,
0.05724441260099411,
0.0934305191040039,
-0.024670617654919624,
-0.13250476121902466,
0.02761581540107727,
-0.07530182600021362,
-0.011324692517518997,
-0.025421608239412308,
-0.02104342356324196,
0.004685916006565094,
-0.03490868955850601,
0.03022123873233795,
0.005680014844983816,
0.01847403682768345,
-0.040704213082790375,
-0.08273282647132874,
0.03424075245857239,
0.07722335308790207,
-0.07051796466112137,
0.04230537638068199,
-0.07142363488674164,
0.05852296203374863,
-0.07390373200178146,
-0.004286822397261858,
-0.1676545888185501,
-0.022128695622086525,
0.04622095078229904,
-0.05418460816144943,
0.052758537232875824,
0.0963362529873848,
0.0031432583928108215,
0.12411240488290787,
-0.03678039461374283,
0.004333416000008583,
-0.009421927854418755,
-0.011539284139871597,
-0.05310840904712677,
-0.12566927075386047,
-0.07868025451898575,
-0.06903642416000366,
0.10735118389129639,
-0.07722662389278412,
0.028836799785494804,
-0.06609265506267548,
-0.01744294911623001,
-0.007493697106838226,
-0.06039246544241905,
-0.00554390624165535,
0.005096813663840294,
-0.0293236393481493,
-0.04725689813494682,
0.04646865278482437,
0.051485393196344376,
-0.06346327066421509,
0.07792679220438004,
-0.1079816147685051,
-0.06401103734970093,
0.05468500405550003,
0.01794825680553913,
-0.07770942151546478,
0.09059271216392517,
-0.02114502340555191,
-0.01332736387848854,
-0.057804107666015625,
-0.04117070510983467,
0.19237318634986877,
-0.02234693244099617,
0.10162104666233063,
-0.09166095405817032,
0.000487779441755265,
0.028647717088460922,
-0.04995886981487274,
-0.01838107220828533,
0.06012928485870361,
0.04678035527467728,
-0.19560480117797852,
0.015974488109350204,
0.054261066019535065,
0.07795001566410065,
0.11038665473461151,
0.027155451476573944,
-0.026757748797535896,
-0.05020394176244736,
-0.008059494197368622,
0.006425369530916214,
0.05447230115532875,
-0.02660449966788292,
-0.009127091616392136,
0.03301534056663513,
0.05638572946190834,
0.01734592393040657,
-0.08476439863443375,
0.03621140867471695,
0.06748916953802109,
-0.016917448490858078,
-0.04254043102264404,
-0.025464504957199097,
-0.05981110408902168,
0.06393937021493912,
0.05363023281097412,
0.03906257450580597,
0.02712520770728588,
-0.01524996105581522,
-0.13819125294685364,
0.1858876645565033,
-0.1156582459807396,
-0.25846436619758606,
-0.10864473134279251,
-0.05460579693317413,
-0.024074776098132133,
0.04271368682384491,
0.058706484735012054,
-0.027595072984695435,
-0.04192440211772919,
-0.11386440694332123,
0.05966230481863022,
-0.06810033321380615,
-0.03112281672656536,
-0.01031305082142353,
-0.04979032278060913,
-0.018017031252384186,
-0.12704266607761383,
-0.012007543817162514,
-0.029694749042391777,
-0.07525654137134552,
0.0071517834439873695,
-0.03420532867312431,
0.024340173229575157,
0.13643783330917358,
0.03606455400586128,
-0.01851589046418667,
-0.015575706027448177,
0.18774807453155518,
0.009681317955255508,
0.057962581515312195,
0.11311452090740204,
-0.030961181968450546,
0.055287327617406845,
0.044048622250556946,
0.023036357015371323,
-0.04694434627890587,
0.010386514477431774,
-0.016960855573415756,
-0.1202450692653656,
-0.17597436904907227,
-0.07201167941093445,
-0.00204581581056118,
0.004785671830177307,
0.02136390283703804,
0.03664875030517578,
0.024668864905834198,
0.03781769797205925,
-0.031215541064739227,
0.03218945115804672,
-0.010943032801151276,
0.08172398060560226,
0.024052105844020844,
-0.07647714018821716,
0.09152993559837341,
-0.059504494071006775,
0.01757511869072914,
0.11114048212766647,
-0.06347524374723434,
0.1875482052564621,
0.028828922659158707,
0.06305883079767227,
0.10024617612361908,
0.01983886957168579,
0.05421195924282074,
0.08545655757188797,
-0.041681088507175446,
0.007726265583187342,
-0.06347660720348358,
-0.05227256193757057,
-0.037655480206012726,
0.05619179084897041,
0.03064320981502533,
0.016197562217712402,
-0.11782218515872955,
0.021411500871181488,
-0.001607606653124094,
0.136534184217453,
0.04840966314077377,
-0.11773677170276642,
-0.12259213626384735,
0.037841517478227615,
-0.04819076135754585,
-0.06432046741247177,
0.02787446230649948,
0.0631503015756607,
-0.15455913543701172,
0.044692475348711014,
-0.007022406905889511,
0.06521554291248322,
-0.09184195846319199,
0.014606828801333904,
-0.05470407009124756,
-0.0013183904811739922,
0.00799032486975193,
0.07276573777198792,
-0.13398797810077667,
0.10654880106449127,
0.02194184809923172,
0.045138321816921234,
-0.08218853175640106,
0.016806235536932945,
-0.01150562334805727,
0.10583474487066269,
0.11797896027565002,
0.04483407735824585,
-0.06237111985683441,
-0.013633891940116882,
-0.04816451296210289,
0.01939506083726883,
0.06256795674562454,
-0.08161823451519012,
0.05890265852212906,
0.006966577377170324,
0.006946818437427282,
-0.021520890295505524,
0.018553517758846283,
-0.12866806983947754,
-0.12103380262851715,
0.06541027128696442,
-0.07852106541395187,
-0.10022056102752686,
-0.06084378808736801,
-0.06208588182926178,
-0.04378315806388855,
0.21906638145446777,
-0.11926820874214172,
-0.08788865804672241,
-0.09862606227397919,
-0.006511228159070015,
0.04445179924368858,
-0.06823895126581192,
0.044995300471782684,
-0.037285078316926956,
0.09974965453147888,
-0.04968131333589554,
-0.1104273796081543,
0.036275092512369156,
-0.11201418191194534,
-0.11693765968084335,
-0.045605458319187164,
0.10952018201351166,
0.11417225003242493,
0.04003194719552994,
0.01226481981575489,
0.013402758166193962,
-0.000021263957023620605,
-0.11404567956924438,
0.01745336689054966,
0.13986805081367493,
-0.0025500189512968063,
0.06806625425815582,
-0.057096414268016815,
0.0334102138876915,
-0.015604151412844658,
-0.000150999054312706,
0.13612322509288788,
0.18745088577270508,
-0.06591910868883133,
0.17987403273582458,
0.19627147912979126,
-0.104901522397995,
-0.19002673029899597,
-0.05441535264253616,
0.000645800493657589,
0.045527469366788864,
0.050625279545784,
-0.18703818321228027,
0.08779652416706085,
0.033963464200496674,
-0.03441055864095688,
0.016745150089263916,
-0.237899050116539,
-0.10992123186588287,
0.08904527127742767,
0.053956467658281326,
0.18846070766448975,
-0.07787163555622101,
-0.04308219999074936,
-0.01684388890862465,
-0.03052600473165512,
0.05006524920463562,
-0.04433503374457359,
0.0892283022403717,
0.009374616667628288,
-0.02783885970711708,
0.004022189415991306,
-0.0302178543061018,
0.09649181365966797,
0.03874228522181511,
0.020581930875778198,
-0.0698304995894432,
-0.008426835760474205,
0.11241558939218521,
-0.03929850086569786,
0.09666939824819565,
0.04324604570865631,
0.07563848048448563,
-0.09280802309513092,
-0.05817025154829025,
-0.0735156238079071,
0.04455472528934479,
-0.041097767651081085,
-0.054584190249443054,
-0.06414130330085754,
0.05612108111381531,
0.03767813742160797,
0.012195717543363571,
0.0018326975405216217,
-0.03468301519751549,
0.04183053970336914,
0.0921114906668663,
0.08013319224119186,
-0.02948237583041191,
-0.07169434428215027,
-0.05402067303657532,
-0.046861641108989716,
0.06722984462976456,
-0.0970861092209816,
0.019994935020804405,
0.024348227307200432,
0.013874124735593796,
0.09204346686601639,
0.031750813126564026,
-0.1420210599899292,
0.01195964589715004,
0.03173273056745529,
-0.12528085708618164,
-0.11667467653751373,
-0.02076174132525921,
0.03806319832801819,
-0.03602880984544754,
0.05297208949923515,
0.149383544921875,
-0.03475901857018471,
-0.033242784440517426,
-0.048218853771686554,
0.03759314492344856,
-0.022152721881866455,
0.04486677795648575,
0.061968497931957245,
0.030229082331061363,
-0.07309373468160629,
0.07225204259157181,
0.040816452354192734,
-0.034499604254961014,
0.04489637166261673,
0.04015474021434784,
-0.09198965132236481,
-0.07673589140176773,
-0.05770154669880867,
0.09139374643564224,
-0.020526042208075523,
-0.04772148281335831,
0.0006984323263168335,
-0.0798947662115097,
0.06594093143939972,
0.07606853544712067,
0.04702487960457802,
0.0391397587954998,
-0.08817210048437119,
0.016101915389299393,
-0.052261851727962494,
0.03451165556907654,
-0.029946086928248405,
-0.004992349073290825,
-0.05530930310487747,
0.07381235808134079,
0.06399893015623093,
0.09835582226514816,
-0.03407403081655502,
-0.07666340470314026,
-0.08166491240262985,
-0.014388814568519592,
-0.060358926653862,
-0.03198940306901932,
-0.07640106976032257,
-0.009201365523040295,
0.002464501652866602,
-0.004624819383025169,
0.023711659014225006,
0.034974854439496994,
-0.04437217861413956,
-0.01872122660279274,
-0.039774153381586075,
0.03857491910457611,
-0.06513572484254837,
0.006993493065237999,
0.017015717923641205,
-0.03743862360715866,
0.09288841485977173,
0.041423045098781586,
-0.009776812046766281,
0.04788638651371002,
-0.025663699954748154,
0.03450228273868561,
-0.01991615630686283,
0.0006237071938812733,
-0.02203686162829399,
-0.11125978082418442,
-0.006124584469944239,
0.0042527299374341965,
-0.0261964313685894,
0.00860164687037468,
0.056370291858911514,
-0.07118342816829681,
0.0844220295548439,
0.04345231503248215,
-0.0352897047996521,
-0.07271423935890198,
0.0403580442070961,
-0.01871321350336075,
0.029882261529564857,
0.06780974566936493,
-0.03315231204032898,
0.05458898842334747,
-0.09858334064483643,
-0.02761034667491913,
0.0036052244249731302,
-0.002610970288515091,
-0.010003633797168732,
-0.05502122640609741,
-0.005058137699961662,
0.00561959482729435,
0.17209231853485107,
-0.02232741191983223,
0.034490447491407394,
0.013347032479941845,
0.006876551546156406,
0.04953328147530556,
-0.014336217194795609,
0.07135994732379913,
-0.0032568909227848053,
-0.023113509640097618,
-0.01619420200586319,
0.036326225847005844,
0.00611802376806736,
-0.0008955392986536026,
0.1413373351097107,
0.04558689892292023,
0.08743639290332794,
0.07586418092250824,
0.017216091975569725,
0.016844872385263443,
-0.13966575264930725,
-0.08229362219572067,
0.0052261436358094215,
0.056665439158678055,
-0.01681959070265293,
0.019784998148679733,
0.09408487379550934,
-0.09216837584972382,
0.06956067681312561,
0.05224573612213135,
-0.04786420613527298,
-0.1279364973306656,
-0.19543711841106415,
-0.028605680912733078,
-0.030493086203932762,
-0.012796775437891483,
-0.08936247229576111,
0.020091140642762184,
0.08298671245574951,
0.021789738908410072,
-0.01024900283664465,
0.08938886225223541,
-0.10427030920982361,
-0.03242994099855423,
0.041049398481845856,
-0.025566738098859787,
0.015111884102225304,
0.0453929677605629,
0.021727055311203003,
-0.003780050203204155,
0.04707092046737671,
0.0425238236784935,
0.044188715517520905,
0.0324077345430851,
0.05026105046272278,
-0.026328418403863907,
-0.07436755299568176,
-0.03223848715424538,
-0.004974824376404285,
0.05347501486539841,
0.12515033781528473,
0.024639597162604332,
-0.07020146399736404,
0.006429660599678755,
0.10757778584957123,
-0.032872214913368225,
-0.048633817583322525,
-0.1069183349609375,
0.24722936749458313,
0.018936987966299057,
0.0029650672804564238,
-0.002780152950435877,
-0.04693146049976349,
0.003870377317070961,
0.20518670976161957,
0.2197408676147461,
0.005855102092027664,
-0.00850796140730381,
0.0101375263184309,
-0.011845842003822327,
0.034350909292697906,
0.14112591743469238,
0.0014934930950403214,
0.25660091638565063,
-0.04691168665885925,
0.03928135707974434,
-0.04155144467949867,
-0.03959614038467407,
-0.09992344677448273,
0.07515150308609009,
-0.01094001717865467,
0.0066839256323874,
-0.030742820352315903,
0.07129981368780136,
-0.03592882305383682,
-0.17494484782218933,
0.0013477439060807228,
-0.0025879940949380398,
-0.06230134144425392,
0.015307513996958733,
-0.00020482856780290604,
0.020533420145511627,
0.08419116586446762,
-0.018907349556684494,
-0.008769635111093521,
0.12937405705451965,
0.019710825756192207,
-0.10006702691316605,
-0.05515745282173157,
0.11599688977003098,
0.010985998436808586,
0.14266815781593323,
0.011482968926429749,
0.08072402328252792,
0.08654990792274475,
0.021705392748117447,
-0.09215612709522247,
0.03964734822511673,
-0.017989663407206535,
-0.028158534318208694,
0.004978296346962452,
0.11460556089878082,
-0.010508362203836441,
0.0596998892724514,
0.031669385731220245,
-0.09356391429901123,
0.05989939346909523,
0.008531998842954636,
-0.036535535007715225,
-0.08177823573350906,
0.08364877104759216,
-0.09160856902599335,
0.15533509850502014,
0.12154336273670197,
-0.015536726452410221,
-0.045508138835430145,
-0.029377851635217667,
0.02023685723543167,
0.0000970824621617794,
0.06091848388314247,
-0.022650809958577156,
-0.13267359137535095,
0.019458068534731865,
-0.08265973627567291,
0.029704775661230087,
-0.24713173508644104,
-0.09154091775417328,
0.028891202062368393,
-0.018450062721967697,
-0.01724966987967491,
0.04883835092186928,
0.045475732535123825,
0.022502757608890533,
-0.03672262281179428,
0.0169055238366127,
-0.03779001161456108,
0.0585666224360466,
-0.10799439251422882,
-0.09096835553646088
] |
null | null | transformers | # MultiBERTs Seed 2 Checkpoint 300k (uncased)
Seed 2 intermediate checkpoint 300k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multiberts-seed-2). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-300k')
model = BertModel.from_pretrained("multiberts-seed-2-300k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
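The returned `output` exposes the usual BERT features; continuing the snippet above:
```python
token_features = output.last_hidden_state  # (batch_size, sequence_length, 768)
pooled_features = output.pooler_output     # (batch_size, 768), tanh-pooled [CLS] vector
print(token_features.shape, pooled_features.shape)
```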
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
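As a rough illustration of the kind of probe referred to above, the bert-base-uncased snippet can be adapted to this checkpoint. This assumes the MLM head weights are present in the repository; if they are not, the pipeline will initialise that head randomly and emit a warning.
```python
from transformers import pipeline

unmasker = pipeline('fill-mask', model='multiberts-seed-2-300k')
print(unmasker("The man worked as a [MASK]."))
print(unmasker("The woman worked as a [MASK]."))
```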
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
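The original runs used TensorFlow on TPUs; purely as an illustration of the optimizer settings above (not the actual pretraining code), an equivalent PyTorch configuration might look like:
```python
import torch
from transformers import BertForPreTraining, get_linear_schedule_with_warmup

model = BertForPreTraining.from_pretrained('multiberts-seed-2-300k')
optimizer = torch.optim.AdamW(model.parameters(),
                              lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01)
# 10,000 warmup steps, then linear decay over the two-million-step schedule.
scheduler = get_linear_schedule_with_warmup(optimizer,
                                            num_warmup_steps=10_000,
                                            num_training_steps=2_000_000)
```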
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-2"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-2-300k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-2",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 2 Checkpoint 300k (uncased)
Seed 2 intermediate checkpoint 300k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitations and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 2 Checkpoint 300k (uncased)\nSeed 2 intermediate checkpoint 300k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 2 Checkpoint 300k (uncased)\nSeed 2 intermediate checkpoint 300k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 2 Checkpoint 300k (uncased)\nSeed 2 intermediate checkpoint 300k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08228536695241928,
-0.00019686727318912745,
-0.0022135137114673853,
0.06908251345157623,
0.0838136300444603,
0.0030900975689291954,
0.11817307770252228,
0.05096874758601189,
-0.02826942503452301,
0.02312382683157921,
0.09298941493034363,
0.028952468186616898,
0.042989879846572876,
0.0645398497581482,
0.09704232215881348,
-0.25930309295654297,
0.04978908598423004,
-0.06409703940153122,
0.052932798862457275,
0.07671541720628738,
0.10170014202594757,
-0.06957549601793289,
0.0618155300617218,
0.036749184131622314,
-0.08313236385583878,
-0.01695197820663452,
-0.013091940432786942,
-0.03380287438631058,
0.09930700063705444,
0.06938400864601135,
0.06170963495969772,
0.0013749320060014725,
0.06266120076179504,
-0.08991515636444092,
0.015421008691191673,
0.04363466799259186,
0.000857099425047636,
0.026088684797286987,
-0.008996447548270226,
0.015596481040120125,
0.10949196666479111,
0.03833450376987457,
0.07788746058940887,
0.034712210297584534,
-0.09545981884002686,
-0.1108202114701271,
-0.08199819922447205,
0.10479018092155457,
0.05504753440618515,
0.039543598890304565,
-0.0052482970058918,
0.07614842057228088,
-0.029600854963064194,
0.0723859891295433,
0.11179439723491669,
-0.2575431764125824,
-0.008726723492145538,
0.06736884266138077,
0.04513935744762421,
0.04401177167892456,
0.014141842722892761,
0.025767887011170387,
0.0041912272572517395,
0.041720885783433914,
0.029142096638679504,
-0.02384359762072563,
0.11898074299097061,
-0.0469830222427845,
-0.1509723663330078,
-0.043842755258083344,
0.11895076185464859,
-0.005944579839706421,
-0.1258109211921692,
-0.10156605392694473,
-0.03322521969676018,
0.11891984939575195,
-0.001169261522591114,
-0.020666921511292458,
-0.003602084703743458,
0.012604428455233574,
0.023085489869117737,
-0.09845730662345886,
-0.08609575778245926,
-0.03200211003422737,
-0.038696177303791046,
0.13488507270812988,
0.04517894610762596,
0.05382328853011131,
-0.0354771763086319,
0.0880468413233757,
-0.11448638141155243,
-0.04084029421210289,
-0.05288757383823395,
-0.0815395712852478,
-0.01832444593310356,
0.01015095692127943,
-0.027392853051424026,
-0.08935452997684479,
-0.06055359169840813,
0.11664817482233047,
0.03798322007060051,
0.027538925409317017,
-0.0027004405856132507,
0.04314874857664108,
0.07433629035949707,
0.09873387217521667,
-0.03760334476828575,
0.04970758780837059,
0.03423689305782318,
-0.021277736872434616,
0.061214495450258255,
-0.05149392411112785,
-0.10223541408777237,
0.07448086142539978,
-0.0001572156324982643,
0.038629449903964996,
0.02696343883872032,
0.029415607452392578,
-0.015041285194456577,
-0.07327120006084442,
0.16324356198310852,
-0.07839760929346085,
-0.007199650630354881,
-0.017344210296869278,
0.012245988473296165,
0.04669163376092911,
0.03341102600097656,
-0.0050236377865076065,
-0.047048456966876984,
-0.007008104585111141,
-0.0538112074136734,
-0.024729926139116287,
-0.05495550110936165,
-0.11666133999824524,
-0.00036672595888376236,
-0.03508583456277847,
-0.032909393310546875,
-0.14076027274131775,
-0.21443650126457214,
-0.01638334058225155,
0.06578253209590912,
-0.0032710665836930275,
-0.01069263182580471,
0.024125488474965096,
0.018018459901213646,
-0.01960824802517891,
0.010419083759188652,
-0.047169748693704605,
0.001408844254910946,
-0.004366494715213776,
-0.03372686356306076,
0.05702801048755646,
-0.043980225920677185,
0.021591629832983017,
-0.06989581882953644,
0.023569118231534958,
-0.20600493252277374,
0.08999637514352798,
-0.028754089027643204,
-0.004534636624157429,
-0.038398001343011856,
-0.04303816705942154,
0.005932692438364029,
0.04615169018507004,
-0.0071880570612847805,
0.11609679460525513,
-0.13565941154956818,
-0.04947422444820404,
0.18513897061347961,
-0.1602792888879776,
-0.00044970959424972534,
0.09986777603626251,
-0.04974854737520218,
0.06012479215860367,
0.13228380680084229,
0.09649914503097534,
0.08033657819032669,
-0.0723324790596962,
0.01432920154184103,
0.06246069446206093,
-0.06847000122070312,
0.055618274956941605,
0.09291505813598633,
-0.02571781538426876,
-0.13353176414966583,
0.027288060635328293,
-0.07638689875602722,
-0.010694138705730438,
-0.02614443004131317,
-0.022143954411149025,
0.006287863478064537,
-0.038788050413131714,
0.03041926771402359,
0.003688585013151169,
0.017569895833730698,
-0.039348702877759933,
-0.08227894455194473,
0.03307073563337326,
0.07608120143413544,
-0.07068551331758499,
0.045025404542684555,
-0.06986882537603378,
0.05826191231608391,
-0.07597388327121735,
-0.005674042738974094,
-0.16718518733978271,
-0.023286372423171997,
0.04606793820858002,
-0.051160167902708054,
0.05182619392871857,
0.09143354743719101,
0.002472153166308999,
0.12312494218349457,
-0.040004998445510864,
0.004000646993517876,
-0.012564880773425102,
-0.01097755879163742,
-0.05079929903149605,
-0.11892125010490417,
-0.08296720683574677,
-0.06849734485149384,
0.09877592325210571,
-0.06887753307819366,
0.028338484466075897,
-0.06948186457157135,
-0.020938238129019737,
-0.006853766739368439,
-0.06007671356201172,
-0.0064856307581067085,
0.007918968796730042,
-0.029535122215747833,
-0.04666361212730408,
0.04680574685335159,
0.05141230300068855,
-0.06301490217447281,
0.07671236991882324,
-0.10477562248706818,
-0.06146145984530449,
0.05424311012029648,
0.016714129596948624,
-0.0793721079826355,
0.09081758558750153,
-0.018861927092075348,
-0.012441611848771572,
-0.06270218640565872,
-0.04548604041337967,
0.19206440448760986,
-0.02062789723277092,
0.10056242346763611,
-0.0923631489276886,
0.001233742106705904,
0.025963354855775833,
-0.0501033216714859,
-0.01808515377342701,
0.05612081661820412,
0.04902299866080284,
-0.1940418779850006,
0.015200305730104446,
0.054200202226638794,
0.07621484994888306,
0.1108487993478775,
0.028987038880586624,
-0.02549152448773384,
-0.049448154866695404,
-0.011408070102334023,
0.0062644220888614655,
0.05552627518773079,
-0.019249338656663895,
-0.006410596426576376,
0.031567834317684174,
0.05920784920454025,
0.016670331358909607,
-0.08245185017585754,
0.03591308742761612,
0.06826898455619812,
-0.018291844055056572,
-0.037132859230041504,
-0.026247236877679825,
-0.05984915420413017,
0.06453168392181396,
0.05459868162870407,
0.0399872250854969,
0.025664398446679115,
-0.015562555752694607,
-0.1377953290939331,
0.18609705567359924,
-0.11280058324337006,
-0.2572934925556183,
-0.11035697907209396,
-0.05324311554431915,
-0.024511652067303658,
0.042583663016557693,
0.05616875737905502,
-0.02573482133448124,
-0.040854889899492264,
-0.1164093166589737,
0.06371108442544937,
-0.06406467407941818,
-0.031140116974711418,
-0.008355040103197098,
-0.04992202669382095,
-0.018261056393384933,
-0.12788113951683044,
-0.01335720345377922,
-0.030607864260673523,
-0.07740920782089233,
0.005096292123198509,
-0.03508872166275978,
0.02793247252702713,
0.135393887758255,
0.034453969448804855,
-0.01919214241206646,
-0.01638389192521572,
0.1900181770324707,
0.011447003111243248,
0.05990307778120041,
0.11422545462846756,
-0.031409308314323425,
0.05503970384597778,
0.04422702640295029,
0.024721674621105194,
-0.04845261201262474,
0.012716309167444706,
-0.018010027706623077,
-0.12298232316970825,
-0.17185096442699432,
-0.07184885442256927,
-0.003315575886517763,
0.003040383569896221,
0.02044307440519333,
0.03512531518936157,
0.017361167818307877,
0.039205510169267654,
-0.030765969306230545,
0.026380017399787903,
-0.010568168014287949,
0.08109791576862335,
0.021277226507663727,
-0.07267207652330399,
0.09253327548503876,
-0.06082580238580704,
0.01544170081615448,
0.11053959280252457,
-0.05766431987285614,
0.1853501796722412,
0.025408098474144936,
0.05720001459121704,
0.09999272227287292,
0.019322477281093597,
0.05404770001769066,
0.08861058205366135,
-0.044139400124549866,
0.00841914489865303,
-0.062286876142024994,
-0.05156414210796356,
-0.03975685313344002,
0.05191848427057266,
0.032795101404190063,
0.020742084830999374,
-0.11866112798452377,
0.021938737481832504,
-0.0011543692089617252,
0.140244722366333,
0.05083906650543213,
-0.11724845319986343,
-0.12175054848194122,
0.03835414722561836,
-0.046247780323028564,
-0.06213415414094925,
0.029120009392499924,
0.05965037643909454,
-0.1557757407426834,
0.045426346361637115,
-0.005645414814352989,
0.06615014374256134,
-0.09375087916851044,
0.013926600106060505,
-0.04850278049707413,
-0.004062272608280182,
0.007329908665269613,
0.0697927474975586,
-0.13258816301822662,
0.10883238166570663,
0.021040640771389008,
0.04749437794089317,
-0.08031852543354034,
0.01647285372018814,
-0.01184230949729681,
0.1024537906050682,
0.11743000149726868,
0.04492712765932083,
-0.05206289887428284,
-0.01807493530213833,
-0.04747564718127251,
0.01878976821899414,
0.060031626373529434,
-0.08049076795578003,
0.06021319702267647,
0.0059013813734054565,
0.007676604203879833,
-0.022757241502404213,
0.019954394549131393,
-0.1311289370059967,
-0.12210755795240402,
0.06152590364217758,
-0.07995706051588058,
-0.10178768634796143,
-0.05824342742562294,
-0.061335239559412,
-0.040733471512794495,
0.21487949788570404,
-0.11698831617832184,
-0.08977395296096802,
-0.09758449345827103,
-0.010994039475917816,
0.04479695111513138,
-0.06708274781703949,
0.04401523992419243,
-0.03703254461288452,
0.09457522630691528,
-0.04897212237119675,
-0.11014948785305023,
0.034804414957761765,
-0.11233504116535187,
-0.11700378358364105,
-0.04411645978689194,
0.10813082009553909,
0.1157355010509491,
0.03978465124964714,
0.01419786550104618,
0.012542149052023888,
0.003169778734445572,
-0.11645706743001938,
0.014692394062876701,
0.13486206531524658,
-0.0031662508845329285,
0.07366497069597244,
-0.05979178100824356,
0.027917753905057907,
-0.015938377007842064,
-0.0008425358682870865,
0.13536736369132996,
0.1881198137998581,
-0.06490374356508255,
0.17817352712154388,
0.20172423124313354,
-0.10515839606523514,
-0.19166892766952515,
-0.05754360929131508,
0.00021045468747615814,
0.04697493091225624,
0.05414704978466034,
-0.18290774524211884,
0.08697287738323212,
0.03405076265335083,
-0.03455544263124466,
0.010004609823226929,
-0.2406195104122162,
-0.11147844791412354,
0.08761827647686005,
0.056026119738817215,
0.18930888175964355,
-0.0788760632276535,
-0.03999131917953491,
-0.019422106444835663,
-0.03987175226211548,
0.05021777004003525,
-0.03458032011985779,
0.09009440243244171,
0.007417647168040276,
-0.029295874759554863,
0.0034699616953730583,
-0.030123885720968246,
0.0977465808391571,
0.0378115177154541,
0.02065538428723812,
-0.07125811278820038,
-0.007653148844838142,
0.1131025031208992,
-0.03920851647853851,
0.09925012290477753,
0.042666949331760406,
0.0754806399345398,
-0.09346434473991394,
-0.06016801297664642,
-0.0736757442355156,
0.042146407067775726,
-0.041018277406692505,
-0.05688989534974098,
-0.0650794580578804,
0.057674944400787354,
0.03885851800441742,
0.010614767670631409,
-0.0021638143807649612,
-0.036005809903144836,
0.04225427657365799,
0.09090735018253326,
0.0840739756822586,
-0.029360797256231308,
-0.07267369329929352,
-0.05276532843708992,
-0.04835698753595352,
0.06728599965572357,
-0.08897465467453003,
0.021114693954586983,
0.025044117122888565,
0.01107284426689148,
0.08948443830013275,
0.03333831578493118,
-0.13857951760292053,
0.013087631203234196,
0.03255105018615723,
-0.12118794023990631,
-0.10982851684093475,
-0.019519563764333725,
0.0346187986433506,
-0.03923686966300011,
0.05127862095832825,
0.14918389916419983,
-0.03489842638373375,
-0.03341593220829964,
-0.047546617686748505,
0.03858516365289688,
-0.02458047866821289,
0.047183409333229065,
0.06263367831707001,
0.030387531965970993,
-0.07052914798259735,
0.07370321452617645,
0.037908364087343216,
-0.029983587563037872,
0.042784735560417175,
0.040603842586278915,
-0.0916086882352829,
-0.07687006145715714,
-0.05815279483795166,
0.08580603450536728,
-0.024957286193966866,
-0.04394122213125229,
0.001340636983513832,
-0.08147899061441422,
0.06561824679374695,
0.07165851444005966,
0.0499010868370533,
0.03850751742720604,
-0.08841678500175476,
0.015791870653629303,
-0.05136106163263321,
0.03324045240879059,
-0.027122771367430687,
-0.006190389394760132,
-0.054659731686115265,
0.06406663358211517,
0.06346891820430756,
0.09721830487251282,
-0.035014934837818146,
-0.0756121575832367,
-0.08347120136022568,
-0.011844588443636894,
-0.06077691167593002,
-0.03251062333583832,
-0.07585787028074265,
-0.007809438742697239,
0.0012957537546753883,
-0.00580328144133091,
0.02271180972456932,
0.03563275188207626,
-0.04256138950586319,
-0.019074898213148117,
-0.04017103463411331,
0.03584892675280571,
-0.06354369223117828,
0.0059646861627697945,
0.018494293093681335,
-0.03793324902653694,
0.09229404479265213,
0.036039672791957855,
-0.011726956814527512,
0.04631659388542175,
-0.02544671855866909,
0.03360186144709587,
-0.02021944709122181,
0.0026089444290846586,
-0.02596440538764,
-0.11040398478507996,
-0.0051120612770318985,
0.006506605073809624,
-0.023172810673713684,
0.010313129052519798,
0.06344200670719147,
-0.07216738909482956,
0.08521471917629242,
0.04175182804465294,
-0.03320499509572983,
-0.07096996903419495,
0.04022316634654999,
-0.012359080836176872,
0.03231729194521904,
0.07055298984050751,
-0.033971648663282394,
0.05439829081296921,
-0.09949799627065659,
-0.026810582727193832,
0.004798712208867073,
-0.0016313567757606506,
-0.015406882390379906,
-0.0544571578502655,
-0.00436673779040575,
0.008624890819191933,
0.17327864468097687,
-0.02306804060935974,
0.034117452800273895,
0.01505844946950674,
0.009949401021003723,
0.05164489895105362,
-0.013381429016590118,
0.07167664170265198,
-0.005485706031322479,
-0.02536078356206417,
-0.015599839389324188,
0.03521893545985222,
0.005103694275021553,
0.0010503586381673813,
0.14591342210769653,
0.04669421538710594,
0.08982378989458084,
0.07656829059123993,
0.01608150824904442,
0.02044355683028698,
-0.13364914059638977,
-0.09363383054733276,
0.006480704993009567,
0.052830472588539124,
-0.01598038338124752,
0.015424426645040512,
0.09236077964305878,
-0.0901108980178833,
0.07028819620609283,
0.05123664438724518,
-0.048368901014328,
-0.129083514213562,
-0.1937943696975708,
-0.026999792084097862,
-0.027173278853297234,
-0.012225976213812828,
-0.08955371379852295,
0.017330385744571686,
0.0885833203792572,
0.022192632779479027,
-0.009457648731768131,
0.090999536216259,
-0.10749414563179016,
-0.031203333288431168,
0.04122660309076309,
-0.026130620390176773,
0.016989879310131073,
0.05060228332877159,
0.0214410200715065,
-0.0053871143609285355,
0.04837656766176224,
0.043378762900829315,
0.04576932266354561,
0.02533508837223053,
0.04842561110854149,
-0.025370512157678604,
-0.07480280846357346,
-0.03353342041373253,
-0.005820957012474537,
0.05485961213707924,
0.13488687574863434,
0.0249054953455925,
-0.068820059299469,
0.006672097370028496,
0.10846198350191116,
-0.0330984964966774,
-0.054080795496702194,
-0.10854101181030273,
0.24220862984657288,
0.021416272968053818,
0.00405081594362855,
-0.003402072936296463,
-0.04593127220869064,
0.0016603171825408936,
0.20548880100250244,
0.22541767358779907,
-0.0003581609344109893,
-0.008722464554011822,
0.009151733480393887,
-0.011023225262761116,
0.03728485107421875,
0.14600005745887756,
0.0009014606475830078,
0.2560863792896271,
-0.0473485067486763,
0.03967215120792389,
-0.04210583493113518,
-0.03996919468045235,
-0.09819172322750092,
0.07386234402656555,
-0.011878157034516335,
0.00841419119387865,
-0.033789657056331635,
0.07230734825134277,
-0.036857347935438156,
-0.17021049559116364,
0.0010951878502964973,
-0.005004627630114555,
-0.0624709352850914,
0.014686516486108303,
-0.003888145089149475,
0.019071495160460472,
0.08437676727771759,
-0.017940886318683624,
-0.005802880506962538,
0.12907490134239197,
0.01977727934718132,
-0.09997305274009705,
-0.056358568370342255,
0.11614983528852463,
0.01754535362124443,
0.14307734370231628,
0.01383834145963192,
0.08096081018447876,
0.08567006886005402,
0.020707450807094574,
-0.09303608536720276,
0.04272960498929024,
-0.0188874714076519,
-0.025309111922979355,
0.00666455551981926,
0.1109066903591156,
-0.010487069375813007,
0.05957815796136856,
0.028498589992523193,
-0.09449794888496399,
0.05982883647084236,
0.009280852973461151,
-0.03595714643597603,
-0.08125108480453491,
0.0832626074552536,
-0.09153079986572266,
0.15687638521194458,
0.11997349560260773,
-0.015176777727901936,
-0.045622147619724274,
-0.026813993230462074,
0.020584529265761375,
-0.00031906599178910255,
0.05959530174732208,
-0.024171065539121628,
-0.13421742618083954,
0.016736818477511406,
-0.0890912115573883,
0.026832804083824158,
-0.24781376123428345,
-0.09337688982486725,
0.0291272159665823,
-0.018182039260864258,
-0.019105147570371628,
0.05175260081887245,
0.043131764978170395,
0.023962343111634254,
-0.03540746122598648,
0.016035333275794983,
-0.03794854134321213,
0.05815685912966728,
-0.1106981560587883,
-0.09276710450649261
] |
null | null | transformers | # MultiBERTs Seed 2 Checkpoint 400k (uncased)
Seed 2 intermediate checkpoint 400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multberts-seed-2). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
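As a minimal illustration of that last point, the sketch below extracts sentence-level features from the model and fits an off-the-shelf classifier on them. It is not part of the original MultiBERTs release: the checkpoint name matches the usage snippet later in this card, the toy sentences and labels are invented, and scikit-learn is assumed to be installed.

```python
# Sketch: use MultiBERTs features as inputs to a standard classifier.
# The sentences/labels are toy placeholders; any labeled dataset would do.
import torch
from transformers import BertTokenizer, BertModel
from sklearn.linear_model import LogisticRegression

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-400k')
model = BertModel.from_pretrained('multiberts-seed-2-400k')
model.eval()

sentences = ["the movie was great", "the movie was terrible"]
labels = [1, 0]

with torch.no_grad():
    encoded = tokenizer(sentences, padding=True, return_tensors='pt')
    # Take the [CLS] token's hidden state as a fixed-size sentence feature.
    features = model(**encoded).last_hidden_state[:, 0, :].numpy()

classifier = LogisticRegression().fit(features, labels)
```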
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# Load the tokenizer and encoder weights for this checkpoint.
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-400k')
model = BertModel.from_pretrained("multiberts-seed-2-400k")

text = "Replace me by any text you'd like."
# Tokenize to PyTorch tensors and run a forward pass; `output.last_hidden_state`
# holds the contextual token features.
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
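The masked language modeling head can also be queried directly. The following is a sketch only, not from the original card: it assumes the same checkpoint name as above, and loading a pretraining checkpoint into a fill-mask pipeline may warn about unused (NSP) weights.

```python
# Sketch: masked-token prediction with the MLM objective described above.
from transformers import pipeline

unmasker = pipeline('fill-mask', model='multiberts-seed-2-400k')
print(unmasker("Hello, I'm a [MASK] model."))
```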
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
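This layout can be inspected directly with the tokenizer. The sketch below is illustrative only and reuses the checkpoint name from the usage snippet above.

```python
# Sketch: the tokenizer produces the [CLS] Sentence A [SEP] Sentence B [SEP] layout.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-400k')
encoded = tokenizer("sentence a comes first", "sentence b follows it")
print(tokenizer.convert_ids_to_tokens(encoded['input_ids']))
# e.g. ['[CLS]', 'sentence', 'a', ..., '[SEP]', 'sentence', 'b', ..., '[SEP]']
print(encoded['token_type_ids'])  # 0 for sentence A positions, 1 for sentence B positions
```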
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in
the other cases, sentence B is another random sentence from the corpus. Note that what is considered a sentence here is a
consecutive span of text, usually longer than a single sentence. The only constraint is that the combined length of the two
"sentences" is less than 512 tokens.
The details of the masking procedure for each sentence are the following (a short code sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
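A minimal sketch of this 80/10/10 scheme is given below. It is illustrative only, not the released pretraining code, and the helper's arguments (mask token id, vocabulary size) are assumptions of this sketch.

```python
# Sketch of the masking scheme above: select 15% of positions, then apply
# the 80% [MASK] / 10% random token / 10% unchanged replacement rule.
import random

def mask_tokens(token_ids, mask_id, vocab_size, mlm_prob=0.15):
    """Return (masked_ids, labels); labels are -100 at positions not selected."""
    masked = list(token_ids)
    labels = [-100] * len(token_ids)
    for i, tok in enumerate(token_ids):
        if random.random() >= mlm_prob:
            continue                      # position not selected for prediction
        labels[i] = tok                   # the model must predict the original token
        r = random.random()
        if r < 0.8:
            masked[i] = mask_id           # 80%: replace with [MASK]
        elif r < 0.9:
            masked[i] = random.randrange(vocab_size)  # 10%: random token
        # remaining 10%: leave the token as is
    return masked, labels
```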
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
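The optimizer and schedule above can be approximated with standard PyTorch/Transformers utilities. The sketch below is an approximation only (the original training used the TensorFlow pretraining code) and simply constructs the optimizer and scheduler without running a training loop.

```python
# Sketch: Adam-style optimizer with lr 1e-4, betas (0.9, 0.999), weight decay 0.01,
# 10,000 warmup steps and linear decay over the two million training steps.
from torch.optim import AdamW
from transformers import BertModel, get_linear_schedule_with_warmup

model = BertModel.from_pretrained('multiberts-seed-2-400k')
optimizer = AdamW(model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=2_000_000)
```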
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-2"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-2-400k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-2",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 2 Checkpoint 400k (uncased)
Seed 2 intermediate checkpoint 400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 2 Checkpoint 400k (uncased)\nSeed 2 intermediate checkpoint 400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 2 Checkpoint 400k (uncased)\nSeed 2 intermediate checkpoint 400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 2 Checkpoint 400k (uncased)\nSeed 2 intermediate checkpoint 400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08265563100576401,
0.0007661140989512205,
-0.002290251897647977,
0.06933040916919708,
0.08305282890796661,
0.00405439967289567,
0.11671440303325653,
0.051511794328689575,
-0.02915235236287117,
0.02340291440486908,
0.09027090668678284,
0.030122380703687668,
0.04428979754447937,
0.0659746453166008,
0.09625252336263657,
-0.26012134552001953,
0.047854430973529816,
-0.06417910754680634,
0.05069154500961304,
0.07508274912834167,
0.1011982411146164,
-0.06977580487728119,
0.06256648153066635,
0.0377957820892334,
-0.0832233652472496,
-0.018347466364502907,
-0.012239853851497173,
-0.03478258475661278,
0.10096898674964905,
0.07063651084899902,
0.061834968626499176,
0.0018196534365415573,
0.0622103288769722,
-0.08816571533679962,
0.015833934769034386,
0.04374001547694206,
-0.00005530845373868942,
0.027614612132310867,
-0.007369637489318848,
0.016908977180719376,
0.10781286656856537,
0.03778531774878502,
0.07651837170124054,
0.03415404632687569,
-0.09506478905677795,
-0.11601075530052185,
-0.08158904314041138,
0.10495293140411377,
0.05431327223777771,
0.03721101954579353,
-0.005254642106592655,
0.07245776057243347,
-0.028533203527331352,
0.07224579155445099,
0.1044609546661377,
-0.26147815585136414,
-0.008966175839304924,
0.06517279893159866,
0.043272264301776886,
0.045747362077236176,
0.012661327607929707,
0.027278749272227287,
0.005149442702531815,
0.04180309921503067,
0.026852555572986603,
-0.02397330105304718,
0.11999253928661346,
-0.04576873034238815,
-0.1494070291519165,
-0.04269855469465256,
0.12081674486398697,
-0.006808768957853317,
-0.12610435485839844,
-0.09812559187412262,
-0.03208573907613754,
0.12124129384756088,
-0.0019187303259968758,
-0.019322289153933525,
-0.0037391092628240585,
0.01276062335819006,
0.02315441332757473,
-0.09710448980331421,
-0.08633317053318024,
-0.03067449852824211,
-0.03857948258519173,
0.13118475675582886,
0.045911721885204315,
0.05415770038962364,
-0.035797737538814545,
0.08763053268194199,
-0.11772429943084717,
-0.04003838822245598,
-0.05270785838365555,
-0.0818408727645874,
-0.018347784876823425,
0.010132323019206524,
-0.02690841257572174,
-0.08560798317193985,
-0.060271698981523514,
0.11563767492771149,
0.03905678167939186,
0.028436139225959778,
-0.0037736836820840836,
0.043242696672677994,
0.0727456733584404,
0.09793053567409515,
-0.03613657504320145,
0.050180137157440186,
0.035807281732559204,
-0.023531343787908554,
0.0601191446185112,
-0.05064410716295242,
-0.1029113382101059,
0.07335550338029861,
-0.000955408439040184,
0.03844376653432846,
0.02672664262354374,
0.030982820317149162,
-0.012772681191563606,
-0.07223367691040039,
0.16359752416610718,
-0.07761780917644501,
-0.006635148078203201,
-0.016135049983859062,
0.010952768847346306,
0.044324636459350586,
0.03302757441997528,
-0.004662009887397289,
-0.04531215876340866,
-0.008037094958126545,
-0.054070211946964264,
-0.02443154715001583,
-0.054451510310173035,
-0.1173231452703476,
-0.0001656506210565567,
-0.03374016284942627,
-0.03309316188097,
-0.14188411831855774,
-0.21402014791965485,
-0.016972368583083153,
0.06516490876674652,
-0.0028019724413752556,
-0.01153561845421791,
0.025570614263415337,
0.0183993149548769,
-0.020438574254512787,
0.011107425205409527,
-0.04619944468140602,
0.0009529022499918938,
-0.004196383059024811,
-0.03585035353899002,
0.057984985411167145,
-0.04334118962287903,
0.021793775260448456,
-0.06906221061944962,
0.022821323946118355,
-0.21255388855934143,
0.08997645974159241,
-0.029473282396793365,
-0.0035889744758605957,
-0.03807222098112106,
-0.04293005168437958,
0.0067007411271333694,
0.0443534255027771,
-0.008885911665856838,
0.115812286734581,
-0.13932235538959503,
-0.04962143301963806,
0.18228474259376526,
-0.16098323464393616,
0.0002682991325855255,
0.10043619573116302,
-0.04929836094379425,
0.06213773787021637,
0.13278648257255554,
0.10066838562488556,
0.08333663642406464,
-0.07185012847185135,
0.014029274694621563,
0.0633949488401413,
-0.07045408338308334,
0.058094099164009094,
0.09159223735332489,
-0.025575971230864525,
-0.1376040130853653,
0.02801881730556488,
-0.07241147756576538,
-0.010513776913285255,
-0.025716038420796394,
-0.021990913897752762,
0.006362216547131538,
-0.038327399641275406,
0.03139939531683922,
0.0030172294937074184,
0.019259709864854813,
-0.03914181888103485,
-0.08177521079778671,
0.029153291136026382,
0.07609422504901886,
-0.0692502111196518,
0.04391670227050781,
-0.07061081379652023,
0.05925172194838524,
-0.07551179081201553,
-0.004830957390367985,
-0.16611528396606445,
-0.023212788626551628,
0.04594070464372635,
-0.05392229184508324,
0.05135603994131088,
0.09212793409824371,
0.0034364527091383934,
0.12405139207839966,
-0.04008806496858597,
0.0040999893099069595,
-0.00900830328464508,
-0.01095436792820692,
-0.05310896784067154,
-0.12026340514421463,
-0.08227944374084473,
-0.06714937090873718,
0.1020142212510109,
-0.07201465219259262,
0.0289092306047678,
-0.0704442635178566,
-0.020199084654450417,
-0.007034217938780785,
-0.05885354429483414,
-0.007071623578667641,
0.007695098407566547,
-0.028216732665896416,
-0.045710086822509766,
0.047419652342796326,
0.05076923966407776,
-0.06246255338191986,
0.07663564383983612,
-0.10304483026266098,
-0.06119439750909805,
0.05380933731794357,
0.015788860619068146,
-0.07936286926269531,
0.09182433784008026,
-0.018684780225157738,
-0.01343125756829977,
-0.060183048248291016,
-0.04487113282084465,
0.1951417326927185,
-0.023011106997728348,
0.1003788560628891,
-0.09117003530263901,
0.002456342801451683,
0.026815801858901978,
-0.048731133341789246,
-0.01589883491396904,
0.05719443038105965,
0.04579838365316391,
-0.19019225239753723,
0.013337966054677963,
0.054062336683273315,
0.0779215395450592,
0.1088295727968216,
0.028277382254600525,
-0.02404935285449028,
-0.04901203513145447,
-0.01193317025899887,
0.005881310440599918,
0.05603963881731033,
-0.020149514079093933,
-0.005987310316413641,
0.03290226310491562,
0.058582693338394165,
0.01641814038157463,
-0.08369919657707214,
0.0361056886613369,
0.06687585264444351,
-0.01796911284327507,
-0.0432705320417881,
-0.026353640481829643,
-0.05820830166339874,
0.06439849734306335,
0.05467643588781357,
0.039701804518699646,
0.026383910328149796,
-0.015146318823099136,
-0.13709232211112976,
0.18495096266269684,
-0.11490772664546967,
-0.25731900334358215,
-0.1105635017156601,
-0.05163978040218353,
-0.022390015423297882,
0.04272056370973587,
0.05780409276485443,
-0.027186986058950424,
-0.040943630039691925,
-0.11437663435935974,
0.06465936452150345,
-0.06388687342405319,
-0.03066852316260338,
-0.010020231828093529,
-0.050547100603580475,
-0.01744610071182251,
-0.12737423181533813,
-0.011887187138199806,
-0.030131746083498,
-0.07652625441551208,
0.00666665006428957,
-0.03409324586391449,
0.026537161320447922,
0.13622580468654633,
0.033692363649606705,
-0.02035420760512352,
-0.016474373638629913,
0.19143623113632202,
0.010295677930116653,
0.06287021934986115,
0.11262919008731842,
-0.03139877691864967,
0.05545743182301521,
0.04670446366071701,
0.025030793622136116,
-0.04671408236026764,
0.011443862691521645,
-0.018451664596796036,
-0.12089917808771133,
-0.1731567084789276,
-0.07082580775022507,
-0.0017972011119127274,
0.005487258546054363,
0.018587641417980194,
0.03555680438876152,
0.018536943942308426,
0.039806801825761795,
-0.029241234064102173,
0.029630443081259727,
-0.009112540632486343,
0.08034998923540115,
0.024225357919931412,
-0.07387135177850723,
0.09186021238565445,
-0.06083989515900612,
0.016099847853183746,
0.11067821830511093,
-0.060011304914951324,
0.18534356355667114,
0.025232333689928055,
0.0573810413479805,
0.0984698235988617,
0.023509327322244644,
0.05553480610251427,
0.088518425822258,
-0.044447436928749084,
0.009093032218515873,
-0.06161730736494064,
-0.05148797482252121,
-0.0379817932844162,
0.05291888117790222,
0.032113537192344666,
0.018714196979999542,
-0.11918620020151138,
0.020490478724241257,
-0.0020125810988247395,
0.1368047297000885,
0.05052654817700386,
-0.11588051915168762,
-0.1234421581029892,
0.038030873984098434,
-0.04544274881482124,
-0.06367979943752289,
0.029348280280828476,
0.06128917261958122,
-0.1558460295200348,
0.04388824850320816,
-0.005891798064112663,
0.06606095284223557,
-0.0934763252735138,
0.014828424900770187,
-0.048561640083789825,
-0.0033633653074502945,
0.006460403557866812,
0.07084999978542328,
-0.13022567331790924,
0.1044524759054184,
0.02171679213643074,
0.048124976456165314,
-0.07985372841358185,
0.016059957444667816,
-0.01050418708473444,
0.10709354281425476,
0.1153612732887268,
0.04494943469762802,
-0.056246280670166016,
-0.020490188151597977,
-0.047894950956106186,
0.02060006558895111,
0.060565344989299774,
-0.08095452934503555,
0.05958995223045349,
0.0071908715181052685,
0.006925555411726236,
-0.0235678069293499,
0.016230817884206772,
-0.13263189792633057,
-0.1200694739818573,
0.06250236183404922,
-0.0798768475651741,
-0.09463435411453247,
-0.05825134366750717,
-0.06190216913819313,
-0.04536090046167374,
0.21377229690551758,
-0.11546814441680908,
-0.09004847705364227,
-0.09687012434005737,
-0.014662962406873703,
0.04574238508939743,
-0.06685498356819153,
0.04460062086582184,
-0.03659902140498161,
0.09378045797348022,
-0.047408461570739746,
-0.1109335869550705,
0.03502488136291504,
-0.11078186333179474,
-0.11662869900465012,
-0.04460586607456207,
0.10807248950004578,
0.11375149339437485,
0.03997078537940979,
0.014485794119536877,
0.011991636827588081,
0.0010295994579792023,
-0.11554441601037979,
0.016152093186974525,
0.13337358832359314,
-0.001747986301779747,
0.07256347686052322,
-0.06210583448410034,
0.029415272176265717,
-0.016402842476963997,
0.00041868723928928375,
0.13335490226745605,
0.1873265653848648,
-0.06609216332435608,
0.17851564288139343,
0.1994190216064453,
-0.10578875243663788,
-0.19173380732536316,
-0.056515488773584366,
-0.00015292037278413773,
0.046227920800447464,
0.051776643842458725,
-0.1850295066833496,
0.08801424503326416,
0.0343603789806366,
-0.034135643392801285,
0.01884825900197029,
-0.23549094796180725,
-0.11163164675235748,
0.08595167100429535,
0.052782610058784485,
0.188846617937088,
-0.07858965545892715,
-0.04091924428939819,
-0.017207276076078415,
-0.040659092366695404,
0.04513102024793625,
-0.03732013702392578,
0.0897378921508789,
0.007899738848209381,
-0.026223480701446533,
0.0038608163595199585,
-0.02917865663766861,
0.09885302931070328,
0.039640240371227264,
0.02069147676229477,
-0.07042612135410309,
-0.008433608338236809,
0.10911701619625092,
-0.03925208002328873,
0.09708043932914734,
0.04537278413772583,
0.07531821727752686,
-0.09549938142299652,
-0.05825337395071983,
-0.07389435172080994,
0.042999789118766785,
-0.041129548102617264,
-0.05546600744128227,
-0.06471081078052521,
0.05671418085694313,
0.03683304414153099,
0.01101110689342022,
0.0004957616329193115,
-0.03504735603928566,
0.04351165145635605,
0.0906616672873497,
0.08122986555099487,
-0.028583161532878876,
-0.07304851710796356,
-0.05137690156698227,
-0.049064282327890396,
0.06776553392410278,
-0.09194841980934143,
0.022502310574054718,
0.026409341022372246,
0.010467395186424255,
0.09034313261508942,
0.03358273208141327,
-0.1378564089536667,
0.014020170085132122,
0.03268060088157654,
-0.1218532845377922,
-0.11073959618806839,
-0.019417619332671165,
0.029618311673402786,
-0.03728665038943291,
0.052685774862766266,
0.14917060732841492,
-0.03521262854337692,
-0.03357183560729027,
-0.04800860956311226,
0.03871595486998558,
-0.023528534919023514,
0.047154247760772705,
0.06327857822179794,
0.03025020658969879,
-0.07135820388793945,
0.07424411922693253,
0.03932151198387146,
-0.032663945108652115,
0.04126856103539467,
0.03869685158133507,
-0.0930546224117279,
-0.07759668678045273,
-0.05800419673323631,
0.08925832808017731,
-0.021487420424818993,
-0.04542005434632301,
0.0017535407096147537,
-0.0829487144947052,
0.06477847695350647,
0.07193499803543091,
0.04800722002983093,
0.0380394272506237,
-0.08699335157871246,
0.016481909900903702,
-0.0519244447350502,
0.034983888268470764,
-0.027289098128676414,
-0.006217682734131813,
-0.055048033595085144,
0.06926637887954712,
0.06307445466518402,
0.09954769909381866,
-0.0350184291601181,
-0.07592494785785675,
-0.08271747082471848,
-0.013747623190283775,
-0.06531340628862381,
-0.030977919697761536,
-0.07493741065263748,
-0.007313260808587074,
0.0018280583899468184,
-0.005801571533083916,
0.02340332418680191,
0.034954216331243515,
-0.04387931898236275,
-0.0187777578830719,
-0.039261624217033386,
0.03695604205131531,
-0.06208396703004837,
0.004807032644748688,
0.017613748088479042,
-0.03682316094636917,
0.09302019327878952,
0.03886694088578224,
-0.01052926853299141,
0.04699953645467758,
-0.021583227440714836,
0.035153284668922424,
-0.021007219329476357,
0.0022051287814974785,
-0.024507969617843628,
-0.11089616268873215,
-0.005746457725763321,
0.003657415509223938,
-0.025006132200360298,
0.009941954165697098,
0.0632404237985611,
-0.07236674427986145,
0.08647448569536209,
0.04323700815439224,
-0.03239445015788078,
-0.07121039927005768,
0.03904566913843155,
-0.01683136075735092,
0.031002676114439964,
0.06931710243225098,
-0.03406747058033943,
0.0545172244310379,
-0.0990467369556427,
-0.027604535222053528,
0.0041360072791576385,
-0.0025881007313728333,
-0.011174522340297699,
-0.05561981350183487,
-0.004069112241268158,
0.0078534334897995,
0.17052596807479858,
-0.022150088101625443,
0.0339139848947525,
0.013738775625824928,
0.00810775626450777,
0.048055630177259445,
-0.013408137485384941,
0.07193607091903687,
-0.0050418321043252945,
-0.025275224819779396,
-0.015918562188744545,
0.03682317957282066,
0.004599185660481453,
0.0013410579413175583,
0.14173972606658936,
0.04514855891466141,
0.09042807668447495,
0.07541941106319427,
0.015674980357289314,
0.017566710710525513,
-0.13530436158180237,
-0.09063687920570374,
0.005710993893444538,
0.0547184944152832,
-0.017858142033219337,
0.01503673940896988,
0.09012383222579956,
-0.08941854536533356,
0.0696154534816742,
0.04962223023176193,
-0.04871704801917076,
-0.1280786395072937,
-0.1908855438232422,
-0.027086952701210976,
-0.028113076463341713,
-0.012455067597329617,
-0.08880282938480377,
0.018223173916339874,
0.08998992294073105,
0.02122112363576889,
-0.009345857426524162,
0.09039182215929031,
-0.11015462875366211,
-0.031139377504587173,
0.041562363505363464,
-0.02557738684117794,
0.01667942851781845,
0.050878480076789856,
0.021177643910050392,
-0.005228683352470398,
0.04577576369047165,
0.04162350296974182,
0.04536038637161255,
0.027308842167258263,
0.04872201383113861,
-0.02481674775481224,
-0.07482565194368362,
-0.03391322121024132,
-0.003920715302228928,
0.05517055094242096,
0.13206055760383606,
0.02496076375246048,
-0.0681789368391037,
0.006456979550421238,
0.11087345331907272,
-0.03292515501379967,
-0.05086411535739899,
-0.10809113830327988,
0.24417418241500854,
0.019500916823744774,
0.0033088463824242353,
-0.0025990898720920086,
-0.04698115587234497,
0.003061797469854355,
0.20899958908557892,
0.2250877320766449,
0.0015027173794806004,
-0.00961201824247837,
0.009134333580732346,
-0.011473624035716057,
0.037931427359580994,
0.1456628143787384,
0.0023365840315818787,
0.25251325964927673,
-0.04783150553703308,
0.0406825914978981,
-0.04232387617230415,
-0.03866276890039444,
-0.10173416137695312,
0.07438231259584427,
-0.014575932174921036,
0.007820388302206993,
-0.0332067608833313,
0.07212613523006439,
-0.036941252648830414,
-0.17623135447502136,
0.0028339391574263573,
-0.0030161282047629356,
-0.06225781515240669,
0.01484142430126667,
-0.00037869904190301895,
0.01929890364408493,
0.08415834605693817,
-0.019099336117506027,
-0.0067554982379078865,
0.13445839285850525,
0.01898261345922947,
-0.097702257335186,
-0.057092130184173584,
0.11599728465080261,
0.012902378104627132,
0.14159762859344482,
0.01307735126465559,
0.08222894370555878,
0.08521100878715515,
0.02165946364402771,
-0.09355512261390686,
0.04125043377280235,
-0.019303783774375916,
-0.029512912034988403,
0.0061392453499138355,
0.11133259534835815,
-0.010089528746902943,
0.060324206948280334,
0.029384182766079903,
-0.09229420125484467,
0.05929955095052719,
0.010560832917690277,
-0.0364338755607605,
-0.07968323677778244,
0.08217965811491013,
-0.09117467701435089,
0.1556316465139389,
0.12095887959003448,
-0.014643006958067417,
-0.04453521966934204,
-0.027277881279587746,
0.01850797049701214,
0.0010490496642887592,
0.05693111941218376,
-0.025227230042219162,
-0.133259117603302,
0.017391670495271683,
-0.0875474214553833,
0.027872055768966675,
-0.2467624843120575,
-0.09183470904827118,
0.027625789865851402,
-0.017897289246320724,
-0.017820432782173157,
0.05173294618725777,
0.04475211724638939,
0.02346058562397957,
-0.035278432071208954,
0.015418888069689274,
-0.0379679799079895,
0.0585811585187912,
-0.1092328280210495,
-0.09327274560928345
] |
null | null | transformers | # MultiBERTs Seed 2 Checkpoint 40k (uncased)
Seed 2 intermediate checkpoint 40k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multiberts-seed-2). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# Full Hub id of this checkpoint, as listed in this card's metadata.
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-2-40k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-2-40k")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')  # PyTorch tensors
output = model(**encoded_input)                        # per-token hidden states + pooled output
```
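The `output` returned above exposes the usual `transformers` fields: `last_hidden_state` holds one vector per input token and `pooler_output` a single `[CLS]`-based summary of the sequence. A minimal continuation of the snippet above (the 768 dimension assumes the BERT-base-sized encoder these checkpoints use):

```python
token_features = output.last_hidden_state   # shape: (1, sequence_length, 768)
pooled_summary = output.pooler_output       # shape: (1, 768)
print(token_features.shape, pooled_summary.shape)
```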
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a small code sketch appears after the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
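The sketch below is a minimal illustration of that 15% / 80-10-10 rule over token ids; it is not the original training code, and it ignores special tokens and whole-word grouping:

```python
import random

def mask_tokens(token_ids, mask_id, vocab_size, mlm_prob=0.15):
    """Apply the 15% / 80-10-10 masking scheme and return (inputs, labels)."""
    inputs, labels = list(token_ids), [-100] * len(token_ids)  # -100 = ignored by the MLM loss
    for i, token in enumerate(token_ids):
        if random.random() >= mlm_prob:
            continue                                  # ~85% of tokens are left untouched and unpredicted
        labels[i] = token                             # the model must predict the original token here
        roll = random.random()
        if roll < 0.8:
            inputs[i] = mask_id                       # 80%: replace with [MASK]
        elif roll < 0.9:
            inputs[i] = random.randrange(vocab_size)  # 10%: replace with a random token
        # remaining 10%: keep the original token as is
    return inputs, labels
```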
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
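A rough PyTorch sketch of that optimization setup, using `AdamW` as a stand-in for Adam with decoupled weight decay and the linear warmup/decay schedule helper from `transformers` (an approximation for illustration, not the original training script):

```python
import torch
from transformers import BertModel, get_linear_schedule_with_warmup

model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-2-40k")
optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01
)
# 10,000 warmup steps, then linear decay over the full two million training steps.
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=2_000_000
)
```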
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-2"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-2-40k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-2",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 2 Checkpoint 40k (uncased)
Seed 2 intermediate checkpoint 40k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 2 Checkpoint 40k (uncased)\nSeed 2 intermediate checkpoint 40k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 2 Checkpoint 40k (uncased)\nSeed 2 intermediate checkpoint 40k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 2 Checkpoint 40k (uncased)\nSeed 2 intermediate checkpoint 40k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08347861468791962,
0.004104645922780037,
-0.0023230782244354486,
0.06733273714780807,
0.08223357051610947,
0.0038099377416074276,
0.1185104101896286,
0.05188224837183952,
-0.026886671781539917,
0.025383254513144493,
0.08986827731132507,
0.029672779142856598,
0.04371889308094978,
0.06429333239793777,
0.09690916538238525,
-0.2604319453239441,
0.04789216071367264,
-0.06368383765220642,
0.0524841770529747,
0.07488272339105606,
0.10122432559728622,
-0.07050204277038574,
0.06198323518037796,
0.038465335965156555,
-0.08279675245285034,
-0.019184494391083717,
-0.012689250521361828,
-0.03349405527114868,
0.10097526013851166,
0.07043039053678513,
0.06121186167001724,
0.002450656145811081,
0.060959842056035995,
-0.09013016521930695,
0.01571570709347725,
0.044949665665626526,
-0.0014444394037127495,
0.027818594127893448,
-0.005878463387489319,
0.017059486359357834,
0.10966451466083527,
0.03743908926844597,
0.07606693357229233,
0.034679338335990906,
-0.09450023621320724,
-0.1168668270111084,
-0.0811903178691864,
0.10384351015090942,
0.05343744903802872,
0.037557702511548996,
-0.005929687991738319,
0.07571662217378616,
-0.02858511172235012,
0.07287430763244629,
0.10623843967914581,
-0.26142221689224243,
-0.0074747828766703606,
0.06591477990150452,
0.04554600268602371,
0.045115821063518524,
0.012440279126167297,
0.026714513078331947,
0.005158349871635437,
0.04138040542602539,
0.02495475858449936,
-0.02377517893910408,
0.12650805711746216,
-0.045070894062519073,
-0.15009628236293793,
-0.04327331483364105,
0.11759094893932343,
-0.007390586659312248,
-0.1254166066646576,
-0.1000538021326065,
-0.0338166207075119,
0.12278984487056732,
-0.0031240740790963173,
-0.018078869208693504,
-0.003127992618829012,
0.013665780425071716,
0.02484261430799961,
-0.0966494232416153,
-0.08651965111494064,
-0.03034057840704918,
-0.03722076863050461,
0.12847132980823517,
0.04687251150608063,
0.05295918136835098,
-0.036712419241666794,
0.08718176186084747,
-0.12151315063238144,
-0.0403042770922184,
-0.05295467749238014,
-0.08017312735319138,
-0.017500150948762894,
0.010054380632936954,
-0.02863951399922371,
-0.08668026328086853,
-0.05949755758047104,
0.11619897186756134,
0.03616829589009285,
0.02834295481443405,
-0.005676797591149807,
0.04373472183942795,
0.07381154596805573,
0.09806285798549652,
-0.03656531497836113,
0.04995693266391754,
0.03437240794301033,
-0.024360228329896927,
0.06036369875073433,
-0.0511225089430809,
-0.10341820865869522,
0.07309585809707642,
-0.00012067146599292755,
0.03761233389377594,
0.02743634209036827,
0.03092678263783455,
-0.012535098008811474,
-0.0726337730884552,
0.1647190898656845,
-0.07695675641298294,
-0.0065034786239266396,
-0.01474097091704607,
0.011129185557365417,
0.04430193454027176,
0.03364469110965729,
-0.003840826917439699,
-0.04647539556026459,
-0.009781678207218647,
-0.054494019597768784,
-0.02409418858587742,
-0.05413255840539932,
-0.1174507588148117,
0.00048534292727708817,
-0.03686089068651199,
-0.03272943198680878,
-0.14050814509391785,
-0.21489641070365906,
-0.017787475138902664,
0.06351341307163239,
-0.003033827058970928,
-0.011192373931407928,
0.024791309610009193,
0.019566519185900688,
-0.020158682018518448,
0.011649391613900661,
-0.04805218428373337,
0.0007629245519638062,
-0.004182411357760429,
-0.03531026095151901,
0.057506561279296875,
-0.042519018054008484,
0.022236444056034088,
-0.0691123753786087,
0.02299952134490013,
-0.21105462312698364,
0.09062625467777252,
-0.02904311567544937,
-0.0034873588010668755,
-0.03920230641961098,
-0.04266081005334854,
0.010113533586263657,
0.044916898012161255,
-0.010189767926931381,
0.1157093346118927,
-0.13868311047554016,
-0.05108509957790375,
0.18623009324073792,
-0.16076788306236267,
0.0016747154295444489,
0.100735604763031,
-0.04915144294500351,
0.06002067029476166,
0.1328536868095398,
0.10093118250370026,
0.08364703506231308,
-0.0725642740726471,
0.014138647355139256,
0.06344802677631378,
-0.06823819875717163,
0.058229122310876846,
0.09271985292434692,
-0.0263187438249588,
-0.13695907592773438,
0.02696865051984787,
-0.07578907161951065,
-0.010538497939705849,
-0.025398429483175278,
-0.021355152130126953,
0.006226370111107826,
-0.037449512630701065,
0.03277958929538727,
0.004510937724262476,
0.01913115754723549,
-0.03853181004524231,
-0.08290357887744904,
0.029843175783753395,
0.07731077075004578,
-0.07009768486022949,
0.04364117234945297,
-0.0719834566116333,
0.05951589345932007,
-0.07554692029953003,
-0.005312792025506496,
-0.16743171215057373,
-0.022686872631311417,
0.04633999243378639,
-0.053932372480630875,
0.05121184140443802,
0.09541157633066177,
0.004070187918841839,
0.12480612099170685,
-0.038886770606040955,
0.0034824698232114315,
-0.00934373028576374,
-0.011794907972216606,
-0.05302845314145088,
-0.12242095172405243,
-0.08146795630455017,
-0.06813973188400269,
0.10172876715660095,
-0.07471244782209396,
0.028874704614281654,
-0.06921850889921188,
-0.019189193844795227,
-0.0070198774337768555,
-0.058990366756916046,
-0.006189290434122086,
0.006937011145055294,
-0.029361391440033913,
-0.045515626668930054,
0.04780666530132294,
0.0512293241918087,
-0.0632324144244194,
0.07899823784828186,
-0.10557311028242111,
-0.06330182403326035,
0.05345430225133896,
0.014777335338294506,
-0.07754576206207275,
0.09021122753620148,
-0.0193860474973917,
-0.013819441199302673,
-0.0588625967502594,
-0.04362095519900322,
0.1948057860136032,
-0.02329552173614502,
0.10231225192546844,
-0.09089235961437225,
0.0008528642938472331,
0.02723040245473385,
-0.049359485507011414,
-0.01647283509373665,
0.05775955319404602,
0.04588160663843155,
-0.1890685260295868,
0.014331266283988953,
0.0542270690202713,
0.07770760357379913,
0.1103941947221756,
0.027715299278497696,
-0.0254474189132452,
-0.04943598061800003,
-0.01100099552422762,
0.005984794348478317,
0.05501551181077957,
-0.023937836289405823,
-0.006803354248404503,
0.03320795297622681,
0.05756072700023651,
0.017147807404398918,
-0.08376847207546234,
0.036477230489254,
0.06708188354969025,
-0.01752397045493126,
-0.04438834637403488,
-0.026511013507843018,
-0.05873633548617363,
0.06387746334075928,
0.05384012311697006,
0.03988906741142273,
0.025546744465827942,
-0.015642061829566956,
-0.13770759105682373,
0.18539074063301086,
-0.11428168416023254,
-0.2563905715942383,
-0.1108156144618988,
-0.05527063459157944,
-0.022300351411104202,
0.04305458441376686,
0.05739475041627884,
-0.02844008430838585,
-0.04088973626494408,
-0.11382386088371277,
0.06476649641990662,
-0.06523267924785614,
-0.030328035354614258,
-0.010760655626654625,
-0.05021904408931732,
-0.01681354269385338,
-0.1269620656967163,
-0.01162053644657135,
-0.02949831262230873,
-0.07639419287443161,
0.006990347057580948,
-0.03415821120142937,
0.025321075692772865,
0.13681961596012115,
0.03513990342617035,
-0.020578939467668533,
-0.01612410880625248,
0.1910247802734375,
0.009625937789678574,
0.061431560665369034,
0.11337469518184662,
-0.03229019418358803,
0.056148190051317215,
0.04511713236570358,
0.024002104997634888,
-0.04655078798532486,
0.010464372113347054,
-0.017996927723288536,
-0.11951982975006104,
-0.1744382381439209,
-0.07148715108633041,
-0.0012360280379652977,
0.006329693365842104,
0.019920464605093002,
0.035475581884384155,
0.023449886590242386,
0.03886013105511665,
-0.03059755451977253,
0.03179936110973358,
-0.011039596050977707,
0.08043187856674194,
0.023834850639104843,
-0.0745222344994545,
0.09093161672353745,
-0.0607074536383152,
0.016530584543943405,
0.11145897954702377,
-0.05893230065703392,
0.18444578349590302,
0.026027632877230644,
0.06273608654737473,
0.09863609075546265,
0.022676263004541397,
0.055631402879953384,
0.0862271636724472,
-0.045250192284584045,
0.008426797576248646,
-0.062008388340473175,
-0.05194874852895737,
-0.03723834827542305,
0.05386102944612503,
0.03408645838499069,
0.01654394343495369,
-0.11894354969263077,
0.020309697836637497,
-0.0022246558219194412,
0.13726428151130676,
0.050737496465444565,
-0.11708575487136841,
-0.12365888059139252,
0.03775203973054886,
-0.04558023065328598,
-0.06361506134271622,
0.029049236327409744,
0.06163356825709343,
-0.15456362068653107,
0.04487964138388634,
-0.006587515585124493,
0.0660286471247673,
-0.09413716197013855,
0.015065515413880348,
-0.04959184303879738,
-0.003544814884662628,
0.007228113245218992,
0.07142311334609985,
-0.1315028965473175,
0.10375441610813141,
0.022173509001731873,
0.046201784163713455,
-0.08021894842386246,
0.016738858073949814,
-0.01090246345847845,
0.10960277169942856,
0.11636725068092346,
0.04491730034351349,
-0.05685045197606087,
-0.01819036900997162,
-0.04843439161777496,
0.019571490585803986,
0.06061073765158653,
-0.08224125951528549,
0.06059752404689789,
0.005995048210024834,
0.006861619185656309,
-0.023068023845553398,
0.012880345806479454,
-0.13008974492549896,
-0.11958444118499756,
0.06375427544116974,
-0.07963214069604874,
-0.09850680828094482,
-0.05894149839878082,
-0.061833031475543976,
-0.0458330437541008,
0.2141808271408081,
-0.1193678230047226,
-0.08896417915821075,
-0.0974247083067894,
-0.012271109968423843,
0.04416229575872421,
-0.0674808993935585,
0.04420217499136925,
-0.03713400289416313,
0.09699054062366486,
-0.047768525779247284,
-0.11025452613830566,
0.03595083951950073,
-0.11199630796909332,
-0.11759427189826965,
-0.04518555477261543,
0.1094137579202652,
0.11422581225633621,
0.0403062142431736,
0.013883298262953758,
0.011890835128724575,
-0.0006656274199485779,
-0.11411061882972717,
0.016952326521277428,
0.1355825513601303,
-0.000009346753358840942,
0.07076653093099594,
-0.05977274477481842,
0.030435923486948013,
-0.015536300837993622,
0.000222647562623024,
0.13475516438484192,
0.18715369701385498,
-0.06588339805603027,
0.17928892374038696,
0.1998494267463684,
-0.10564333200454712,
-0.19197876751422882,
-0.05649787560105324,
-0.00037781987339258194,
0.045937784016132355,
0.05218557268381119,
-0.18684859573841095,
0.08638141304254532,
0.032617829740047455,
-0.03406859189271927,
0.018061071634292603,
-0.23751398921012878,
-0.11093710362911224,
0.08706850558519363,
0.05183292552828789,
0.18798905611038208,
-0.079247385263443,
-0.042604707181453705,
-0.016910003498196602,
-0.03862668573856354,
0.04505962133407593,
-0.03984927386045456,
0.08891750872135162,
0.008734151721000671,
-0.025860989466309547,
0.004645931534469128,
-0.02995854988694191,
0.09900201857089996,
0.03977860510349274,
0.019596729427576065,
-0.07071712613105774,
-0.007855381816625595,
0.1098117083311081,
-0.03959400951862335,
0.09601879119873047,
0.04369216784834862,
0.07520025968551636,
-0.09690284729003906,
-0.05877051502466202,
-0.07363423705101013,
0.04310569539666176,
-0.04099102318286896,
-0.05518024042248726,
-0.06412310153245926,
0.0565231516957283,
0.03750241920351982,
0.011663522571325302,
0.0017891060560941696,
-0.035466574132442474,
0.04255003482103348,
0.09207598865032196,
0.08173069357872009,
-0.025261450558900833,
-0.07523366808891296,
-0.05255887657403946,
-0.04784702509641647,
0.06833125650882721,
-0.09554174542427063,
0.022538509219884872,
0.02545752562582493,
0.011468661017715931,
0.09079742431640625,
0.03208939731121063,
-0.13905231654644012,
0.01380407065153122,
0.032000042498111725,
-0.12232888489961624,
-0.11543656885623932,
-0.01944238319993019,
0.032214194536209106,
-0.03622741624712944,
0.05364617332816124,
0.14990076422691345,
-0.03470541536808014,
-0.032753076404333115,
-0.047791942954063416,
0.038462307304143906,
-0.02271813154220581,
0.04652827978134155,
0.06371022760868073,
0.030652625486254692,
-0.07183186709880829,
0.0744219422340393,
0.039686933159828186,
-0.03317604959011078,
0.0419764518737793,
0.03839076682925224,
-0.09278590977191925,
-0.07825129479169846,
-0.058504290878772736,
0.09102912247180939,
-0.021942434832453728,
-0.04719221591949463,
0.00032965652644634247,
-0.08005967736244202,
0.06571132689714432,
0.07204325497150421,
0.04809762164950371,
0.03852095454931259,
-0.08735702931880951,
0.01643870398402214,
-0.05264021456241608,
0.03436822444200516,
-0.029325449839234352,
-0.006133800372481346,
-0.055659905076026917,
0.07169491797685623,
0.06332932412624359,
0.10023687034845352,
-0.03461754322052002,
-0.07594332844018936,
-0.08142880350351334,
-0.013922357931733131,
-0.06542378664016724,
-0.029902800917625427,
-0.07546092569828033,
-0.007471980527043343,
0.0014431383460760117,
-0.005490807816386223,
0.02402305416762829,
0.035309042781591415,
-0.043587952852249146,
-0.018552815541625023,
-0.03963838145136833,
0.03747821971774101,
-0.06309379637241364,
0.006342431530356407,
0.017490245401859283,
-0.036491282284259796,
0.09471969306468964,
0.038234733045101166,
-0.010652298107743263,
0.04708985611796379,
-0.021288692951202393,
0.03559372201561928,
-0.020194880664348602,
0.001069141086190939,
-0.024833720177412033,
-0.11012496799230576,
-0.00587698258459568,
0.0025111455470323563,
-0.025772633031010628,
0.008996276184916496,
0.06304290890693665,
-0.07219590246677399,
0.08773202449083328,
0.044258639216423035,
-0.03272230923175812,
-0.07182573527097702,
0.03916233032941818,
-0.019283883273601532,
0.030948562547564507,
0.068822480738163,
-0.03369554504752159,
0.0553462952375412,
-0.09855704009532928,
-0.027519971132278442,
0.004526226781308651,
-0.001629980280995369,
-0.010969826951622963,
-0.05568573623895645,
-0.00474143959581852,
0.006804723292589188,
0.17152805626392365,
-0.023126285523176193,
0.03546079248189926,
0.013585100881755352,
0.008819282054901123,
0.04874734207987785,
-0.01385992020368576,
0.0722234845161438,
-0.005317231640219688,
-0.024589993059635162,
-0.01641952246427536,
0.036805443465709686,
0.004893345758318901,
0.0010220613330602646,
0.1399737447500229,
0.04587731137871742,
0.09149567782878876,
0.07575628161430359,
0.016160672530531883,
0.017260197550058365,
-0.13969716429710388,
-0.0864141508936882,
0.005227906629443169,
0.0556943453848362,
-0.01758890226483345,
0.01985524222254753,
0.09206347167491913,
-0.08989152312278748,
0.06970640271902084,
0.05027466639876366,
-0.04779556021094322,
-0.12796440720558167,
-0.1917944997549057,
-0.027037305757403374,
-0.029538484290242195,
-0.013119440525770187,
-0.08829611539840698,
0.018670029938220978,
0.08816215395927429,
0.021636702120304108,
-0.010560266673564911,
0.09039418399333954,
-0.10965791344642639,
-0.031998854130506516,
0.041548460721969604,
-0.0253866259008646,
0.01655762456357479,
0.04984039068222046,
0.021784085780382156,
-0.0032937321811914444,
0.046221617609262466,
0.04195423051714897,
0.04474356025457382,
0.029761046171188354,
0.05074473097920418,
-0.025694098323583603,
-0.07571760565042496,
-0.033464256674051285,
-0.003832958871498704,
0.0540393590927124,
0.1298767775297165,
0.024899622425436974,
-0.06752331554889679,
0.006109341979026794,
0.10910272598266602,
-0.03266606107354164,
-0.04894653707742691,
-0.10846678912639618,
0.24203485250473022,
0.019296646118164062,
0.0034734713844954967,
-0.0022168862633407116,
-0.04783927649259567,
0.0036752447485923767,
0.2072293907403946,
0.22330960631370544,
0.001496736193075776,
-0.009243213571608067,
0.010512562468647957,
-0.011300882324576378,
0.03795444592833519,
0.1439715325832367,
0.0017196610569953918,
0.2530687153339386,
-0.047262199223041534,
0.03756755590438843,
-0.04177634418010712,
-0.03949553146958351,
-0.09986814856529236,
0.07512228190898895,
-0.013460544869303703,
0.007507890462875366,
-0.03299983963370323,
0.07232404500246048,
-0.03573703020811081,
-0.17707976698875427,
0.0025733569636940956,
-0.002958849538117647,
-0.06209545210003853,
0.015067865140736103,
0.0015115048736333847,
0.02021130733191967,
0.08389026671648026,
-0.018562525510787964,
-0.00808840524405241,
0.13578327000141144,
0.01889624074101448,
-0.0995108112692833,
-0.054861538112163544,
0.11529576778411865,
0.01485626120120287,
0.14263595640659332,
0.01239587739109993,
0.08134196698665619,
0.08567413687705994,
0.021261684596538544,
-0.09338782727718353,
0.04015031456947327,
-0.01825542375445366,
-0.029760019853711128,
0.005123622715473175,
0.11214970797300339,
-0.010151352733373642,
0.059011541306972504,
0.030440086498856544,
-0.09315948188304901,
0.05984308570623398,
0.009780734777450562,
-0.038026113063097,
-0.080259308218956,
0.08291637152433395,
-0.09097699820995331,
0.15511828660964966,
0.12081596255302429,
-0.01513791736215353,
-0.04542441666126251,
-0.02773195505142212,
0.019193049520254135,
0.00046226708218455315,
0.05849568545818329,
-0.024888623505830765,
-0.1320945918560028,
0.018439924344420433,
-0.08654146641492844,
0.029770541936159134,
-0.24750596284866333,
-0.09101985394954681,
0.029263697564601898,
-0.017326131463050842,
-0.018759731203317642,
0.05142757296562195,
0.043864645063877106,
0.023131584748625755,
-0.03522220999002457,
0.015353342518210411,
-0.037617333233356476,
0.05808586999773979,
-0.10877902805805206,
-0.09375330805778503
] |
null | null | transformers | # MultiBERTs Seed 2 Checkpoint 500k (uncased)
Seed 2 intermediate checkpoint 500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multiberts-seed-2). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
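As a sketch of the fine-tuning use case (the label count, example sentence, and missing training loop below are placeholders, not part of this card), a task head can be added on top of this checkpoint with the standard `transformers` classes:

```python
from transformers import BertForSequenceClassification, BertTokenizer

# Loads the pretrained encoder and adds a freshly initialised classification head.
model = BertForSequenceClassification.from_pretrained(
    "MultiBertGunjanPatrick/multiberts-seed-2-500k", num_labels=2
)
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-2-500k")

inputs = tokenizer("This checkpoint still needs fine-tuning.", return_tensors="pt")
logits = model(**inputs).logits  # shape: (1, num_labels); meaningful only after fine-tuning
```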
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# Full Hub id of this checkpoint, as listed in this card's metadata.
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-2-500k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-2-500k")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')  # PyTorch tensors
output = model(**encoded_input)                        # per-token hidden states + pooled output
```
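One common way (not prescribed by this card) to turn the output into a single feature vector per text, for example to feed the standard classifier mentioned above, is attention-mask-aware mean pooling over the token embeddings:

```python
import torch

# Continues from the snippet above.
with torch.no_grad():
    hidden = model(**encoded_input).last_hidden_state        # (1, seq_len, hidden_size)
mask = encoded_input["attention_mask"].unsqueeze(-1).float()  # (1, seq_len, 1)
sentence_embedding = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # (1, hidden_size)
```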
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
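Read literally, and assuming the rate decays to zero at the final step (the card does not state the end value explicitly), this schedule gives a learning rate at step \\(t\\) of \\(10^{-4} \cdot t / 10{,}000\\) for \\(t \le 10{,}000\\) and \\(10^{-4} \cdot (T - t) / (T - 10{,}000)\\) for \\(10{,}000 < t \le T\\), with \\(T = 2{,}000{,}000\\) total steps.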
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-2"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-2-500k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-2",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 2 Checkpoint 500k (uncased)
Seed 2 intermediate checkpoint 500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 2 Checkpoint 500k (uncased)\nSeed 2 intermediate checkpoint 500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 2 Checkpoint 500k (uncased)\nSeed 2 intermediate checkpoint 500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 2 Checkpoint 500k (uncased)\nSeed 2 intermediate checkpoint 500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.082577183842659,
0.003941001370549202,
-0.0023188504856079817,
0.06951359659433365,
0.08420045673847198,
0.003239816054701805,
0.11846549808979034,
0.05143839493393898,
-0.02783883363008499,
0.024362675845623016,
0.09120865166187286,
0.0282881036400795,
0.04536020755767822,
0.06794247031211853,
0.09655377268791199,
-0.2586642801761627,
0.04839461296796799,
-0.06418178230524063,
0.05187559872865677,
0.07461154460906982,
0.10058555006980896,
-0.07089241594076157,
0.061918262392282486,
0.0372035875916481,
-0.08367990702390671,
-0.017004340887069702,
-0.012713799253106117,
-0.033693745732307434,
0.09881439059972763,
0.06937703490257263,
0.06125549226999283,
0.0017913058400154114,
0.06274650990962982,
-0.09152853488922119,
0.015784364193677902,
0.045296210795640945,
-0.00004553189501166344,
0.026971809566020966,
-0.007495066151022911,
0.01647750474512577,
0.10718338936567307,
0.037967562675476074,
0.07633347809314728,
0.034596025943756104,
-0.09454547613859177,
-0.11237521469593048,
-0.07963526248931885,
0.10589176416397095,
0.05557726323604584,
0.03747199475765228,
-0.005330661311745644,
0.07165427505970001,
-0.02867869660258293,
0.0717763900756836,
0.1073167473077774,
-0.25836077332496643,
-0.00868433341383934,
0.06449364125728607,
0.04148701950907707,
0.04443775862455368,
0.012525606900453568,
0.026809504255652428,
0.00648411363363266,
0.0406387634575367,
0.028003044426441193,
-0.02341185137629509,
0.11697008460760117,
-0.0464358851313591,
-0.1500435471534729,
-0.044040746986866,
0.11845627427101135,
-0.0074783805757761,
-0.1253390908241272,
-0.09750363975763321,
-0.03283250331878662,
0.11854656785726547,
-0.0009070662781596184,
-0.019991690292954445,
-0.003178914077579975,
0.013014476746320724,
0.02571934461593628,
-0.0982179045677185,
-0.0865534096956253,
-0.032464608550071716,
-0.03882905840873718,
0.1303541362285614,
0.04583229497075081,
0.054339148104190826,
-0.03745299205183983,
0.08826451003551483,
-0.11881007254123688,
-0.03960910439491272,
-0.05419998988509178,
-0.08018404990434647,
-0.019779011607170105,
0.010624751448631287,
-0.025753237307071686,
-0.08669887483119965,
-0.060136035084724426,
0.11674186587333679,
0.041132375597953796,
0.026681818068027496,
-0.004433798603713512,
0.04326819255948067,
0.0737587958574295,
0.09759263694286346,
-0.03624353185296059,
0.05315247178077698,
0.03496197983622551,
-0.02218804880976677,
0.0588909387588501,
-0.050882697105407715,
-0.10271620750427246,
0.07387903332710266,
-0.0008998038247227669,
0.03929740563035011,
0.027929574251174927,
0.0290527306497097,
-0.014919860288500786,
-0.0719962865114212,
0.15767188370227814,
-0.07738269120454788,
-0.00622552540153265,
-0.015180808492004871,
0.010779496282339096,
0.04402157664299011,
0.03252953663468361,
-0.0048235817812383175,
-0.047087106853723526,
-0.005307578481733799,
-0.05361225828528404,
-0.02369282767176628,
-0.056692082434892654,
-0.11779715865850449,
-0.00012059416621923447,
-0.04029380530118942,
-0.03346714749932289,
-0.1417006552219391,
-0.2139405906200409,
-0.017151126638054848,
0.06413067877292633,
-0.0029418659396469593,
-0.012357023544609547,
0.021223027259111404,
0.01836223341524601,
-0.020978838205337524,
0.010622430592775345,
-0.04437638074159622,
0.001478496938943863,
-0.004627373069524765,
-0.03500620275735855,
0.05906929820775986,
-0.0421411469578743,
0.02195705845952034,
-0.07043743133544922,
0.022396203130483627,
-0.2023296058177948,
0.09057235717773438,
-0.030583951622247696,
-0.004298465326428413,
-0.03886193782091141,
-0.04111356660723686,
0.005103122442960739,
0.04365605115890503,
-0.0073814657516777515,
0.11767654120922089,
-0.13914018869400024,
-0.0503077432513237,
0.1842089295387268,
-0.1612512469291687,
0.0003962405025959015,
0.09970635920763016,
-0.04843391105532646,
0.061181340366601944,
0.13203741610050201,
0.09831605106592178,
0.08548706769943237,
-0.07121507823467255,
0.012566906400024891,
0.06413213163614273,
-0.06871726363897324,
0.05489518865942955,
0.09132653474807739,
-0.026099950075149536,
-0.13745909929275513,
0.028118949383497238,
-0.07490600645542145,
-0.01062445156276226,
-0.026855243369936943,
-0.02234436571598053,
0.005736984312534332,
-0.0392586812376976,
0.032046519219875336,
0.0032484172843396664,
0.019687946885824203,
-0.03771252557635307,
-0.0806085467338562,
0.02856045588850975,
0.0750979632139206,
-0.06834913790225983,
0.043436311185359955,
-0.07044858485460281,
0.059818074107170105,
-0.07658052444458008,
-0.0044446177780628204,
-0.1656438708305359,
-0.021780800074338913,
0.04715481773018837,
-0.054895490407943726,
0.050562016665935516,
0.09193036705255508,
0.00330826616846025,
0.12286318838596344,
-0.03821627050638199,
0.004106986336410046,
-0.011520113795995712,
-0.011417786590754986,
-0.052175626158714294,
-0.11920124292373657,
-0.08177506923675537,
-0.06823097169399261,
0.098568394780159,
-0.06883526593446732,
0.029413998126983643,
-0.06867392361164093,
-0.020422395318746567,
-0.006151830777525902,
-0.05959412455558777,
-0.006292889825999737,
0.008172121830284595,
-0.028483986854553223,
-0.04552435874938965,
0.047207437455654144,
0.050046712160110474,
-0.06241535395383835,
0.07620655745267868,
-0.10176920890808105,
-0.061755795031785965,
0.05306027829647064,
0.017879676073789597,
-0.07742179930210114,
0.09265701472759247,
-0.019093656912446022,
-0.0123122064396739,
-0.06351391226053238,
-0.043127015233039856,
0.19418573379516602,
-0.022041872143745422,
0.09991271793842316,
-0.09114399552345276,
0.002468775026500225,
0.02782367542386055,
-0.04849439114332199,
-0.017092322930693626,
0.05822722613811493,
0.05416411533951759,
-0.1868896782398224,
0.014856874942779541,
0.05235045403242111,
0.07744570821523666,
0.11042338609695435,
0.02872028574347496,
-0.02533002197742462,
-0.04886704683303833,
-0.010888513177633286,
0.00655187526717782,
0.05559169873595238,
-0.022388532757759094,
-0.005911095067858696,
0.032004158943891525,
0.057854972779750824,
0.01661314070224762,
-0.08525249361991882,
0.036376625299453735,
0.06782680004835129,
-0.01868419721722603,
-0.04383694380521774,
-0.02718302235007286,
-0.05850333347916603,
0.064142607152462,
0.05318663269281387,
0.0408465676009655,
0.0252353698015213,
-0.01497914083302021,
-0.1377779245376587,
0.18541660904884338,
-0.11320613324642181,
-0.25714704394340515,
-0.10991347581148148,
-0.04826667904853821,
-0.021913357079029083,
0.0424426794052124,
0.05778200924396515,
-0.027049873024225235,
-0.040986642241477966,
-0.11539250612258911,
0.06506431102752686,
-0.06387510150671005,
-0.02905956096947193,
-0.008984901010990143,
-0.04958294332027435,
-0.018990876153111458,
-0.12753894925117493,
-0.01135726273059845,
-0.030382446944713593,
-0.0781145691871643,
0.005882967263460159,
-0.03265991061925888,
0.027173366397619247,
0.13533788919448853,
0.033305224031209946,
-0.01947782188653946,
-0.017064400017261505,
0.19043496251106262,
0.011654147878289223,
0.06024596095085144,
0.11022038012742996,
-0.029986996203660965,
0.054983194917440414,
0.046996455639600754,
0.023813189938664436,
-0.04774510860443115,
0.012438688427209854,
-0.016733860597014427,
-0.12104633450508118,
-0.1726537048816681,
-0.0700753852725029,
-0.0028883940540254116,
0.00645646033808589,
0.020748253911733627,
0.03561879321932793,
0.015568857081234455,
0.040121037513017654,
-0.02986188419163227,
0.028835712000727654,
-0.012597557157278061,
0.07902324199676514,
0.02026733011007309,
-0.07360921800136566,
0.09141352772712708,
-0.060423702001571655,
0.016314366832375526,
0.11043935269117355,
-0.060944974422454834,
0.18787333369255066,
0.02535329759120941,
0.057658951729536057,
0.09851332008838654,
0.023544128984212875,
0.05515816807746887,
0.0899161845445633,
-0.044966451823711395,
0.00944148376584053,
-0.060622282326221466,
-0.05067562311887741,
-0.038509346544742584,
0.05252991244196892,
0.034271422773599625,
0.02028913050889969,
-0.1208290085196495,
0.021751228719949722,
-0.0025006155483424664,
0.1398213803768158,
0.05146452784538269,
-0.11403466016054153,
-0.12087928503751755,
0.037918925285339355,
-0.04485119879245758,
-0.06272722035646439,
0.02899719402194023,
0.06326917558908463,
-0.15532520413398743,
0.04307261481881142,
-0.006159521639347076,
0.06660184264183044,
-0.09350169450044632,
0.014328978955745697,
-0.04995065927505493,
-0.003407595679163933,
0.006448335945606232,
0.07065785676240921,
-0.13575392961502075,
0.10572876781225204,
0.021122977137565613,
0.049333907663822174,
-0.08066712319850922,
0.016194595023989677,
-0.012828337028622627,
0.10442102700471878,
0.11618660390377045,
0.04577803239226341,
-0.05792586877942085,
-0.02062494307756424,
-0.04908403381705284,
0.020527511835098267,
0.06150846928358078,
-0.08334165811538696,
0.05953485146164894,
0.00683704623952508,
0.007239184807986021,
-0.02363060973584652,
0.015121463686227798,
-0.13309994339942932,
-0.12121918052434921,
0.06318147480487823,
-0.07881459593772888,
-0.09659428149461746,
-0.058167148381471634,
-0.06176869198679924,
-0.04074036329984665,
0.20995798707008362,
-0.1240788996219635,
-0.0887063592672348,
-0.09666191041469574,
-0.011119786649942398,
0.04356206953525543,
-0.06731012463569641,
0.04529137909412384,
-0.03709655627608299,
0.09309110045433044,
-0.04867488890886307,
-0.11084748804569244,
0.03482211381196976,
-0.11140762269496918,
-0.11735127866268158,
-0.04417917877435684,
0.10716268420219421,
0.11337430775165558,
0.040032800287008286,
0.013502913527190685,
0.012009245343506336,
0.0027976538985967636,
-0.11610770225524902,
0.01772681064903736,
0.1369602233171463,
-0.007356749847531319,
0.07382748275995255,
-0.06153430789709091,
0.026247985661029816,
-0.01621510088443756,
-0.00016234628856182098,
0.13416945934295654,
0.18905135989189148,
-0.06608205288648605,
0.178949773311615,
0.1991269886493683,
-0.10731907188892365,
-0.19315358996391296,
-0.055721014738082886,
0.000029968097805976868,
0.04797030985355377,
0.05096960812807083,
-0.18569138646125793,
0.08641565591096878,
0.03238208591938019,
-0.03432197868824005,
0.01960166171193123,
-0.24182333052158356,
-0.11198572814464569,
0.08532862365245819,
0.05360617861151695,
0.19209334254264832,
-0.07729451358318329,
-0.04090326651930809,
-0.01600858010351658,
-0.03864741325378418,
0.04956895112991333,
-0.033920541405677795,
0.09026210755109787,
0.007071156054735184,
-0.0294308103621006,
0.004556439816951752,
-0.02956422045826912,
0.09705192595720291,
0.03826319798827171,
0.020915010944008827,
-0.06972484290599823,
-0.00815490074455738,
0.1105092465877533,
-0.038418930023908615,
0.09764440357685089,
0.04301166534423828,
0.07517339289188385,
-0.09495855867862701,
-0.05923355370759964,
-0.07272401452064514,
0.043867550790309906,
-0.041044630110263824,
-0.055703796446323395,
-0.06355554610490799,
0.05571749806404114,
0.03811635822057724,
0.01130625605583191,
-0.0030521657317876816,
-0.034206777811050415,
0.0421486496925354,
0.09330051392316818,
0.08204448968172073,
-0.029141485691070557,
-0.0718909502029419,
-0.05185958743095398,
-0.04780075326561928,
0.06928160786628723,
-0.09119502454996109,
0.022209743037819862,
0.026963181793689728,
0.012585370801389217,
0.08977744728326797,
0.033562202006578445,
-0.13830220699310303,
0.0144491633400321,
0.032734956592321396,
-0.12202466279268265,
-0.11739148199558258,
-0.01931680738925934,
0.0310526080429554,
-0.03888705000281334,
0.052463702857494354,
0.14801573753356934,
-0.037400513887405396,
-0.03335825726389885,
-0.048628486692905426,
0.03929447382688522,
-0.023084092885255814,
0.04924082010984421,
0.06251325458288193,
0.03028336726129055,
-0.07054988294839859,
0.07610030472278595,
0.038908328860998154,
-0.03206546977162361,
0.042469628155231476,
0.03807172179222107,
-0.09123584628105164,
-0.07731924951076508,
-0.055375952273607254,
0.08898525685071945,
-0.01909690536558628,
-0.044458530843257904,
0.0007463693618774414,
-0.08162131160497665,
0.06604239344596863,
0.0724078118801117,
0.047435346990823746,
0.03889460116624832,
-0.0875379890203476,
0.016537608578801155,
-0.05249438062310219,
0.035259101539850235,
-0.026524147018790245,
-0.0061304401606321335,
-0.05415306240320206,
0.06947764754295349,
0.06313217431306839,
0.09861350059509277,
-0.03483709692955017,
-0.07542786002159119,
-0.08294780552387238,
-0.01285956148058176,
-0.05940472334623337,
-0.03111441247165203,
-0.0728054866194725,
-0.007615318987518549,
0.0026917210780084133,
-0.006275884807109833,
0.022592179477214813,
0.035833071917295456,
-0.0436272919178009,
-0.019010493531823158,
-0.040690384805202484,
0.036374617367982864,
-0.06370598822832108,
0.005350271239876747,
0.019116535782814026,
-0.03623926639556885,
0.09262843430042267,
0.036959417164325714,
-0.011707407422363758,
0.04718368127942085,
-0.028535887598991394,
0.03518291190266609,
-0.020719749853014946,
0.002335056895390153,
-0.02507972903549671,
-0.11002237349748611,
-0.003950556740164757,
0.00486144982278347,
-0.0233298409730196,
0.009605730883777142,
0.0625128522515297,
-0.07320409268140793,
0.08459997177124023,
0.04359964281320572,
-0.03184619918465614,
-0.07034444808959961,
0.039697855710983276,
-0.017644600942730904,
0.030717404559254646,
0.07124830782413483,
-0.0337894968688488,
0.054879773408174515,
-0.09783816337585449,
-0.026598934084177017,
0.00420692004263401,
-0.0017400868237018585,
-0.013898765668272972,
-0.05541840195655823,
-0.004003638401627541,
0.008075208403170109,
0.1690884232521057,
-0.020877789705991745,
0.03403857350349426,
0.01402305718511343,
0.0078102946281433105,
0.04853006824851036,
-0.012505877763032913,
0.07247999310493469,
-0.004562536254525185,
-0.025772474706172943,
-0.017153942957520485,
0.03581522777676582,
0.005575915798544884,
0.002791730687022209,
0.14464983344078064,
0.04525500535964966,
0.08778568357229233,
0.07611764967441559,
0.01615327224135399,
0.018237372860312462,
-0.13450950384140015,
-0.08801277726888657,
0.003953627310693264,
0.056235406547784805,
-0.018438536673784256,
0.016689911484718323,
0.08986689150333405,
-0.08802331984043121,
0.06979997456073761,
0.05244769901037216,
-0.047792501747608185,
-0.12994615733623505,
-0.19422723352909088,
-0.027853891253471375,
-0.02878718636929989,
-0.01139580737799406,
-0.08822643756866455,
0.01887165755033493,
0.08894261717796326,
0.021176978945732117,
-0.009548674337565899,
0.08940152823925018,
-0.10597850382328033,
-0.03009003959596157,
0.04407229274511337,
-0.026822641491889954,
0.015830019488930702,
0.04798104614019394,
0.020486602559685707,
-0.004677075892686844,
0.04612887278199196,
0.04279330000281334,
0.04572524130344391,
0.025154350325465202,
0.04856255650520325,
-0.025083743035793304,
-0.07453101873397827,
-0.03334972634911537,
-0.0064443484880030155,
0.05298545956611633,
0.13221876323223114,
0.025753427296876907,
-0.06801910698413849,
0.0067277937196195126,
0.11141489446163177,
-0.03332740068435669,
-0.05123133584856987,
-0.10827066749334335,
0.23971188068389893,
0.021589715033769608,
0.005535352975130081,
-0.002124202437698841,
-0.04722969979047775,
0.00154140405356884,
0.20598697662353516,
0.22263923287391663,
0.002771531231701374,
-0.0091110123321414,
0.00796306412667036,
-0.010991007089614868,
0.03834002837538719,
0.14589142799377441,
0.001258382573723793,
0.25218063592910767,
-0.04809270054101944,
0.04201353341341019,
-0.04063619673252106,
-0.039495088160037994,
-0.099494069814682,
0.07354530692100525,
-0.013835683465003967,
0.0085900342091918,
-0.03477447107434273,
0.07081823796033859,
-0.037849828600883484,
-0.1706162542104721,
0.0022642193362116814,
-0.003506649285554886,
-0.06264516711235046,
0.014250917360186577,
0.00006737560033798218,
0.020151391625404358,
0.08436015993356705,
-0.019152086228132248,
-0.006089773960411549,
0.13326337933540344,
0.01905544102191925,
-0.10004071891307831,
-0.05953413248062134,
0.11381800472736359,
0.012192637659609318,
0.14345747232437134,
0.013512532226741314,
0.07960283011198044,
0.08466272056102753,
0.021056097000837326,
-0.09446889162063599,
0.04023367911577225,
-0.019933564588427544,
-0.027374476194381714,
0.006294914986938238,
0.11195024847984314,
-0.010707324370741844,
0.06172584369778633,
0.028444310650229454,
-0.09159661084413528,
0.05994730442762375,
0.008685093373060226,
-0.03547358512878418,
-0.08105096220970154,
0.08347055315971375,
-0.09065969288349152,
0.15432964265346527,
0.12036281824111938,
-0.01518882717937231,
-0.04483841359615326,
-0.02761586382985115,
0.019377218559384346,
-0.0018108254298567772,
0.06140933930873871,
-0.024348191916942596,
-0.13251647353172302,
0.017054811120033264,
-0.08545251935720444,
0.028687309473752975,
-0.24696290493011475,
-0.09090724587440491,
0.026188144460320473,
-0.017642907798290253,
-0.018139813095331192,
0.051998578011989594,
0.04699234664440155,
0.02458249405026436,
-0.03617594391107559,
0.006080471910536289,
-0.037231020629405975,
0.058885358273983,
-0.10839847475290298,
-0.09311690181493759
] |
null | null | transformers | # MultiBERTs Seed 2 Checkpoint 600k (uncased)
Seed 2 intermediate checkpoint 600k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multberts-seed-2). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT2.
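As an illustration of the fine-tuning setups mentioned above (a minimal sketch, not part of the original MultiBERTs release: the checkpoint identifier mirrors the usage snippet below and the 2-label head is an arbitrary choice), the encoder can be loaded with a freshly initialized task head:

```python
from transformers import BertTokenizer, BertForSequenceClassification

# Only the encoder weights come from this checkpoint; the classification head is
# newly initialized, so the logits are meaningless until the model is fine-tuned.
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-600k')
model = BertForSequenceClassification.from_pretrained('multiberts-seed-2-600k', num_labels=2)

inputs = tokenizer("Replace me by any text you'd like.", return_tensors='pt')
print(model(**inputs).logits.shape)  # torch.Size([1, 2])
```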
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
# Load the tokenizer and pretrained encoder weights for this checkpoint
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-600k')
model = BertModel.from_pretrained("multiberts-seed-2-600k")
text = "Replace me by any text you'd like."
# Tokenize to PyTorch tensors and run a forward pass
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
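With a recent `transformers` version the returned `output` is a model-output object; the token-level features used for downstream tasks are exposed as `last_hidden_state` (small follow-up snippet, not part of the original card):

```python
# Token-level features, shape (batch_size, sequence_length, hidden_size=768)
features = output.last_hidden_state
print(features.shape)
```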
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
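The percentages above can be made concrete with a short illustrative sketch; it is a simplified per-token re-implementation for exposition only (special tokens and whole-word handling are ignored), not the original data pipeline:

```python
import random

def mask_tokens(token_ids, mask_token_id, vocab_size, mlm_probability=0.15):
    """Illustrative 80/10/10 masking over a list of token ids."""
    input_ids = list(token_ids)
    labels = [-100] * len(token_ids)          # -100: position ignored by the MLM loss
    for i, original in enumerate(token_ids):
        if random.random() >= mlm_probability:
            continue                           # ~85% of tokens are not selected for prediction
        labels[i] = original                   # the model must predict the original token here
        roll = random.random()
        if roll < 0.8:
            input_ids[i] = mask_token_id       # 80% of selected tokens -> [MASK]
        elif roll < 0.9:
            input_ids[i] = random.randrange(vocab_size)  # 10% -> random token
        # remaining 10%: token left unchanged
    return input_ids, labels
```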
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
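A rough PyTorch equivalent of this optimizer and schedule is sketched below; it uses `torch.optim.AdamW` and the `transformers` linear-warmup scheduler as stand-ins for the original TPU training setup, so it is an approximation rather than the exact training code:

```python
import torch
from transformers import BertModel, get_linear_schedule_with_warmup

model = BertModel.from_pretrained('multiberts-seed-2-600k')

# Values reported above: lr 1e-4, betas (0.9, 0.999), weight decay 0.01,
# 10,000 warmup steps, linear decay over the 2,000,000 total steps.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4,
                              betas=(0.9, 0.999), weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(optimizer,
                                            num_warmup_steps=10_000,
                                            num_training_steps=2_000_000)

# Per training batch one would then call:
#   loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
```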
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-2"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-2-600k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-2",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 2 Checkpoint 600k (uncased)
Seed 2 intermediate checkpoint 600k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 2 Checkpoint 600k (uncased)\nSeed 2 intermediate checkpoint 600k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 2 Checkpoint 600k (uncased)\nSeed 2 intermediate checkpoint 600k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 2 Checkpoint 600k (uncased)\nSeed 2 intermediate checkpoint 600k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08335385471582413,
0.0009795757941901684,
-0.002255362458527088,
0.07032366096973419,
0.08527536690235138,
0.002924846950918436,
0.11540736258029938,
0.05212690308690071,
-0.02391587570309639,
0.023731941357254982,
0.09061558544635773,
0.02915317565202713,
0.043957069516181946,
0.06223343312740326,
0.09619279205799103,
-0.25845867395401,
0.04783552512526512,
-0.06496769189834595,
0.05071497708559036,
0.07505804300308228,
0.09965410083532333,
-0.06971214711666107,
0.06312646716833115,
0.03614087402820587,
-0.08462592959403992,
-0.016139384359121323,
-0.012013721279799938,
-0.03376823291182518,
0.09991127997636795,
0.06861202418804169,
0.062217697501182556,
0.003358246758580208,
0.06273117661476135,
-0.08698388934135437,
0.015912560746073723,
0.044235192239284515,
… (remaining values of the 768-dimensional embedding vector elided) …
] |
null | null | transformers | # MultiBERTs Seed 2 Checkpoint 60k (uncased)
Seed 2 intermediate checkpoint 60k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multberts-seed-2). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
Each MultiBERTs model is a transformer model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-60k')
model = BertModel.from_pretrained("multiberts-seed-2-60k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
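For example, a quick qualitative probe in that spirit could look like the minimal sketch below. The short model identifier mirrors the `from_pretrained` calls above and may need to be replaced by the full hub path; it also assumes the MLM head weights are included in this checkpoint.
```python
from transformers import pipeline

# Assumed short identifier, matching the usage snippet above;
# substitute the full hub path if loading fails.
unmasker = pipeline('fill-mask', model='multiberts-seed-2-60k')
print(unmasker("The man worked as a [MASK]."))
print(unmasker("The woman worked as a [MASK]."))
```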
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
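As an illustration only (not the original preprocessing code), the 15% selection with the 80/10/10 replacement rule above can be sketched as:
```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15):
    """Toy sketch of the MLM corruption described above (not the original pipeline)."""
    output, labels = [], []
    for tok in tokens:
        if random.random() < mask_prob:        # 15% of tokens are selected for prediction
            labels.append(tok)
            r = random.random()
            if r < 0.8:                        # 80%: replace with [MASK]
                output.append("[MASK]")
            elif r < 0.9:                      # 10%: replace with a random vocabulary token
                output.append(random.choice(vocab))
            else:                              # 10%: keep the original token
                output.append(tok)
        else:
            output.append(tok)
            labels.append(None)                # not predicted
    return output, labels
```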
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
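A rough PyTorch sketch of that optimization setup, using standard `transformers` helpers rather than the original training code (the short model identifier is assumed, as in the usage snippet above):
```python
import torch
from transformers import BertModel, get_linear_schedule_with_warmup

model = BertModel.from_pretrained("multiberts-seed-2-60k")  # assumed identifier
# Adam with decoupled weight decay (AdamW) approximates the setup described above.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4,
                              betas=(0.9, 0.999), weight_decay=0.01)
# Linear warmup for 10,000 steps, then linear decay over the two million total steps.
scheduler = get_linear_schedule_with_warmup(optimizer,
                                            num_warmup_steps=10_000,
                                            num_training_steps=2_000_000)
```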
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-2"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-2-60k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-2",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 2 Checkpoint 60k (uncased)
Seed 2 intermediate checkpoint 60k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
Each MultiBERTs model is a transformer model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 2 Checkpoint 60k (uncased)\nSeed 2 intermediate checkpoint 60k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 2 Checkpoint 60k (uncased)\nSeed 2 intermediate checkpoint 60k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 2 Checkpoint 60k (uncased)\nSeed 2 intermediate checkpoint 60k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
… (768-dimensional embedding vector; float values elided) …
] |
null | null | transformers | # MultiBERTs Seed 2 Checkpoint 700k (uncased)
Seed 2 intermediate checkpoint 700k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multberts-seed-2). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
Each MultiBERTs model is a transformer model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
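As a rough sketch of such fine-tuning, the example below (hypothetical, with an arbitrary `num_labels`) loads this intermediate checkpoint as the encoder of a sequence classifier; the classification head on top is freshly initialized and would have to be trained on your task:
```python
from transformers import BertForSequenceClassification

# Load the intermediate checkpoint as the encoder of a binary classifier.
# The classification head is randomly initialized and must be fine-tuned.
classifier = BertForSequenceClassification.from_pretrained(
    "multiberts-seed-2-700k", num_labels=2
)
```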
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-700k')
model = BertModel.from_pretrained("multiberts-seed-2-700k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
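The `output` object above follows the standard `transformers` convention. If you only need fixed-size features (e.g. to feed a downstream classifier), a minimal continuation of the snippet, assuming the BERT-base hidden size of 768, could look like:
```python
# Per-token contextual features: shape (batch_size, sequence_length, 768)
token_features = output.last_hidden_state
# A common fixed-size sentence feature: the hidden state of the [CLS] token
sentence_feature = output.last_hidden_state[:, 0, :]  # shape (batch_size, 768)
```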
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a short sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
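A minimal Python sketch of this 80%/10%/10% corruption scheme is shown below; `mask_tokens` is a hypothetical helper for illustration only, not the original TensorFlow preprocessing code:
```python
import random

def mask_tokens(token_ids, vocab_size, mask_token_id, mask_prob=0.15):
    """Hypothetical helper sketching the corruption scheme described above."""
    corrupted = list(token_ids)
    labels = [-100] * len(token_ids)  # -100 marks positions ignored by the MLM loss
    for i, tok in enumerate(token_ids):
        if random.random() < mask_prob:      # 15% of the tokens are selected
            labels[i] = tok                  # the model has to predict the original token
            roll = random.random()
            if roll < 0.8:                   # 80%: replace with [MASK]
                corrupted[i] = mask_token_id
            elif roll < 0.9:                 # 10%: replace with a random token
                corrupted[i] = random.randrange(vocab_size)
            # remaining 10%: keep the token unchanged
    return corrupted, labels
```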
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
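For reference only, a rough PyTorch approximation of that optimizer and schedule is sketched below (the actual pretraining used TensorFlow on TPUs, so this is illustrative rather than the original training code; Adam with decoupled weight decay is approximated here with `AdamW`, and `model` is the object loaded in the usage example above):
```python
import torch
from transformers import get_linear_schedule_with_warmup

optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01
)
# 10,000 warmup steps, then linear decay over the two million total steps.
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=2_000_000
)
```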
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-2"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-2-700k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-2",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 2 Checkpoint 700k (uncased)
Seed 2 intermediate checkpoint 700k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
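For example (mirroring the snippet given in the full card for this checkpoint):
```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-700k')
model = BertModel.from_pretrained("multiberts-seed-2-700k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```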
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitations and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 2 Checkpoint 700k (uncased)\nSeed 2 intermediate checkpoint 700k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 2 Checkpoint 700k (uncased)\nSeed 2 intermediate checkpoint 700k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 2 Checkpoint 700k (uncased)\nSeed 2 intermediate checkpoint 700k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.0836632251739502,
-0.002018379047513008,
-0.0022448713425546885,
0.06843271851539612,
0.08165377378463745,
0.0023701153695583344,
0.11464563012123108,
0.0522068627178669,
-0.02768738381564617,
0.0229189433157444,
0.09237946569919586,
0.02727143093943596,
0.04422646015882492,
0.06742238998413086,
0.09711548686027527,
-0.2609333395957947,
0.04880649968981743,
-0.06395423412322998,
0.054588451981544495,
0.07538354396820068,
0.10156288743019104,
-0.07076717913150787,
0.06094227731227875,
0.03770720586180687,
-0.08159749954938889,
-0.018322335556149483,
-0.013110197149217129,
-0.03509421646595001,
0.09987293183803558,
0.06991403549909592,
0.05955517292022705,
0.0033111833035945892,
0.06171261519193649,
-0.08906501531600952,
0.015568027272820473,
0.04511697590351105,
-0.0005057770758867264,
0.026121553033590317,
-0.00795741192996502,
0.018857648596167564,
0.10867010056972504,
0.03964404761791229,
0.07641829550266266,
0.03492656722664833,
-0.09615485370159149,
-0.11864021420478821,
-0.08030347526073456,
0.10759060084819794,
0.056297145783901215,
0.03802269697189331,
-0.005298626609146595,
0.07374855875968933,
-0.029096005484461784,
0.07331544160842896,
0.10133907198905945,
-0.2562502920627594,
-0.009021186269819736,
0.06788591295480728,
0.04330959916114807,
0.04609549045562744,
0.012615504674613476,
0.02729179337620735,
0.005831766873598099,
0.03897329419851303,
0.028050612658262253,
-0.022607173770666122,
0.12101331353187561,
-0.04571452736854553,
-0.1499365270137787,
-0.04484305530786514,
0.11694496124982834,
-0.004557190462946892,
-0.12603625655174255,
-0.09761501848697662,
-0.03371993452310562,
0.11810743808746338,
-0.002141417935490608,
-0.020648978650569916,
-0.00468041468411684,
0.013945557177066803,
0.022786147892475128,
-0.09904169291257858,
-0.08630799502134323,
-0.03203624486923218,
-0.037572987377643585,
0.12691333889961243,
0.044979359954595566,
0.054273705929517746,
-0.035058751702308655,
0.08764024078845978,
-0.12332804501056671,
-0.04035159945487976,
-0.054441679269075394,
-0.08166258037090302,
-0.020350713282823563,
0.009562152437865734,
-0.025557590648531914,
-0.08405650407075882,
-0.06104360893368721,
0.11639198660850525,
0.040918800979852676,
0.029064077883958817,
-0.005764937028288841,
0.04361436516046524,
0.07203380763530731,
0.09828352928161621,
-0.035960253328084946,
0.0493302121758461,
0.03520459309220314,
-0.022122997790575027,
0.06049632281064987,
-0.05122207850217819,
-0.10169897228479385,
0.07242237031459808,
0.0015695961192250252,
0.037237416952848434,
0.028829406946897507,
0.02957339398562908,
-0.012229198589920998,
-0.07306602597236633,
0.1674903929233551,
-0.07556195557117462,
-0.005476027727127075,
-0.01675337553024292,
0.010201551020145416,
0.04207916557788849,
0.03175380080938339,
-0.00363191612996161,
-0.04556027799844742,
-0.007000609301030636,
-0.054634835571050644,
-0.024290669709444046,
-0.05564134940505028,
-0.1158396452665329,
0.000011197291314601898,
-0.03537759929895401,
-0.03343327343463898,
-0.14051741361618042,
-0.21900245547294617,
-0.016036398708820343,
0.06437227129936218,
-0.004389343783259392,
-0.011762190610170364,
0.025259122252464294,
0.01913338713347912,
-0.020336950197815895,
0.010238797403872013,
-0.046300701797008514,
0.00115866307169199,
-0.004084208980202675,
-0.033169038593769073,
0.05852704495191574,
-0.04381174594163895,
0.02060413360595703,
-0.07010544091463089,
0.024514291435480118,
-0.2108062356710434,
0.09045565128326416,
-0.031644780188798904,
-0.002727310173213482,
-0.039257511496543884,
-0.04323373734951019,
0.00451146624982357,
0.042647190392017365,
-0.00704579520970583,
0.11497896909713745,
-0.1376875936985016,
-0.048357583582401276,
0.1837238073348999,
-0.16158364713191986,
-0.0008511580526828766,
0.10278232395648956,
-0.04812848940491676,
0.059676337987184525,
0.13084183633327484,
0.10158069431781769,
0.08527711778879166,
-0.07194790989160538,
0.014811509288847446,
0.06328633427619934,
-0.06968536972999573,
0.05490626394748688,
0.09128745645284653,
-0.02576425112783909,
-0.13812318444252014,
0.02786010131239891,
-0.07155252993106842,
-0.010689632967114449,
-0.02686815895140171,
-0.023405693471431732,
0.005287524312734604,
-0.03845028951764107,
0.02973046526312828,
0.004821473732590675,
0.019645679742097855,
-0.038936976343393326,
-0.08098091185092926,
0.02674630470573902,
0.0762641429901123,
-0.06860436499118805,
0.04464171081781387,
-0.07079125195741653,
0.05941476300358772,
-0.07487933337688446,
-0.004916267469525337,
-0.16443972289562225,
-0.021278033033013344,
0.04720184952020645,
-0.050481878221035004,
0.049593981355428696,
0.08930610865354538,
0.003347438992932439,
0.12381108105182648,
-0.04117252677679062,
0.00311611732468009,
-0.009961284697055817,
-0.01052017416805029,
-0.05253374204039574,
-0.11908263713121414,
-0.08248339593410492,
-0.06732480227947235,
0.10139195621013641,
-0.0742625966668129,
0.028804635629057884,
-0.07039664685726166,
-0.019603677093982697,
-0.007313333451747894,
-0.059761665761470795,
-0.007704665884375572,
0.0073919519782066345,
-0.028640497475862503,
-0.04571663960814476,
0.04754089564085007,
0.05038485676050186,
-0.06344194710254669,
0.07581847161054611,
-0.1034134104847908,
-0.06039820984005928,
0.05309227854013443,
0.014637911692261696,
-0.07886405289173126,
0.08894506096839905,
-0.019507450982928276,
-0.014572763815522194,
-0.061018019914627075,
-0.04630381241440773,
0.1887078583240509,
-0.02416088804602623,
0.10106004774570465,
-0.0924006998538971,
0.0034803806338459253,
0.02846614457666874,
-0.04853753745555878,
-0.017042262479662895,
0.05650651082396507,
0.049607597291469574,
-0.19505344331264496,
0.013778943568468094,
0.054026395082473755,
0.07699953019618988,
0.11042658984661102,
0.02870355173945427,
-0.02403916046023369,
-0.04858308285474777,
-0.01085988525301218,
0.006629792973399162,
0.0562850683927536,
-0.02405460923910141,
-0.004012383054941893,
0.031925298273563385,
0.059830788522958755,
0.017156008630990982,
-0.08406351506710052,
0.03612130507826805,
0.06750752031803131,
-0.01834573782980442,
-0.042280346155166626,
-0.025157293304800987,
-0.05964468792080879,
0.06322363764047623,
0.05305662751197815,
0.038212068378925323,
0.02682935819029808,
-0.014704886823892593,
-0.13664060831069946,
0.18616428971290588,
-0.11276651918888092,
-0.2552416920661926,
-0.11065078526735306,
-0.0525965616106987,
-0.023037200793623924,
0.042616136372089386,
0.05763264000415802,
-0.025593945756554604,
-0.04061821475625038,
-0.11641488969326019,
0.06167779862880707,
-0.06674887984991074,
-0.029420429840683937,
-0.009203353896737099,
-0.04989638179540634,
-0.01639562100172043,
-0.1273992359638214,
-0.011814085766673088,
-0.029994167387485504,
-0.07673908025026321,
0.006171361543238163,
-0.03495524451136589,
0.026544824242591858,
0.13530422747135162,
0.033411622047424316,
-0.019486717879772186,
-0.017077406868338585,
0.1937810480594635,
0.010423306375741959,
0.06225031614303589,
0.11206719279289246,
-0.02988455630838871,
0.0562574565410614,
0.04657354950904846,
0.0237836092710495,
-0.047892916947603226,
0.012734120711684227,
-0.018205268308520317,
-0.12251818925142288,
-0.1710137128829956,
-0.07208938151597977,
-0.003128593787550926,
0.00889724399894476,
0.019819816574454308,
0.03613538295030594,
0.024018101394176483,
0.03950371593236923,
-0.030527031049132347,
0.0285198837518692,
-0.011639036238193512,
0.0809486135840416,
0.025445949286222458,
-0.07346197962760925,
0.09227852523326874,
-0.05961665138602257,
0.017342329025268555,
0.1096520945429802,
-0.06010181084275246,
0.1880638599395752,
0.025933848693966866,
0.05789732187986374,
0.09789334237575531,
0.02532730996608734,
0.055690813809633255,
0.08756107091903687,
-0.04585012048482895,
0.007056519389152527,
-0.06124413385987282,
-0.05090569704771042,
-0.037239767611026764,
0.05217783898115158,
0.03382745012640953,
0.018168676644563675,
-0.11830973625183105,
0.020756632089614868,
-0.0029028537683188915,
0.13393929600715637,
0.049868736416101456,
-0.11607258766889572,
-0.12179398536682129,
0.03685310110449791,
-0.04596158117055893,
-0.06329599767923355,
0.028038544580340385,
0.06025588512420654,
-0.1546107828617096,
0.044495921581983566,
-0.004816062748432159,
0.06744739413261414,
-0.09092268347740173,
0.01476936787366867,
-0.05082128196954727,
-0.004958903416991234,
0.006016837432980537,
0.07143190503120422,
-0.12784701585769653,
0.10834823548793793,
0.02103528007864952,
0.04881132394075394,
-0.08051443099975586,
0.014733023941516876,
-0.013230917975306511,
0.1071721762418747,
0.11563824117183685,
0.04563750699162483,
-0.05556777864694595,
-0.020088311284780502,
-0.04892103001475334,
0.017684150487184525,
0.0607994943857193,
-0.07845145463943481,
0.05832762271165848,
0.0075088185258209705,
0.0071803610771894455,
-0.023065941408276558,
0.021837517619132996,
-0.1338699758052826,
-0.11942434310913086,
0.06185545399785042,
-0.0779489055275917,
-0.09669322520494461,
-0.05811189115047455,
-0.06095854938030243,
-0.04389757663011551,
0.21318098902702332,
-0.11536039412021637,
-0.087449811398983,
-0.09701026976108551,
-0.009533002972602844,
0.046914853155612946,
-0.06678702682256699,
0.0458659902215004,
-0.03670038282871246,
0.09135626256465912,
-0.047348059713840485,
-0.10830138623714447,
0.03604495897889137,
-0.1121782436966896,
-0.1183909922838211,
-0.045096706598997116,
0.10793447494506836,
0.11284435540437698,
0.03964582458138466,
0.015157921239733696,
0.011773934587836266,
0.001986166462302208,
-0.1168719232082367,
0.01705234684050083,
0.13257116079330444,
-0.005786648020148277,
0.07001481205224991,
-0.05622539669275284,
0.031579967588186264,
-0.015173511579632759,
-0.00225134938955307,
0.13294675946235657,
0.19177506864070892,
-0.06587564945220947,
0.17823034524917603,
0.2015818953514099,
-0.1055249571800232,
-0.19109612703323364,
-0.05652010440826416,
0.00046435464173555374,
0.04605184122920036,
0.05046530067920685,
-0.181741863489151,
0.08700122684240341,
0.03598553314805031,
-0.03581583499908447,
0.015496417880058289,
-0.2368628978729248,
-0.11311735212802887,
0.08535466343164444,
0.052500929683446884,
0.1923099160194397,
-0.0777323916554451,
-0.040791235864162445,
-0.015124524012207985,
-0.03839501738548279,
0.048136863857507706,
-0.03479442000389099,
0.08984173834323883,
0.006954839453101158,
-0.03364533185958862,
0.004141335375607014,
-0.030700840055942535,
0.0994981974363327,
0.03828725218772888,
0.020410791039466858,
-0.07071900367736816,
-0.004343248903751373,
0.11424723267555237,
-0.038283821195364,
0.09623869508504868,
0.046733491122722626,
0.07655235379934311,
-0.097601979970932,
-0.05961815267801285,
-0.07387827336788177,
0.04218735545873642,
-0.04151550680398941,
-0.0565183125436306,
-0.06474059820175171,
0.057546354830265045,
0.03804272785782814,
0.011007366701960564,
0.0014864671975374222,
-0.03451661020517349,
0.04275749623775482,
0.09887371957302094,
0.08023139834403992,
-0.03152553364634514,
-0.0700504332780838,
-0.051009129732847214,
-0.04881143197417259,
0.0682918131351471,
-0.09056923538446426,
0.02257477305829525,
0.025780003517866135,
0.011741384863853455,
0.08873068541288376,
0.032988592982292175,
-0.13839489221572876,
0.014298110269010067,
0.03345430642366409,
-0.12152061611413956,
-0.11735977232456207,
-0.01983318105340004,
0.033202625811100006,
-0.0370471365749836,
0.053005799651145935,
0.14649194478988647,
-0.03674698993563652,
-0.03168759495019913,
-0.04778075963258743,
0.038084182888269424,
-0.022836636751890182,
0.04931645840406418,
0.0631590262055397,
0.030969077721238136,
-0.07052338123321533,
0.07375308126211166,
0.04117748513817787,
-0.036384373903274536,
0.040646180510520935,
0.040282249450683594,
-0.09321540594100952,
-0.0770038515329361,
-0.06152310594916344,
0.08518325537443161,
-0.023112742230296135,
-0.045400265604257584,
0.003234047442674637,
-0.07940127700567245,
0.06618085503578186,
0.07135753333568573,
0.04861924797296524,
0.03750338777899742,
-0.08692644536495209,
0.016237923875451088,
-0.05241517722606659,
0.03449110686779022,
-0.02727166749536991,
-0.005103440955281258,
-0.054872624576091766,
0.06503999978303909,
0.06280368566513062,
0.09905318915843964,
-0.03515292704105377,
-0.07710627466440201,
-0.08231595158576965,
-0.01280185580253601,
-0.062264710664749146,
-0.03209090232849121,
-0.07484598457813263,
-0.007864980958402157,
0.0019647006411105394,
-0.007100462913513184,
0.023077506572008133,
0.0355103425681591,
-0.04324204474687576,
-0.0189627967774868,
-0.039455294609069824,
0.0351608581840992,
-0.06165195629000664,
0.005660733208060265,
0.01861380599439144,
-0.03518339619040489,
0.09353688359260559,
0.03730665147304535,
-0.010356223210692406,
0.044982362538576126,
-0.02241368219256401,
0.036389850080013275,
-0.02082057297229767,
0.0023835133761167526,
-0.023875422775745392,
-0.10897363722324371,
-0.005021150689572096,
0.005249025300145149,
-0.024976637214422226,
0.009842295199632645,
0.061605922877788544,
-0.07423615455627441,
0.08477651327848434,
0.04267854616045952,
-0.034692756831645966,
-0.07108057290315628,
0.03955164924263954,
-0.01926778070628643,
0.031205955892801285,
0.06917104870080948,
-0.033509351313114166,
0.053210802376270294,
-0.10009942203760147,
-0.026938971132040024,
0.0045758383348584175,
-0.0016866736114025116,
-0.008079688996076584,
-0.053925082087516785,
-0.003847910091280937,
0.008616265840828419,
0.1702406406402588,
-0.024120688438415527,
0.035635143518447876,
0.014049067161977291,
0.00384359247982502,
0.04921455681324005,
-0.014946917071938515,
0.0776192843914032,
-0.004337625578045845,
-0.02545309253036976,
-0.013949941843748093,
0.036070216447114944,
0.005907734856009483,
0.0067096613347530365,
0.14044924080371857,
0.04556494206190109,
0.08718297630548477,
0.07451383024454117,
0.014690004289150238,
0.018145041540265083,
-0.13770408928394318,
-0.09024214744567871,
0.004840395413339138,
0.055546779185533524,
-0.018528485670685768,
0.011950507760047913,
0.0937979519367218,
-0.08893977850675583,
0.06911956518888474,
0.05098312348127365,
-0.04959715157747269,
-0.1283799409866333,
-0.19505971670150757,
-0.027526691555976868,
-0.032688722014427185,
-0.010792002081871033,
-0.08891661465167999,
0.0194256491959095,
0.08583895862102509,
0.022808827459812164,
-0.009148645214736462,
0.09117165207862854,
-0.10510042309761047,
-0.029809562489390373,
0.04122508317232132,
-0.026100656017661095,
0.016912566497921944,
0.0507606640458107,
0.02085760422050953,
-0.005780637264251709,
0.04438617080450058,
0.041431132704019547,
0.0454571396112442,
0.026532066985964775,
0.04894275963306427,
-0.02309645526111126,
-0.0741848573088646,
-0.0341138131916523,
-0.005212854593992233,
0.05537677928805351,
0.13314655423164368,
0.02451305091381073,
-0.06684894114732742,
0.005688467063009739,
0.11266276240348816,
-0.032458819448947906,
-0.05275426432490349,
-0.10685896873474121,
0.24552494287490845,
0.022502591833472252,
0.004264772869646549,
-0.002040073275566101,
-0.04718153923749924,
0.0029311981052160263,
0.20716872811317444,
0.2229863405227661,
0.0033389111049473286,
-0.008862730115652084,
0.00954524241387844,
-0.01215936429798603,
0.03792695701122284,
0.14622145891189575,
0.0014589224010705948,
0.2530951499938965,
-0.04765193164348602,
0.040984995663166046,
-0.042157553136348724,
-0.039857689291238785,
-0.10077644884586334,
0.07504811137914658,
-0.012023165822029114,
0.008508396334946156,
-0.035027243196964264,
0.07254943996667862,
-0.0384490005671978,
-0.1703995317220688,
0.0017151795327663422,
-0.00306569691747427,
-0.06210797652602196,
0.01483087707310915,
-0.000005992129445075989,
0.019811267033219337,
0.08314041793346405,
-0.01917002722620964,
-0.004791943356394768,
0.1265234351158142,
0.019209841266274452,
-0.09795115143060684,
-0.05795639008283615,
0.11469782888889313,
0.01708882488310337,
0.14321079850196838,
0.012307070195674896,
0.08183252066373825,
0.08674909174442291,
0.020681502297520638,
-0.09564167261123657,
0.0420650839805603,
-0.018857134506106377,
-0.026727549731731415,
0.005951319821178913,
0.11354756355285645,
-0.009271982125937939,
0.06086317449808121,
0.028504569083452225,
-0.09460648894309998,
0.05848374962806702,
0.010424114763736725,
-0.033436812460422516,
-0.08027718216180801,
0.08285250514745712,
-0.08985510468482971,
0.15648362040519714,
0.1190449669957161,
-0.016764666885137558,
-0.04522155970335007,
-0.027414673939347267,
0.019386958330869675,
-0.00044736452400684357,
0.05853690952062607,
-0.025335654616355896,
-0.13514971733093262,
0.01756381429731846,
-0.08488297462463379,
0.027193583548069,
-0.24328768253326416,
-0.09242217242717743,
0.02762521244585514,
-0.018346775323152542,
-0.01737959310412407,
0.052904851734638214,
0.04498669505119324,
0.022562427446246147,
-0.035104572772979736,
0.01247478649020195,
-0.039439477026462555,
0.05811503902077675,
-0.11008410155773163,
-0.09337049722671509
] |
null | null | transformers | # MultiBERTs Seed 2 Checkpoint 800k (uncased)
Seed 2 intermediate checkpoint 800k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multberts-seed-2). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-800k')
model = BertModel.from_pretrained("multiberts-seed-2-800k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-2"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-2-800k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-2",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 2 Checkpoint 800k (uncased)
Seed 2 intermediate checkpoint 800k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
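For example (mirroring the snippet given in the full card for this checkpoint):
```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-800k')
model = BertModel.from_pretrained("multiberts-seed-2-800k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```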
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitations and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 2 Checkpoint 800k (uncased)\nSeed 2 intermediate checkpoint 800k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 2 Checkpoint 800k (uncased)\nSeed 2 intermediate checkpoint 800k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 2 Checkpoint 800k (uncased)\nSeed 2 intermediate checkpoint 800k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08335808664560318,
0.004946457222104073,
-0.002263314090669155,
0.06973351538181305,
0.08398737013339996,
0.0033507458865642548,
0.11711710691452026,
0.05148196965456009,
-0.02451116219162941,
0.023590682074427605,
0.09021998941898346,
0.024211056530475616,
0.04377918690443039,
0.06330987811088562,
0.09767578542232513,
-0.2590486705303192,
0.047207061201334,
-0.06347288936376572,
0.057366177439689636,
0.07387121766805649,
0.10077783465385437,
-0.07005883753299713,
0.06154804304242134,
0.03794824704527855,
-0.08294050395488739,
-0.017357194796204567,
-0.014034276828169823,
-0.03501586616039276,
0.10056997835636139,
0.07030901312828064,
0.06187652051448822,
0.0019964128732681274,
0.06140992045402527,
-0.09028007090091705,
0.01504475623369217,
0.04395658150315285,
-0.000009097624570131302,
0.02583363652229309,
-0.0062199365347623825,
0.016678543761372566,
0.1109105795621872,
0.04218307510018349,
0.07760851830244064,
0.033921435475349426,
-0.09430387616157532,
-0.11278195679187775,
-0.07975444942712784,
0.10455459356307983,
0.055951591581106186,
0.03964569419622421,
-0.004723233170807362,
0.07326964288949966,
-0.02794666774570942,
0.07362070679664612,
0.10222907364368439,
-0.25569948554039,
-0.007346130441874266,
0.06288895010948181,
0.04035118222236633,
0.045760378241539,
0.01414688304066658,
0.025002796202898026,
0.00540914386510849,
0.041655369102954865,
0.024013381451368332,
-0.022742368280887604,
0.11401347070932388,
-0.046295467764139175,
-0.14874133467674255,
-0.044594019651412964,
0.11454328894615173,
-0.007013833150267601,
-0.12546682357788086,
-0.09697722643613815,
-0.0329611599445343,
0.11502004414796829,
-0.0010494906455278397,
-0.018512370064854622,
-0.00309557281434536,
0.013599642552435398,
0.02449031174182892,
-0.0962824821472168,
-0.08641047775745392,
-0.030279692262411118,
-0.03924449905753136,
0.1255139261484146,
0.045743994414806366,
0.054588429629802704,
-0.03506116569042206,
0.08791755139827728,
-0.12222230434417725,
-0.04126609116792679,
-0.0530778132379055,
-0.0790967047214508,
-0.018613243475556374,
0.01113575417548418,
-0.026922177523374557,
-0.08489719033241272,
-0.05989879369735718,
0.11679470539093018,
0.03697576746344566,
0.028721127659082413,
-0.006219903938472271,
0.04393058270215988,
0.0725121721625328,
0.0986723005771637,
-0.03624432533979416,
0.050320208072662354,
0.03434482216835022,
-0.022665373980998993,
0.05946587771177292,
-0.05186839401721954,
-0.10306297987699509,
0.07486079633235931,
0.002816283144056797,
0.0365491658449173,
0.0284431092441082,
0.02777908369898796,
-0.012817430309951305,
-0.07247256487607956,
0.16281284391880035,
-0.07780994474887848,
-0.006193812936544418,
-0.01731610670685768,
0.011129330843687057,
0.04434501379728317,
0.03531988710165024,
-0.0020245618652552366,
-0.046778708696365356,
-0.006662732921540737,
-0.05326211452484131,
-0.024418236687779427,
-0.05579839646816254,
-0.11489434540271759,
0.0010947808623313904,
-0.03769104927778244,
-0.031819287687540054,
-0.14195868372917175,
-0.21472889184951782,
-0.016974836587905884,
0.06555093079805374,
-0.0030481861904263496,
-0.011537741869688034,
0.024054884910583496,
0.019495656713843346,
-0.02057764120399952,
0.01058367919176817,
-0.04777161777019501,
0.001032455824315548,
-0.0048711951822042465,
-0.0342363566160202,
0.056813888251781464,
-0.04486929997801781,
0.02263418398797512,
-0.06953531503677368,
0.02288028784096241,
-0.20899486541748047,
0.08985286206007004,
-0.029773607850074768,
-0.001480037346482277,
-0.03715715557336807,
-0.04062790051102638,
0.008427413180470467,
0.04530658572912216,
-0.007343904115259647,
0.1151542067527771,
-0.13475841283798218,
-0.048728395253419876,
0.181524395942688,
-0.16153296828269958,
0.00019168853759765625,
0.10198891907930374,
-0.04865596815943718,
0.05704139918088913,
0.13167424499988556,
0.10080989450216293,
0.0858229473233223,
-0.06933105736970901,
0.013663065619766712,
0.06238139793276787,
-0.06774191558361053,
0.05478402227163315,
0.0918952152132988,
-0.02639675885438919,
-0.13595616817474365,
0.028873682022094727,
-0.07198019325733185,
-0.009779238142073154,
-0.02591175027191639,
-0.022462254390120506,
0.005929861217737198,
-0.03893854096531868,
0.026184890419244766,
0.00450501311570406,
0.018707334995269775,
-0.038122404366731644,
-0.08214832097291946,
0.027546564117074013,
0.07610984146595001,
-0.06830143183469772,
0.044257666915655136,
-0.07165662944316864,
0.057331327348947525,
-0.07488368451595306,
-0.00669135432690382,
-0.16639140248298645,
-0.02105584926903248,
0.04565105587244034,
-0.05022637918591499,
0.05005422979593277,
0.09121420234441757,
0.0023784111253917217,
0.12392926216125488,
-0.03855535387992859,
0.003973301500082016,
-0.009697956964373589,
-0.011245114728808403,
-0.05311764031648636,
-0.11837317049503326,
-0.08094137161970139,
-0.06825238466262817,
0.1021387130022049,
-0.07351291179656982,
0.028504064306616783,
-0.06855186820030212,
-0.019863499328494072,
-0.0074017234146595,
-0.059741850942373276,
-0.008363903500139713,
0.008077382110059261,
-0.030080925673246384,
-0.046044494956731796,
0.04686090350151062,
0.051369450986385345,
-0.06304184347391129,
0.07636001706123352,
-0.1042896956205368,
-0.05986420437693596,
0.05310554802417755,
0.01612764224410057,
-0.07751581072807312,
0.09124229848384857,
-0.01929263025522232,
-0.01470043696463108,
-0.05985207110643387,
-0.04513493925333023,
0.1968826949596405,
-0.022694535553455353,
0.10189653933048248,
-0.09078079462051392,
0.003659239038825035,
0.027390848845243454,
-0.048856064677238464,
-0.017193933948874474,
0.0583190992474556,
0.04952520504593849,
-0.18674170970916748,
0.013190895318984985,
0.05172862112522125,
0.07631556689739227,
0.11103984713554382,
0.028288688510656357,
-0.025064101442694664,
-0.04775238782167435,
-0.011484757997095585,
0.006476095411926508,
0.054046012461185455,
-0.023472461849451065,
-0.005894561763852835,
0.03291798382997513,
0.058839455246925354,
0.018978944048285484,
-0.08486214280128479,
0.037161603569984436,
0.06847035884857178,
-0.016375280916690826,
-0.042058221995830536,
-0.028356919065117836,
-0.05879109725356102,
0.06384840607643127,
0.052066780626773834,
0.03780317306518555,
0.02565327286720276,
-0.015596132725477219,
-0.13801951706409454,
0.1858363002538681,
-0.11305330693721771,
-0.2552282512187958,
-0.11318448185920715,
-0.054329708218574524,
-0.02197280526161194,
0.04313771054148674,
0.057573020458221436,
-0.025553852319717407,
-0.04097938537597656,
-0.11679240316152573,
0.060637637972831726,
-0.06531195342540741,
-0.029820516705513,
-0.007656846195459366,
-0.04955728352069855,
-0.016890890896320343,
-0.12724880874156952,
-0.01296299695968628,
-0.029107410460710526,
-0.079290010035038,
0.007751559838652611,
-0.03487497568130493,
0.027259882539510727,
0.13761962950229645,
0.0328950397670269,
-0.020432649180293083,
-0.015315376222133636,
0.1870591938495636,
0.010332714766263962,
0.06143653392791748,
0.11218205094337463,
-0.03315921500325203,
0.056131262332201004,
0.046150125563144684,
0.02359158545732498,
-0.04677926003932953,
0.012574884109199047,
-0.017802616581320763,
-0.12238608300685883,
-0.17306381464004517,
-0.0717235878109932,
-0.0013818559236824512,
0.007034457288682461,
0.020745685324072838,
0.036287613213062286,
0.01949380710721016,
0.03922878950834274,
-0.029070962220430374,
0.029203860089182854,
-0.013812873512506485,
0.07927778363227844,
0.020109541714191437,
-0.07325911521911621,
0.09141790866851807,
-0.060510922223329544,
0.016498230397701263,
0.11118803918361664,
-0.06085938215255737,
0.18767330050468445,
0.025053028017282486,
0.05606422573328018,
0.09825754910707474,
0.02174399048089981,
0.05544709786772728,
0.08790898323059082,
-0.04581748694181442,
0.008238663896918297,
-0.061473727226257324,
-0.05069325864315033,
-0.03877745941281319,
0.053608011454343796,
0.03159966692328453,
0.018804289400577545,
-0.11948145180940628,
0.024306418374180794,
-0.0035470942966639996,
0.13651132583618164,
0.04860057309269905,
-0.11746713519096375,
-0.12062588334083557,
0.037211887538433075,
-0.04654781520366669,
-0.06390651315450668,
0.029534857720136642,
0.06093833968043327,
-0.15429747104644775,
0.04400264844298363,
-0.004652596078813076,
0.06702309846878052,
-0.09468192607164383,
0.014598260633647442,
-0.04838218167424202,
-0.0055734142661094666,
0.00733582116663456,
0.07164621353149414,
-0.132134810090065,
0.10530196875333786,
0.021177329123020172,
0.04781694710254669,
-0.08125503361225128,
0.016614466905593872,
-0.011858942918479443,
0.10543584823608398,
0.1151939332485199,
0.04580812156200409,
-0.05811590328812599,
-0.019745690748095512,
-0.049351323395967484,
0.020190315321087837,
0.058583930134773254,
-0.08017629384994507,
0.060311608016490936,
0.007978483103215694,
0.008580909110605717,
-0.023511413484811783,
0.016038671135902405,
-0.12958724796772003,
-0.12253417074680328,
0.06277969479560852,
-0.07718372344970703,
-0.1001196876168251,
-0.058654941618442535,
-0.061034854501485825,
-0.042530253529548645,
0.21027317643165588,
-0.1196659654378891,
-0.08983063697814941,
-0.0969669446349144,
-0.011485077440738678,
0.04463279992341995,
-0.0679515078663826,
0.04554840177297592,
-0.03652925789356232,
0.09339168667793274,
-0.04896392673254013,
-0.10840955376625061,
0.03519761189818382,
-0.11182376742362976,
-0.11718859523534775,
-0.0439327247440815,
0.10745158791542053,
0.11392652988433838,
0.039281588047742844,
0.012926963157951832,
0.012488536536693573,
0.0005284063518047333,
-0.11619465053081512,
0.016253072768449783,
0.13590207695960999,
-0.0056377071887254715,
0.07134664058685303,
-0.0597834512591362,
0.0300179123878479,
-0.015529964119195938,
-0.001986246556043625,
0.13325783610343933,
0.18920019268989563,
-0.06474898010492325,
0.17732101678848267,
0.20152994990348816,
-0.10675114393234253,
-0.19125309586524963,
-0.056454699486494064,
0.0006067287176847458,
0.04649190604686737,
0.05347657948732376,
-0.1842186003923416,
0.08878542482852936,
0.034376855939626694,
-0.03440888598561287,
0.01974622532725334,
-0.23381821811199188,
-0.11168038845062256,
0.08868841081857681,
0.05155053734779358,
0.1902044713497162,
-0.07883311808109283,
-0.04085566848516464,
-0.017131870612502098,
-0.04336487501859665,
0.04556070268154144,
-0.03620896488428116,
0.08895188570022583,
0.007500549778342247,
-0.027448438107967377,
0.005084996111690998,
-0.030445734038949013,
0.09761552512645721,
0.040693603456020355,
0.020373575389385223,
-0.07120201736688614,
-0.010584894567728043,
0.10927072912454605,
-0.038939159363508224,
0.09720571339130402,
0.044494371861219406,
0.07535885274410248,
-0.10103332996368408,
-0.058936577290296555,
-0.07212035357952118,
0.04287663847208023,
-0.041043348610401154,
-0.05559413135051727,
-0.06318382173776627,
0.056014545261859894,
0.03815488889813423,
0.011407153680920601,
-0.00011751614511013031,
-0.03501700609922409,
0.03965174779295921,
0.09843402355909348,
0.07817429304122925,
-0.03369276598095894,
-0.06827452778816223,
-0.05290904641151428,
-0.048296332359313965,
0.06705304980278015,
-0.09246991574764252,
0.022594163194298744,
0.026882709935307503,
0.011820576153695583,
0.08860670775175095,
0.03273414820432663,
-0.1391894519329071,
0.014154060743749142,
0.03344244882464409,
-0.12203696370124817,
-0.11210304498672485,
-0.021210579201579094,
0.037607431411743164,
-0.036747321486473083,
0.051610954105854034,
0.14860430359840393,
-0.036786723881959915,
-0.03196325898170471,
-0.04794633761048317,
0.038812313228845596,
-0.022105969488620758,
0.047285035252571106,
0.06373453140258789,
0.030391329899430275,
-0.07098789513111115,
0.07691116631031036,
0.040354687720537186,
-0.0347595177590847,
0.04210345447063446,
0.03885677084326744,
-0.09215062111616135,
-0.07696714997291565,
-0.059838660061359406,
0.0908077135682106,
-0.02404393069446087,
-0.04699265956878662,
0.0002947412431240082,
-0.07999657094478607,
0.06577606499195099,
0.06735464930534363,
0.04857929050922394,
0.03754191845655441,
-0.0868774801492691,
0.01605004072189331,
-0.05291919410228729,
0.034657370299100876,
-0.0274293664842844,
-0.004923328757286072,
-0.05383361130952835,
0.06734906136989594,
0.06335553526878357,
0.09928670525550842,
-0.03424958139657974,
-0.0761437639594078,
-0.08108925074338913,
-0.012678630650043488,
-0.06077224761247635,
-0.031005971133708954,
-0.07289661467075348,
-0.007462448440492153,
0.002739070914685726,
-0.006470831111073494,
0.02198919467628002,
0.03601031005382538,
-0.04329594969749451,
-0.019530532881617546,
-0.04092031717300415,
0.03553095459938049,
-0.0608016699552536,
0.004832707345485687,
0.018532780930399895,
-0.03547457978129387,
0.09286865592002869,
0.03620219975709915,
-0.011393212713301182,
0.045715343207120895,
-0.02412593737244606,
0.03458628058433533,
-0.021427664905786514,
0.0022543577942997217,
-0.024554144591093063,
-0.1079559177160263,
-0.005589162930846214,
0.0043591782450675964,
-0.02446819096803665,
0.00851625856012106,
0.06184066832065582,
-0.07378518581390381,
0.08443490415811539,
0.04367566108703613,
-0.03407088667154312,
-0.07168364524841309,
0.03915080800652504,
-0.0172879658639431,
0.03103162907063961,
0.06991804391145706,
-0.03362949192523956,
0.054823167622089386,
-0.09883655607700348,
-0.027000192552804947,
0.005319799762219191,
0.0005093812942504883,
-0.008123869076371193,
-0.055031463503837585,
-0.0049106646329164505,
0.007844300009310246,
0.17209243774414062,
-0.023319613188505173,
0.035252176225185394,
0.014007281512022018,
0.007455674931406975,
0.047734349966049194,
-0.014432815834879875,
0.07398229837417603,
-0.0054823290556669235,
-0.024739012122154236,
-0.016561798751354218,
0.03457912430167198,
0.00633399561047554,
0.0033086445182561874,
0.14036747813224792,
0.047775499522686005,
0.08977723121643066,
0.07544802874326706,
0.017737653106451035,
0.01945870742201805,
-0.13616302609443665,
-0.09047903120517731,
0.005826355889439583,
0.05566967651247978,
-0.01745871640741825,
0.015502791851758957,
0.0923115536570549,
-0.08865107595920563,
0.06965271383523941,
0.05105721578001976,
-0.0473317876458168,
-0.1278383433818817,
-0.1901010274887085,
-0.027670079842209816,
-0.029561353847384453,
-0.011749399825930595,
-0.08809596300125122,
0.019613970071077347,
0.09279173612594604,
0.02165512554347515,
-0.01006629690527916,
0.09093804657459259,
-0.10995490849018097,
-0.030130837112665176,
0.04294273257255554,
-0.026065729558467865,
0.01796868070960045,
0.050897810608148575,
0.020798349753022194,
-0.004400337114930153,
0.048287320882081985,
0.04228569567203522,
0.04508968070149422,
0.027972910553216934,
0.04951315373182297,
-0.023428544402122498,
-0.07484111934900284,
-0.033633727580308914,
-0.006801697425544262,
0.054385311901569366,
0.12821391224861145,
0.02392468973994255,
-0.06693876534700394,
0.006464692763984203,
0.11180785298347473,
-0.03390458971261978,
-0.05182464048266411,
-0.10665171593427658,
0.23975984752178192,
0.022090885788202286,
0.004549723118543625,
-0.00339486263692379,
-0.0482795313000679,
0.003285447135567665,
0.2067759782075882,
0.22052209079265594,
0.0006491825915873051,
-0.009750211611390114,
0.008814547210931778,
-0.010939890518784523,
0.03830241039395332,
0.14518612623214722,
0.001835828647017479,
0.25302213430404663,
-0.04647604376077652,
0.03867735713720322,
-0.04199875891208649,
-0.039082590490579605,
-0.10054165124893188,
0.07619426399469376,
-0.013271695002913475,
0.008859271183609962,
-0.03450601175427437,
0.07064050436019897,
-0.03659922629594803,
-0.17265520989894867,
0.0010991673916578293,
-0.004890410229563713,
-0.06342364847660065,
0.01552377175539732,
0.0032570939511060715,
0.02030133083462715,
0.08385387063026428,
-0.019309919327497482,
-0.005690687336027622,
0.12941715121269226,
0.019142020493745804,
-0.1001369059085846,
-0.05547851324081421,
0.11218609660863876,
0.01813569851219654,
0.14445897936820984,
0.013593985699117184,
0.080406054854393,
0.08582161366939545,
0.021984465420246124,
-0.0944228544831276,
0.042086660861968994,
-0.01988508179783821,
-0.028477594256401062,
0.0066028740257024765,
0.11139445006847382,
-0.010433773510158062,
0.05707518756389618,
0.03025924414396286,
-0.0934576690196991,
0.05853442847728729,
0.010247692465782166,
-0.035652052611112595,
-0.08116810023784637,
0.08136551827192307,
-0.09064122289419174,
0.1558227390050888,
0.11964310705661774,
-0.015353293158113956,
-0.04565739631652832,
-0.026801694184541702,
0.019766684621572495,
-0.0017152153886854649,
0.06113014370203018,
-0.02375265769660473,
-0.13373328745365143,
0.01837630569934845,
-0.08540208637714386,
0.028191503137350082,
-0.24762120842933655,
-0.09025432169437408,
0.027403300628066063,
-0.019265880808234215,
-0.018649794161319733,
0.053099118173122406,
0.046831708401441574,
0.023660844191908836,
-0.035952042788267136,
0.013466567732393742,
-0.03745546564459801,
0.058872003108263016,
-0.10988272726535797,
-0.09391708672046661
] |
null | null | transformers | # MultiBERTs Seed 2 Checkpoint 80k (uncased)
Seed 2 intermediate checkpoint 80k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multiberts-seed-2). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
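As a rough illustration of these two objectives, the checkpoint can be loaded with both pretraining heads attached. This is only a minimal sketch: the assumption that the hosted weights include the MLM and NSP heads, and the example sentences, are not part of the original card.

```python
import torch
from transformers import BertTokenizer, BertForPreTraining

repo_id = "MultiBertGunjanPatrick/multiberts-seed-2-80k"  # repo id from this card's metadata
tokenizer = BertTokenizer.from_pretrained(repo_id)
model = BertForPreTraining.from_pretrained(repo_id)

# Encode a pair of "sentences", as done during pretraining
inputs = tokenizer("The man went to the store.", "He bought a gallon of milk.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.prediction_logits.shape)        # MLM head: (batch, sequence_length, vocab_size)
print(outputs.seq_relationship_logits.shape)  # NSP head: (batch, 2), scores for "is next" vs "not next"
```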
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# Load the tokenizer and the checkpoint weights from the Hub (repo id taken from this card's metadata)
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-2-80k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-2-80k")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="pt")
output = model(**encoded_input)  # output.last_hidden_state contains the token-level features
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
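In practice, that snippet might be adapted along the following lines; treating this repo as loadable through the fill-mask pipeline is an assumption of this sketch, not something the original card states:

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="MultiBertGunjanPatrick/multiberts-seed-2-80k")

# Compare the completions the checkpoint proposes for two otherwise identical prompts
print(unmasker("The man worked as a [MASK]."))
print(unmasker("The woman worked as a [MASK]."))
```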
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a short illustrative sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
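The sketch below mirrors this 80%/10%/10% rule. It closely follows the logic of `DataCollatorForLanguageModeling` in the `transformers` library, but the function name and the single-sequence input are assumptions made purely for illustration:

```python
import torch

def mask_tokens(input_ids, tokenizer, mlm_probability=0.15):
    """Apply BERT-style masking to a 1D tensor of token ids (illustrative sketch)."""
    labels = input_ids.clone()

    # Select 15% of the non-special tokens as prediction targets
    probability_matrix = torch.full(labels.shape, mlm_probability)
    special_tokens_mask = torch.tensor(
        tokenizer.get_special_tokens_mask(labels.tolist(), already_has_special_tokens=True),
        dtype=torch.bool,
    )
    probability_matrix.masked_fill_(special_tokens_mask, value=0.0)
    masked_indices = torch.bernoulli(probability_matrix).bool()
    labels[~masked_indices] = -100  # loss is only computed on the masked positions

    # 80% of the masked tokens become [MASK]
    indices_replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices
    input_ids[indices_replaced] = tokenizer.convert_tokens_to_ids(tokenizer.mask_token)

    # 10% become a random token (half of the remaining 20%)
    indices_random = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~indices_replaced
    input_ids[indices_random] = torch.randint(len(tokenizer), labels.shape, dtype=torch.long)[indices_random]

    # The remaining 10% of the masked tokens are left unchanged
    return input_ids, labels
```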
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
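For reference, a roughly equivalent optimizer and schedule can be set up as below. The use of `torch.optim.AdamW` for "Adam with weight decay" and the model placeholder are assumptions of this sketch, not a reproduction of the original training code:

```python
import torch
from transformers import BertForPreTraining, get_linear_schedule_with_warmup

model = BertForPreTraining.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-2-80k")

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1e-4,             # learning rate from the card
    betas=(0.9, 0.999),  # beta_1 and beta_2
    weight_decay=0.01,
)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10_000,       # linear warmup for 10,000 steps
    num_training_steps=2_000_000,  # two million steps in total, with linear decay after warmup
)
```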
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-2"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-2-80k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-2",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 2 Checkpoint 80k (uncased)
Seed 2 intermediate checkpoint 80k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 2 Checkpoint 80k (uncased)\nSeed 2 intermediate checkpoint 80k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 2 Checkpoint 80k (uncased)\nSeed 2 intermediate checkpoint 80k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 2 Checkpoint 80k (uncased)\nSeed 2 intermediate checkpoint 80k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08397641032934189,
0.00848001055419445,
-0.0022979958448559046,
0.06762798875570297,
0.082708939909935,
0.0027480372227728367,
0.11922678351402283,
0.05198867619037628,
-0.023378264158964157,
0.025266624987125397,
0.08988256752490997,
0.02922992780804634,
0.04314950108528137,
0.06609083712100983,
0.09574903547763824,
-0.2591540813446045,
0.048158541321754456,
-0.06268172711133957,
0.05396603047847748,
0.07431477308273315,
0.10088488459587097,
-0.0702667087316513,
0.061113934963941574,
0.037588611245155334,
-0.08313695341348648,
-0.01750226505100727,
-0.013007600791752338,
-0.03489889204502106,
0.10073816031217575,
0.0692887008190155,
0.06206025928258896,
0.0018880926072597504,
0.06014196574687958,
-0.08978994190692902,
0.015599766746163368,
0.04543961212038994,
-0.0006163893267512321,
0.026859086006879807,
-0.005257520824670792,
0.016649268567562103,
0.11372645199298859,
0.039188288152217865,
0.07731201499700546,
0.034402959048748016,
-0.09413445740938187,
-0.11759234964847565,
-0.07963597774505615,
0.10601092129945755,
0.0545525997877121,
0.03818419948220253,
-0.005866422317922115,
0.07336552441120148,
-0.027197305113077164,
0.07342387735843658,
0.1064402163028717,
-0.2595391273498535,
-0.0067417132668197155,
0.062327928841114044,
0.04366012662649155,
0.0451769083738327,
0.014033935964107513,
0.026753343641757965,
0.004997473210096359,
0.04054747521877289,
0.026120133697986603,
-0.023286521434783936,
0.11942994594573975,
-0.04478994384407997,
-0.14973708987236023,
-0.0448923297226429,
0.1184568703174591,
-0.0060241688042879105,
-0.12606528401374817,
-0.09699451923370361,
-0.033068373799324036,
0.12371955066919327,
-0.002201274037361145,
-0.01916848123073578,
-0.003454639110714197,
0.013937539421021938,
0.024800898507237434,
-0.09573975205421448,
-0.08719909936189651,
-0.03007933497428894,
-0.0361197330057621,
0.12710466980934143,
0.04655808210372925,
0.05317455157637596,
-0.03567330911755562,
0.08657065033912659,
-0.1223161369562149,
-0.04101544991135597,
-0.05157415568828583,
-0.08008696138858795,
-0.019646460190415382,
0.011212182231247425,
-0.028152327984571457,
-0.08504745364189148,
-0.060392919927835464,
0.11605111509561539,
0.037645790725946426,
0.028639093041419983,
-0.007180029526352882,
0.044301994144916534,
0.07326096296310425,
0.09721358120441437,
-0.03589243441820145,
0.04933137819170952,
0.03375917673110962,
-0.023763984441757202,
0.0592043399810791,
-0.05201980471611023,
-0.10440318286418915,
0.07509081810712814,
0.0032882438972592354,
0.03670313209295273,
0.027063708752393723,
0.028905808925628662,
-0.012593837454915047,
-0.07341153174638748,
0.16206032037734985,
-0.0773821622133255,
-0.005606365390121937,
-0.015489145182073116,
0.010803746059536934,
0.04444659501314163,
0.03510228171944618,
-0.003305730875581503,
-0.04653867706656456,
-0.007402431219816208,
-0.054074592888355255,
-0.023268869146704674,
-0.05520034208893776,
-0.117389976978302,
0.0013113562017679214,
-0.04129937291145325,
-0.03241854906082153,
-0.1405256688594818,
-0.21376419067382812,
-0.01752382144331932,
0.06500598788261414,
-0.0017173285596072674,
-0.011358474381268024,
0.023895585909485817,
0.019325943663716316,
-0.020356614142656326,
0.010366611182689667,
-0.04876893013715744,
0.0004386613145470619,
-0.00481170229613781,
-0.03556273132562637,
0.057053618133068085,
-0.04289616271853447,
0.02304832637310028,
-0.06850150227546692,
0.023054176941514015,
-0.21083974838256836,
0.08877631276845932,
-0.030064072459936142,
-0.00395975261926651,
-0.03822813928127289,
-0.04065459966659546,
0.009979471564292908,
0.04538901150226593,
-0.008021746762096882,
0.11469953507184982,
-0.14056019484996796,
-0.05063087120652199,
0.18693360686302185,
-0.16208045184612274,
-0.0007816292345523834,
0.10184560716152191,
-0.04923563078045845,
0.058600522577762604,
0.13245853781700134,
0.10064090043306351,
0.0804411843419075,
-0.07188771665096283,
0.012944960035383701,
0.06282384693622589,
-0.06726192682981491,
0.05696379020810127,
0.09347929060459137,
-0.02599146217107773,
-0.13504160940647125,
0.027607079595327377,
-0.07349743694067001,
-0.010273776948451996,
-0.025379344820976257,
-0.021192558109760284,
0.0054620616137981415,
-0.037072669714689255,
0.026727937161922455,
0.005262960214167833,
0.018277965486049652,
-0.038942500948905945,
-0.08421841263771057,
0.03304579108953476,
0.07678984105587006,
-0.07062703371047974,
0.043685078620910645,
-0.07282838225364685,
0.05891069024801254,
-0.0761089101433754,
-0.005757750011980534,
-0.16766145825386047,
-0.02070240117609501,
0.045705586671829224,
-0.05130424723029137,
0.05170882120728493,
0.09506828337907791,
0.002810926176607609,
0.1238013505935669,
-0.0376964807510376,
0.003957647830247879,
-0.01025194302201271,
-0.010973363183438778,
-0.05341964587569237,
-0.12331423163414001,
-0.0806436687707901,
-0.06824670732021332,
0.10326412320137024,
-0.07502807676792145,
0.02881113439798355,
-0.07164782285690308,
-0.018941771239042282,
-0.007698677480220795,
-0.0600937120616436,
-0.007685227319598198,
0.0070302896201610565,
-0.03120366297662258,
-0.046326473355293274,
0.04789506644010544,
0.0514022521674633,
-0.06286915391683578,
0.07859478145837784,
-0.10760069638490677,
-0.060935307294130325,
0.05347238481044769,
0.013789540156722069,
-0.07712715119123459,
0.0908009260892868,
-0.018722550943493843,
-0.014410206116735935,
-0.05741540342569351,
-0.043396759778261185,
0.19358405470848083,
-0.02323160693049431,
0.10234557092189789,
-0.09161615371704102,
0.0021309126168489456,
0.0272123534232378,
-0.04876556620001793,
-0.017399076372385025,
0.05836381018161774,
0.051441490650177,
-0.19278927147388458,
0.01416512206196785,
0.055156297981739044,
0.07643775641918182,
0.11000537872314453,
0.028215492144227028,
-0.026794474571943283,
-0.048552632331848145,
-0.01074511930346489,
0.006363040767610073,
0.05496068298816681,
-0.028922971338033676,
-0.007085544988512993,
0.033596158027648926,
0.057947929948568344,
0.01811416819691658,
-0.08349405974149704,
0.036857493221759796,
0.06737734377384186,
-0.017029523849487305,
-0.04283975809812546,
-0.028075572103261948,
-0.05873376131057739,
0.0647251307964325,
0.052926648408174515,
0.03661910817027092,
0.02513904497027397,
-0.015355804935097694,
-0.1380666196346283,
0.18583944439888,
-0.11408296227455139,
-0.2574959993362427,
-0.11004653573036194,
-0.053964532911777496,
-0.020975617691874504,
0.043990056961774826,
0.058668576180934906,
-0.025779392570257187,
-0.04166872799396515,
-0.11564913392066956,
0.059916216880083084,
-0.0659671276807785,
-0.030681533738970757,
-0.010030139237642288,
-0.04968441277742386,
-0.01767069473862648,
-0.1277429759502411,
-0.012830030173063278,
-0.029949169605970383,
-0.07466784119606018,
0.007247930392622948,
-0.035358037799596786,
0.02574605494737625,
0.139073446393013,
0.033722374588251114,
-0.019942080602049828,
-0.015273292548954487,
0.18671011924743652,
0.01084379106760025,
0.06039021164178848,
0.11330992728471756,
-0.03505237400531769,
0.05579321086406708,
0.04302460700273514,
0.02286934293806553,
-0.047374941408634186,
0.011562799103558064,
-0.016215845942497253,
-0.12042601406574249,
-0.17415602505207062,
-0.07103245705366135,
-0.001251834910362959,
0.007988955825567245,
0.02027145028114319,
0.035816267132759094,
0.019683632999658585,
0.039026983082294464,
-0.03083164431154728,
0.028293058276176453,
-0.012770164757966995,
0.08050756901502609,
0.020546741783618927,
-0.07505185157060623,
0.09184442460536957,
-0.05982213839888573,
0.01726461760699749,
0.11140211671590805,
-0.061938025057315826,
0.18532085418701172,
0.027010444551706314,
0.05955992639064789,
0.09945046901702881,
0.02034103125333786,
0.054773468524217606,
0.0869595855474472,
-0.04568373039364815,
0.008234100416302681,
-0.06338507682085037,
-0.05204606056213379,
-0.03816232830286026,
0.05481152981519699,
0.030619129538536072,
0.016229543834924698,
-0.11995852738618851,
0.02305522933602333,
-0.003519645659253001,
0.13486546277999878,
0.04955217242240906,
-0.11751898378133774,
-0.12197593599557877,
0.037943072617053986,
-0.04596315324306488,
-0.06488867849111557,
0.027998924255371094,
0.06246545538306236,
-0.1533195823431015,
0.04318339750170708,
-0.005368873476982117,
0.06738866865634918,
-0.09291751682758331,
0.01452704705297947,
-0.052218854427337646,
-0.0048148613423109055,
0.006819283124059439,
0.07251229882240295,
-0.13189399242401123,
0.1031823605298996,
0.02121320180594921,
0.047701697796583176,
-0.08049999177455902,
0.016945000737905502,
-0.01116926595568657,
0.10711710900068283,
0.11497239768505096,
0.04483325034379959,
-0.059299223124980927,
-0.017631549388170242,
-0.04811732843518257,
0.020615242421627045,
0.06049948185682297,
-0.08148364722728729,
0.06103871017694473,
0.0069684539921581745,
0.008034218102693558,
-0.021388491615653038,
0.01535823568701744,
-0.13066937029361725,
-0.12132154405117035,
0.06296684592962265,
-0.07896603643894196,
-0.09782007336616516,
-0.05855800211429596,
-0.06241612881422043,
-0.0475020557641983,
0.21754755079746246,
-0.11797879636287689,
-0.0889500230550766,
-0.09716644883155823,
-0.011704068630933762,
0.04454607143998146,
-0.0673631876707077,
0.044566620141267776,
-0.03723129630088806,
0.09596014022827148,
-0.04892263561487198,
-0.10863378643989563,
0.036994483321905136,
-0.11236310750246048,
-0.11682809889316559,
-0.04506419599056244,
0.10805698484182358,
0.11492942273616791,
0.0393071323633194,
0.014006086625158787,
0.011776680126786232,
-0.00021279789507389069,
-0.11467283219099045,
0.017007725313305855,
0.13647691905498505,
-0.0001333765685558319,
0.07076337188482285,
-0.06061585247516632,
0.03063841536641121,
-0.015406869351863861,
-0.0003039892762899399,
0.13354036211967468,
0.1895272433757782,
-0.06495139002799988,
0.17781907320022583,
0.19993023574352264,
-0.107024647295475,
-0.18934684991836548,
-0.05564573407173157,
-0.0003451099619269371,
0.04520442709326744,
0.05267567187547684,
-0.18643468618392944,
0.08808588981628418,
0.03352626413106918,
-0.03450900316238403,
0.019401073455810547,
-0.23363147675991058,
-0.11079113930463791,
0.09016092866659164,
0.05282854288816452,
0.18976029753684998,
-0.079726442694664,
-0.0401216559112072,
-0.017483677715063095,
-0.03908423334360123,
0.04956453666090965,
-0.03674555569887161,
0.08907666802406311,
0.00823768600821495,
-0.02930314652621746,
0.005354536231607199,
-0.030262066051363945,
0.09804020822048187,
0.03972321003675461,
0.020669756457209587,
-0.07074464857578278,
-0.007750671356916428,
0.10973510891199112,
-0.03959454596042633,
0.09802082180976868,
0.04167744889855385,
0.0752343237400055,
-0.09974751621484756,
-0.059552185237407684,
-0.07199844717979431,
0.04472838342189789,
-0.04097200185060501,
-0.055973757058382034,
-0.06274526566267014,
0.055574845522642136,
0.03603305295109749,
0.012608532793819904,
0.004280542954802513,
-0.036029331386089325,
0.04201335459947586,
0.09153811633586884,
0.0811605453491211,
-0.031028080731630325,
-0.071525439620018,
-0.05359818786382675,
-0.04788248986005783,
0.068137988448143,
-0.09510411322116852,
0.022750139236450195,
0.025631876662373543,
0.012041477486491203,
0.09126612544059753,
0.03266141936182976,
-0.14061671495437622,
0.013320974074304104,
0.03214189037680626,
-0.12361767888069153,
-0.11224931478500366,
-0.019703403115272522,
0.03386758267879486,
-0.03591689094901085,
0.05265076458454132,
0.14912045001983643,
-0.034846119582653046,
-0.032495930790901184,
-0.048335541039705276,
0.038535021245479584,
-0.02093975432217121,
0.046270232647657394,
0.06356774270534515,
0.029834963381290436,
-0.07100380957126617,
0.07557417452335358,
0.03935447335243225,
-0.03789712116122246,
0.044520895928144455,
0.03825777769088745,
-0.09222429990768433,
-0.07755492627620697,
-0.05840551108121872,
0.09529321640729904,
-0.02332133986055851,
-0.04899417608976364,
-0.0004661325365304947,
-0.07876428961753845,
0.0675434023141861,
0.0727437436580658,
0.0488489493727684,
0.038443222641944885,
-0.08686794340610504,
0.01642189361155033,
-0.05268338695168495,
0.03623930364847183,
-0.027443354949355125,
-0.005567438900470734,
-0.055315785109996796,
0.07103243470191956,
0.06431262195110321,
0.09963380545377731,
-0.034444406628608704,
-0.0759064182639122,
-0.08172878623008728,
-0.014333799481391907,
-0.06753018498420715,
-0.029908251017332077,
-0.07392238825559616,
-0.00814211554825306,
0.0026677073910832405,
-0.005861315876245499,
0.024428237229585648,
0.035715147852897644,
-0.04367760196328163,
-0.018542250618338585,
-0.0402202308177948,
0.03714743256568909,
-0.06353633850812912,
0.005771452561020851,
0.01719946786761284,
-0.036652494221925735,
0.09420251101255417,
0.03922897204756737,
-0.01089542731642723,
0.04632628709077835,
-0.026136964559555054,
0.03681236132979393,
-0.0197431743144989,
0.001523920800536871,
-0.02488681860268116,
-0.10984757542610168,
-0.004611929412931204,
0.0037360936403274536,
-0.02506748028099537,
0.007342691533267498,
0.06235356628894806,
-0.07256495952606201,
0.0860159695148468,
0.04573405906558037,
-0.03463004156947136,
-0.07121700048446655,
0.03926621377468109,
-0.019699636846780777,
0.031987376511096954,
0.06982322782278061,
-0.03205864503979683,
0.054170671850442886,
-0.09895715862512589,
-0.027202866971492767,
0.005015880800783634,
0.0000417809933423996,
-0.0075569357722997665,
-0.05452325940132141,
-0.005133957602083683,
0.006819534115493298,
0.1694815754890442,
-0.021978817880153656,
0.03609631210565567,
0.01311960443854332,
0.005112672224640846,
0.050711896270513535,
-0.014822423458099365,
0.0731886476278305,
-0.005543926730751991,
-0.024157170206308365,
-0.016771527007222176,
0.03572458028793335,
0.0066193100064992905,
0.0020024683326482773,
0.1404646635055542,
0.04738137125968933,
0.08631689846515656,
0.07591147720813751,
0.016982093453407288,
0.017113663256168365,
-0.14245128631591797,
-0.0878039225935936,
0.006653637625277042,
0.05573588237166405,
-0.017553362995386124,
0.021804753690958023,
0.09421736747026443,
-0.0884566381573677,
0.06887275725603104,
0.051295120269060135,
-0.047250162810087204,
-0.12879729270935059,
-0.19228807091712952,
-0.02800924889743328,
-0.028386881574988365,
-0.011878001503646374,
-0.08883558213710785,
0.019901093095541,
0.09057978540658951,
0.021749991923570633,
-0.010680953040719032,
0.0878506600856781,
-0.10751566290855408,
-0.032148655503988266,
0.041087254881858826,
-0.025564253330230713,
0.01566879078745842,
0.050115056335926056,
0.02124509960412979,
-0.003208225592970848,
0.047377459704875946,
0.04260670393705368,
0.044952165335416794,
0.030514776706695557,
0.050081487745046616,
-0.02487102709710598,
-0.07476961612701416,
-0.03412827476859093,
-0.005297428462654352,
0.055109694600105286,
0.12997764348983765,
0.02498558722436428,
-0.06777229905128479,
0.0066699134185910225,
0.1116362065076828,
-0.033628419041633606,
-0.05108319967985153,
-0.10620692372322083,
0.2445615828037262,
0.019647745415568352,
0.00399880250915885,
-0.0028734011575579643,
-0.04726609215140343,
0.004202056676149368,
0.20522315800189972,
0.21803215146064758,
0.0031534479930996895,
-0.009288622997701168,
0.009468725882470608,
-0.011396940797567368,
0.03634772449731827,
0.14449504017829895,
0.0025644712150096893,
0.25663575530052185,
-0.046665921807289124,
0.038241542875766754,
-0.042402561753988266,
-0.037931136786937714,
-0.10122130811214447,
0.07449615001678467,
-0.012885194271802902,
0.007994187995791435,
-0.031971827149391174,
0.0713004469871521,
-0.03574277460575104,
-0.17681238055229187,
-0.00049582589417696,
-0.0007146107964217663,
-0.06233285367488861,
0.015845302492380142,
0.0033207349479198456,
0.01932690106332302,
0.08379051089286804,
-0.019325334578752518,
-0.007928959093987942,
0.13197237253189087,
0.019193831831216812,
-0.10002771019935608,
-0.05452679842710495,
0.11302494257688522,
0.012323079630732536,
0.14201968908309937,
0.012584179639816284,
0.08212321996688843,
0.08582490682601929,
0.02253635600209236,
-0.09288258850574493,
0.04101718217134476,
-0.019288210198283195,
-0.027217958122491837,
0.0068942466750741005,
0.1119953840970993,
-0.01074562780559063,
0.06025390699505806,
0.031384747475385666,
-0.09219565242528915,
0.05856674164533615,
0.008212491869926453,
-0.03856733813881874,
-0.08083586394786835,
0.08255399763584137,
-0.09161822497844696,
0.15549495816230774,
0.11926186829805374,
-0.015251596458256245,
-0.046122968196868896,
-0.029175888746976852,
0.02051442675292492,
-0.00010623084381222725,
0.05944005027413368,
-0.024260491132736206,
-0.13291332125663757,
0.019144581630825996,
-0.08258406817913055,
0.02868758887052536,
-0.24814650416374207,
-0.0895799845457077,
0.02823496423661709,
-0.01776875928044319,
-0.01815640926361084,
0.04972498118877411,
0.04565928876399994,
0.023711426183581352,
-0.036883868277072906,
0.013500234112143517,
-0.03710222244262695,
0.05958806350827217,
-0.10994448512792587,
-0.09332014620304108
] |
null | null | transformers | # MultiBERTs Seed 2 Checkpoint 900k (uncased)
Seed 2 intermediate checkpoint 900k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multiberts-seed-2). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
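For raw masked-language-modeling use, a fill-mask pipeline is the quickest way to probe the checkpoint. The sketch below is an illustration only: it assumes the same checkpoint identifier as the snippet in the "How to use" section further down, and the input sentence is arbitrary.
```python
from transformers import pipeline

# Minimal sketch: load the pretraining checkpoint into a fill-mask pipeline.
# 'multiberts-seed-2-900k' is the identifier used elsewhere in this card; adjust it to the
# full hub path if the short name does not resolve in your environment.
unmasker = pipeline('fill-mask', model='multiberts-seed-2-900k')
print(unmasker("The capital of France is [MASK]."))
```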
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
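As an illustration of such fine-tuning, here is a minimal sequence-classification sketch. It is not a full training recipe: the checkpoint identifier matches the snippet in the next section, the two sentences and labels are placeholders, and a real setup would iterate over a proper dataset with a DataLoader.
```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-900k')
# A randomly initialized classification head is added on top of the pretrained encoder.
model = BertForSequenceClassification.from_pretrained('multiberts-seed-2-900k', num_labels=2)

# One toy optimization step on placeholder data.
inputs = tokenizer(["a great movie", "a terrible movie"], padding=True, return_tensors='pt')
labels = torch.tensor([1, 0])
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss = model(**inputs, labels=labels).loss
loss.backward()
optimizer.step()
```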
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-900k')
model = BertModel.from_pretrained("multiberts-seed-2-900k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a rough code sketch of this rule is given after the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
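The following is a simplified sketch of that 80/10/10 rule, written for illustration only: it operates on plain token ids, ignores special tokens and whole-word constraints, and is not the actual pretraining code.
```python
import random

def mask_tokens(token_ids, mask_token_id, vocab_size, mlm_probability=0.15):
    """Apply a BERT-style masking rule to a list of token ids (simplified illustration)."""
    inputs = list(token_ids)
    labels = [-100] * len(token_ids)   # positions the loss should ignore (a common convention)
    for i, token_id in enumerate(token_ids):
        if random.random() < mlm_probability:      # select ~15% of the tokens
            labels[i] = token_id                   # the model must predict the original token here
            roll = random.random()
            if roll < 0.8:                         # 80%: replace with the [MASK] token
                inputs[i] = mask_token_id
            elif roll < 0.9:                       # 10%: replace with a random vocabulary token
                inputs[i] = random.randrange(vocab_size)
            # remaining 10%: leave the token unchanged
    return inputs, labels
```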
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
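A rough sketch of that optimization setup in PyTorch is shown below. It is not the original training code: a freshly initialized BERT-base model stands in for the real setup, decoupled weight decay (AdamW) is assumed for the 0.01 value, and the linear-warmup scheduler comes from transformers.
```python
import torch
from transformers import BertConfig, BertForPreTraining, get_linear_schedule_with_warmup

# A freshly initialized BERT-base model stands in for the real pretraining setup.
model = BertForPreTraining(BertConfig())
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10_000,       # linear warmup over the first 10,000 steps
    num_training_steps=2_000_000,  # two million steps in total, with linear decay afterwards
)
```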
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-2"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-2-900k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-2",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 2 Checkpoint 900k (uncased)
Seed 2 intermediate checkpoint 900k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 2 Checkpoint 900k (uncased)\nSeed 2 intermediate checkpoint 900k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 2 Checkpoint 900k (uncased)\nSeed 2 intermediate checkpoint 900k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-2 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 2 Checkpoint 900k (uncased)\nSeed 2 intermediate checkpoint 900k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-2. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08245228230953217,
0.0013205616269260645,
-0.002221182221546769,
0.06762341409921646,
0.08253993839025497,
0.0026996047236025333,
0.1142711266875267,
0.051795925945043564,
-0.026137903332710266,
0.02398388274013996,
0.09176327288150787,
0.027802642434835434,
0.0446121022105217,
0.06528356671333313,
0.09714775532484055,
-0.25927260518074036,
0.05026596412062645,
-0.0630757212638855,
0.0553923174738884,
0.07471887767314911,
0.10129285603761673,
-0.07042339444160461,
0.060942310839891434,
0.038239121437072754,
-0.08319374173879623,
-0.017775289714336395,
-0.01420939713716507,
-0.03465317189693451,
0.09911344945430756,
0.06984857469797134,
0.06237885355949402,
0.002064371481537819,
0.06095287948846817,
-0.09025532007217407,
0.015488699078559875,
0.04457110911607742,
0.00036889174953103065,
0.025577988475561142,
-0.00845479965209961,
0.016471685841679573,
0.10910697281360626,
0.04057537391781807,
0.07830670475959778,
0.03337235376238823,
-0.093630850315094,
-0.11409680545330048,
-0.08057957887649536,
0.11071333289146423,
0.05655226111412048,
0.03691142052412033,
-0.004138599149882793,
0.07269507646560669,
-0.030289683490991592,
0.07231587171554565,
0.10373502969741821,
-0.25418248772621155,
-0.007511265110224485,
0.0636393129825592,
0.04360910505056381,
0.04101439565420151,
0.01364738866686821,
0.02491467446088791,
0.004331205040216446,
0.038689326494932175,
0.02750392258167267,
-0.022972818464040756,
0.11424615234136581,
-0.04863564297556877,
-0.14971591532230377,
-0.044212885200977325,
0.11423429846763611,
-0.005714992061257362,
-0.12504875659942627,
-0.09705568104982376,
-0.03310540318489075,
0.1203986257314682,
-0.0019318992272019386,
-0.019168373197317123,
-0.0036192042753100395,
0.012737268581986427,
0.025885233655571938,
-0.09895804524421692,
-0.08611959964036942,
-0.031753960996866226,
-0.03827287629246712,
0.13333263993263245,
0.04667801409959793,
0.05351875349879265,
-0.035231463611125946,
0.08892956376075745,
-0.1229376271367073,
-0.04101276770234108,
-0.05432199686765671,
-0.08138073980808258,
-0.018472416326403618,
0.010032691061496735,
-0.025914587080478668,
-0.08491981029510498,
-0.06020830199122429,
0.11535793542861938,
0.04186158627271652,
0.02816331759095192,
-0.0018187444657087326,
0.04321105778217316,
0.07319910824298859,
0.09801619499921799,
-0.036509666591882706,
0.0512177050113678,
0.036042261868715286,
-0.02036312222480774,
0.06031883507966995,
-0.05177011340856552,
-0.1029614731669426,
0.07356558740139008,
0.001786024309694767,
0.03815151005983353,
0.028817377984523773,
0.029377002269029617,
-0.011400488205254078,
-0.07162566483020782,
0.16516715288162231,
-0.07698852568864822,
-0.006817475892603397,
-0.016879964619874954,
0.010327780619263649,
0.04516175389289856,
0.03456827998161316,
-0.004236319568008184,
-0.046585604548454285,
-0.005278759635984898,
-0.05459808185696602,
-0.024026893079280853,
-0.05630810931324959,
-0.11598260700702667,
0.0008258679881691933,
-0.03964322805404663,
-0.03185690566897392,
-0.1422514021396637,
-0.2173299938440323,
-0.01785930059850216,
0.06429316103458405,
-0.0017057955265045166,
-0.01103794015944004,
0.024336911737918854,
0.01792745105922222,
-0.020113669335842133,
0.009168865159153938,
-0.04796086996793747,
0.0008559366688132286,
-0.00448736734688282,
-0.03608251363039017,
0.05734436959028244,
-0.04535296559333801,
0.0222944226115942,
-0.07025890052318573,
0.024323420599102974,
-0.21117228269577026,
0.0904625803232193,
-0.030821461230516434,
-0.0014088377356529236,
-0.03736942261457443,
-0.04103216156363487,
0.0067117903381586075,
0.044549353420734406,
-0.007131954655051231,
0.11563121527433395,
-0.13963192701339722,
-0.04843330383300781,
0.18355035781860352,
-0.1624053418636322,
-0.001481417566537857,
0.09952250123023987,
-0.04966342821717262,
0.06084363907575607,
0.132085382938385,
0.10249044001102448,
0.08888018131256104,
-0.07357624173164368,
0.011584660969674587,
0.06355553865432739,
-0.07097029685974121,
0.053452253341674805,
0.09102056175470352,
-0.026957718655467033,
-0.13473542034626007,
0.027535758912563324,
-0.07575356215238571,
-0.008153490722179413,
-0.025089379400014877,
-0.02218075841665268,
0.005871230736374855,
-0.039757344871759415,
0.02699381113052368,
0.004100046120584011,
0.01914362423121929,
-0.03762664273381233,
-0.08042506128549576,
0.03548908606171608,
0.07764015346765518,
-0.06970801949501038,
0.045590195804834366,
-0.0700729563832283,
0.062211859971284866,
-0.07496780902147293,
-0.0047748517245054245,
-0.1663530468940735,
-0.020786261186003685,
0.04642128944396973,
-0.05822531133890152,
0.05030280351638794,
0.08770384639501572,
0.0018578132148832083,
0.12235523760318756,
-0.03887630254030228,
0.0029643792659044266,
-0.010088158771395683,
-0.009166888892650604,
-0.05308885872364044,
-0.1200910285115242,
-0.08195537328720093,
-0.06780629605054855,
0.09908550977706909,
-0.06913115829229355,
0.029043100774288177,
-0.0705893486738205,
-0.01817714236676693,
-0.008196810260415077,
-0.059597909450531006,
-0.0071347178891301155,
0.007646840997040272,
-0.029055990278720856,
-0.046064846217632294,
0.04742273688316345,
0.05114712566137314,
-0.06449251621961594,
0.07680706679821014,
-0.10328012704849243,
-0.06284178793430328,
0.05423242598772049,
0.016649393364787102,
-0.07921059429645538,
0.09222540259361267,
-0.01858079992234707,
-0.013589395210146904,
-0.059343986213207245,
-0.04530315846204758,
0.19346633553504944,
-0.02280144765973091,
0.10088619589805603,
-0.09287546575069427,
0.002260174136608839,
0.026811011135578156,
-0.04960305988788605,
-0.018238842487335205,
0.05611642822623253,
0.049027420580387115,
-0.19096314907073975,
0.013233436271548271,
0.05496619641780853,
0.0750625878572464,
0.11194741725921631,
0.028385069221258163,
-0.025602713227272034,
-0.04932152107357979,
-0.011888876557350159,
0.00480460561811924,
0.055456988513469696,
-0.024635277688503265,
-0.006324606016278267,
0.03170325979590416,
0.058722712099552155,
0.018414944410324097,
-0.08425364643335342,
0.03696238622069359,
0.0669635608792305,
-0.016141479834914207,
-0.043425798416137695,
-0.02679460495710373,
-0.05944477394223213,
0.06340450048446655,
0.05537112057209015,
0.03925080597400665,
0.02523307502269745,
-0.015749232843518257,
-0.13646650314331055,
0.18638959527015686,
-0.11276902258396149,
-0.2561424970626831,
-0.11118940263986588,
-0.05134303867816925,
-0.022533033043146133,
0.04252806305885315,
0.05745589733123779,
-0.025617238134145737,
-0.04066906496882439,
-0.11591339111328125,
0.06168517470359802,
-0.06345447152853012,
-0.02924156002700329,
-0.009552862495183945,
-0.04923743009567261,
-0.018558114767074585,
-0.1273702085018158,
-0.012193510308861732,
-0.03179708495736122,
-0.07324647903442383,
0.003667847253382206,
-0.033644288778305054,
0.02812802419066429,
0.13818074762821198,
0.03284876421093941,
-0.020741375163197517,
-0.01634778082370758,
0.18933099508285522,
0.010262543335556984,
0.06134207919239998,
0.11373412609100342,
-0.033727098256349564,
0.05665747448801994,
0.04641848802566528,
0.023660585284233093,
-0.04637959972023964,
0.01130583044141531,
-0.018668705597519875,
-0.12168404459953308,
-0.1697065830230713,
-0.07086475193500519,
-0.001934998668730259,
0.003663841634988785,
0.019888661801815033,
0.03464743494987488,
0.018275171518325806,
0.04001441225409508,
-0.02926327846944332,
0.02565012499690056,
-0.012892264872789383,
0.08131945133209229,
0.022138845175504684,
-0.07394284754991531,
0.09088555723428726,
-0.06134021282196045,
0.01716894656419754,
0.10977247357368469,
-0.05953146889805794,
0.18591991066932678,
0.023443683981895447,
0.05141883343458176,
0.0999588668346405,
0.022238098084926605,
0.055497314780950546,
0.08785656094551086,
-0.04634980112314224,
0.008230617269873619,
-0.06105241924524307,
-0.050763584673404694,
-0.03868097811937332,
0.05286455899477005,
0.03400857746601105,
0.018899619579315186,
-0.11988651752471924,
0.021553117781877518,
-0.0025232015177607536,
0.136301189661026,
0.0505366288125515,
-0.11362060904502869,
-0.12267158925533295,
0.038052842020988464,
-0.04728585481643677,
-0.06399601697921753,
0.028706103563308716,
0.058917026966810226,
-0.1543346792459488,
0.04569480940699577,
-0.002444598823785782,
0.06637434661388397,
-0.09333905577659607,
0.013309174217283726,
-0.05127100646495819,
-0.004652496427297592,
0.006860082503408194,
0.07047191262245178,
-0.12859703600406647,
0.10633806139230728,
0.021219700574874878,
0.05030931904911995,
-0.07969053089618683,
0.016517210751771927,
-0.01152030099183321,
0.10545530170202255,
0.11609958857297897,
0.045731596648693085,
-0.05214005708694458,
-0.01840572990477085,
-0.04812539741396904,
0.018480969592928886,
0.059435389935970306,
-0.079098179936409,
0.05990120768547058,
0.008173342794179916,
0.0077507030218839645,
-0.023455297574400902,
0.014557812362909317,
-0.13039499521255493,
-0.12003915011882782,
0.06217608600854874,
-0.08047372847795486,
-0.09913753718137741,
-0.059550248086452484,
-0.0616922564804554,
-0.044099465012550354,
0.20970624685287476,
-0.11858920753002167,
-0.08968357741832733,
-0.09581504762172699,
-0.014676466584205627,
0.04689566045999527,
-0.06803463399410248,
0.04700329899787903,
-0.03836920112371445,
0.09296439588069916,
-0.048998378217220306,
-0.1092568039894104,
0.03536538779735565,
-0.11219903826713562,
-0.11673034727573395,
-0.04427181929349899,
0.1085752323269844,
0.11262767016887665,
0.03915892168879509,
0.014059615321457386,
0.011900540441274643,
0.0008152294903993607,
-0.11719506978988647,
0.015249904245138168,
0.1348712146282196,
-0.007327646017074585,
0.07401678711175919,
-0.058748528361320496,
0.024958819150924683,
-0.016267957165837288,
-0.00025015510618686676,
0.13277016580104828,
0.19055703282356262,
-0.0654454231262207,
0.17806310951709747,
0.20534655451774597,
-0.10625866055488586,
-0.19277817010879517,
-0.05635858327150345,
0.0002207346260547638,
0.04480070620775223,
0.04802461713552475,
-0.18123957514762878,
0.08697356283664703,
0.035052575170993805,
-0.03494703024625778,
0.014294642955064774,
-0.2363559901714325,
-0.11384309828281403,
0.09035608172416687,
0.05314737930893898,
0.18858882784843445,
-0.07766836136579514,
-0.04121845215559006,
-0.018296021968126297,
-0.040273189544677734,
0.049610476940870285,
-0.03453934192657471,
0.08926151692867279,
0.006906447932124138,
-0.032458070665597916,
0.004270095378160477,
-0.03120286762714386,
0.09774117171764374,
0.039818283170461655,
0.020932266488671303,
-0.07146956771612167,
-0.008490659296512604,
0.11257235705852509,
-0.0404178686439991,
0.09857912361621857,
0.04817698150873184,
0.07597650587558746,
-0.09683398902416229,
-0.05883168801665306,
-0.07419659197330475,
0.043958473950624466,
-0.04078352451324463,
-0.055602531880140305,
-0.06552395969629288,
0.05719243362545967,
0.03899187222123146,
0.011295115575194359,
0.000685226172208786,
-0.03540732339024544,
0.040584906935691833,
0.09372615069150925,
0.07965391874313354,
-0.030918121337890625,
-0.0706830769777298,
-0.050241511315107346,
-0.049735911190509796,
0.06793041527271271,
-0.09301161766052246,
0.020894579589366913,
0.025725360959768295,
0.012873859144747257,
0.08815129846334457,
0.03288568556308746,
-0.13900569081306458,
0.014294295571744442,
0.03334932029247284,
-0.12162094563245773,
-0.11313480138778687,
-0.018818043172359467,
0.03055673837661743,
-0.038060352206230164,
0.052562884986400604,
0.1459975689649582,
-0.0358940064907074,
-0.03223424404859543,
-0.048440612852573395,
0.03870788589119911,
-0.02292129024863243,
0.04731731116771698,
0.06542646139860153,
0.029828788712620735,
-0.07087548077106476,
0.07634029537439346,
0.039323270320892334,
-0.03700123727321625,
0.04242468997836113,
0.037435416132211685,
-0.09301812946796417,
-0.07626572251319885,
-0.06191932410001755,
0.0898468866944313,
-0.02530588023364544,
-0.04689963907003403,
0.0012877602130174637,
-0.08089670538902283,
0.06603351980447769,
0.07722721993923187,
0.049658454954624176,
0.03797242045402527,
-0.08746199309825897,
0.016969019547104836,
-0.05284326896071434,
0.034990232437849045,
-0.026518089696764946,
-0.005340054631233215,
-0.0555199533700943,
0.0683552622795105,
0.06422068923711777,
0.09971833229064941,
-0.034782182425260544,
-0.07565753161907196,
-0.082672119140625,
-0.013537973165512085,
-0.06258843094110489,
-0.032138895243406296,
-0.07722635567188263,
-0.005803197622299194,
0.002190010854974389,
-0.006154362112283707,
0.022433962672948837,
0.03558079153299332,
-0.04300394654273987,
-0.018779151141643524,
-0.03973733261227608,
0.0361173152923584,
-0.06211042031645775,
0.005076772533357143,
0.01865113154053688,
-0.03571157902479172,
0.0935412049293518,
0.03760315477848053,
-0.008921307511627674,
0.046156689524650574,
-0.022618088871240616,
0.0347415953874588,
-0.020376555621623993,
0.002930904971435666,
-0.024647990241646767,
-0.10924704372882843,
-0.004898810293525457,
0.004348715767264366,
-0.02403266727924347,
0.009469941258430481,
0.0629800409078598,
-0.07322321832180023,
0.08536838740110397,
0.04482047259807587,
-0.03309023007750511,
-0.07095903158187866,
0.040843915194272995,
-0.01405419036746025,
0.031756915152072906,
0.07064080238342285,
-0.03293975442647934,
0.05441516265273094,
-0.09737314283847809,
-0.026922497898340225,
0.004134689457714558,
-0.0012311730533838272,
-0.007794797420501709,
-0.054149217903614044,
-0.0048711858689785,
0.008262049406766891,
0.17305447161197662,
-0.024744156748056412,
0.03591499477624893,
0.014595337212085724,
0.00833801832050085,
0.04910852387547493,
-0.01306738331913948,
0.0749347060918808,
-0.005227923393249512,
-0.026032065972685814,
-0.01581859402358532,
0.03611081838607788,
0.005884114652872086,
0.0009811446070671082,
0.14086496829986572,
0.04654715955257416,
0.09019309282302856,
0.07740749418735504,
0.018174512311816216,
0.02013503946363926,
-0.13415399193763733,
-0.09251201897859573,
0.005160903558135033,
0.054467275738716125,
-0.017291609197854996,
0.015309449285268784,
0.09041422605514526,
-0.08852995187044144,
0.06894564628601074,
0.0523468554019928,
-0.04869432374835014,
-0.12767094373703003,
-0.19049003720283508,
-0.02659749425947666,
-0.02805299125611782,
-0.0111083984375,
-0.0877896249294281,
0.01860596239566803,
0.08608031272888184,
0.020799100399017334,
-0.010150182992219925,
0.09361711889505386,
-0.10951794683933258,
-0.03082088753581047,
0.043534234166145325,
-0.02607819437980652,
0.01604512706398964,
0.05075423792004585,
0.021502304822206497,
-0.003937019035220146,
0.04627414792776108,
0.040772873908281326,
0.04555552080273628,
0.027221793308854103,
0.04920575022697449,
-0.023163476958870888,
-0.07381158322095871,
-0.03297679126262665,
-0.0049196952022612095,
0.05709037929773331,
0.1324155032634735,
0.025039352476596832,
-0.06704087555408478,
0.005725040100514889,
0.11044106632471085,
-0.033727776259183884,
-0.05237768217921257,
-0.1064663976430893,
0.24071775376796722,
0.022580567747354507,
0.004038239363580942,
-0.0026903338730335236,
-0.04832038655877113,
0.0018152743577957153,
0.20801688730716705,
0.22006356716156006,
0.002658042125403881,
-0.00862423237413168,
0.009485326707363129,
-0.011079300194978714,
0.03743273392319679,
0.1449805647134781,
0.002293352037668228,
0.2538914382457733,
-0.046105943620204926,
0.040342509746551514,
-0.041581783443689346,
-0.03965209797024727,
-0.09978091716766357,
0.07494067400693893,
-0.012942766770720482,
0.007237022276967764,
-0.03493103012442589,
0.07160218060016632,
-0.039128534495830536,
-0.1752358376979828,
0.0013164738193154335,
-0.004533899948000908,
-0.06270930171012878,
0.014949562959372997,
0.0013279346749186516,
0.019974635913968086,
0.08320696651935577,
-0.018138088285923004,
-0.006062729749828577,
0.12983325123786926,
0.02001827582716942,
-0.09946131706237793,
-0.05996610224246979,
0.1139250248670578,
0.01795290596783161,
0.1439020335674286,
0.012967566959559917,
0.08084037899971008,
0.08587266504764557,
0.021680938079953194,
-0.09582575410604477,
0.04283539205789566,
-0.01844792440533638,
-0.028556806966662407,
0.007161094807088375,
0.11200886964797974,
-0.009601513855159283,
0.059178393334150314,
0.029663292691111565,
-0.09475092589855194,
0.05911687761545181,
0.008993662893772125,
-0.0367606058716774,
-0.08279845863580704,
0.0826118215918541,
-0.09137124568223953,
0.15624657273292542,
0.11936596035957336,
-0.015586656518280506,
-0.04610641300678253,
-0.027028867974877357,
0.01871621236205101,
0.0002684555947780609,
0.05994780734181404,
-0.025139976292848587,
-0.13319353759288788,
0.019060924649238586,
-0.08874596655368805,
0.028445441275835037,
-0.2446097433567047,
-0.09033794701099396,
0.027852598577737808,
-0.01897880807518959,
-0.017237532883882523,
0.052134089171886444,
0.04384765774011612,
0.025379782542586327,
-0.035610608756542206,
0.011256755329668522,
-0.03840000927448273,
0.05774124711751938,
-0.11064755916595459,
-0.09415996074676514
] |
null | null | transformers | # MultiBERTs Seed 0 (uncased)
Seed 0 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
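For the next-sentence-prediction use mentioned above, a minimal sketch is given below. It is an illustration only: it assumes the checkpoint identifier used in the "How to use" section, and the two example sentences are arbitrary.
```python
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0')
model = BertForNextSentencePrediction.from_pretrained('multiberts-seed-0')

prompt = "The sky was clear that morning."
candidate = "She decided to walk to work."
encoding = tokenizer(prompt, candidate, return_tensors='pt')
logits = model(**encoding).logits
# Index 0 scores "candidate follows prompt", index 1 scores "candidate is a random sentence".
print(torch.softmax(logits, dim=-1))
```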
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0')
model = BertModel.from_pretrained("multiberts-seed-0")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-2 | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 0 (uncased)
Seed 0 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitations and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 0 (uncased)\n\nSeed 0 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\nThe model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 0 (uncased)\n\nSeed 0 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\nThe model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
63,
111,
335,
134,
25,
95,
48,
3,
222,
110,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 0 (uncased)\n\nSeed 0 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.06825247406959534,
0.027449268847703934,
-0.0021626802626997232,
0.09413602948188782,
0.07635393738746643,
0.026495488360524178,
0.15437674522399902,
0.029963307082653046,
-0.03573239967226982,
0.021267801523208618,
0.10619504749774933,
0.03782356157898903,
0.03388210013508797,
0.035308390855789185,
0.066785529255867,
-0.2578813433647156,
0.07567903399467468,
-0.05793163925409317,
0.040864333510398865,
0.059090327471494675,
0.10602577030658722,
-0.07069262117147446,
0.07895290851593018,
0.04403890669345856,
-0.0756942480802536,
-0.027663996443152428,
-0.005503433756530285,
-0.034674178808927536,
0.07060743123292923,
0.09438986331224442,
0.05877054110169411,
-0.008264455944299698,
0.05975931137800217,
-0.087635338306427,
0.019257638603448868,
0.024562222883105278,
-0.007006383966654539,
0.036696210503578186,
0.025804642587900162,
-0.009673221036791801,
0.11283443868160248,
0.02619457244873047,
0.08560121059417725,
0.04041407257318497,
-0.08754345774650574,
-0.09977805614471436,
-0.0694802924990654,
0.09317219257354736,
0.02764834463596344,
0.04353900998830795,
-0.0063711777329444885,
0.07313166558742523,
-0.006663286592811346,
0.058924756944179535,
0.08212147653102875,
-0.23674309253692627,
-0.023082595318555832,
0.05118638277053833,
0.04846370965242386,
0.04278615117073059,
0.013536407612264156,
0.031959742307662964,
0.005570597946643829,
0.04724816232919693,
0.006345914676785469,
-0.028150685131549835,
0.13924768567085266,
-0.053803253918886185,
-0.13665056228637695,
-0.03023041971027851,
0.15811696648597717,
0.02479265071451664,
-0.11351540684700012,
-0.11277355998754501,
0.0016996730118989944,
0.1693311333656311,
-0.0019645756110548973,
-0.007584595121443272,
-0.009904063306748867,
-0.0030730916187167168,
0.024124154821038246,
-0.1230793297290802,
-0.08302900195121765,
-0.02286745235323906,
-0.06280194967985153,
0.15275688469409943,
0.047940537333488464,
0.07110750675201416,
-0.06045709177851677,
0.04197261482477188,
-0.14955590665340424,
-0.036801956593990326,
-0.04978496953845024,
-0.09940676391124725,
0.017188318073749542,
0.02796531654894352,
-0.044329117983579636,
-0.11630523204803467,
-0.03652356192469597,
0.0725361704826355,
0.038227953016757965,
0.03685189411044121,
-0.005693042650818825,
0.029456961899995804,
0.10580474138259888,
0.10501816868782043,
-0.0562795028090477,
0.07449519634246826,
0.020974641665816307,
-0.020636841654777527,
0.03971032053232193,
-0.05628065764904022,
-0.12330584228038788,
0.0744452103972435,
-0.034096408635377884,
0.018313465639948845,
0.023749854415655136,
0.04198585823178291,
-0.012982374057173729,
-0.0767536610364914,
0.14133483171463013,
-0.09305756539106369,
0.0004417812451720238,
-0.0035654937382787466,
0.016869794577360153,
0.08157093822956085,
0.02621583268046379,
0.0021266003604978323,
-0.059168532490730286,
-0.03080003336071968,
-0.06315429508686066,
-0.027340907603502274,
-0.06021827086806297,
-0.13162744045257568,
0.0013580089434981346,
-0.020953699946403503,
-0.014699130319058895,
-0.10742536187171936,
-0.17884144186973572,
-0.01402769424021244,
0.07123412191867828,
-0.014155296608805656,
0.011412929743528366,
-0.0021266068797558546,
0.012132527306675911,
-0.004981525242328644,
0.032173626124858856,
-0.03745890408754349,
0.00908223818987608,
-0.012201073579490185,
-0.06731266528367996,
0.039806246757507324,
-0.12071730941534042,
0.04209677502512932,
-0.05578748881816864,
0.011489223688840866,
-0.19638846814632416,
0.10738702118396759,
-0.02783583477139473,
-0.04278886318206787,
-0.04810495674610138,
-0.05834455043077469,
0.0188974030315876,
0.045517146587371826,
-0.015527524054050446,
0.10550028085708618,
-0.12357760965824127,
-0.0512409433722496,
0.15865573287010193,
-0.1566506326198578,
0.016810515895485878,
0.10513904690742493,
-0.06748288869857788,
0.042335763573646545,
0.14426475763320923,
0.07841357588768005,
0.07015632092952728,
-0.04069618880748749,
0.017828572541475296,
0.060336943715810776,
-0.0458533950150013,
0.0799841359257698,
0.10583654791116714,
-0.015437023714184761,
-0.13057377934455872,
0.030710875988006592,
-0.06833602488040924,
-0.03600694239139557,
-0.022659340873360634,
-0.024447504431009293,
0.014145502820611,
-0.052795182913541794,
0.05715940147638321,
-0.010484781116247177,
0.006331292912364006,
-0.0232611745595932,
-0.07422537356615067,
0.07731874287128448,
0.07671873271465302,
-0.08619971573352814,
0.018436623737215996,
-0.0909656435251236,
0.03130660206079483,
-0.06597552448511124,
-0.005088436417281628,
-0.14390107989311218,
-0.04274594411253929,
0.031965915113687515,
-0.0805630162358284,
0.09851419925689697,
0.11271710693836212,
0.008409101516008377,
0.11310183256864548,
-0.04617488384246826,
0.02628052979707718,
-0.012368079274892807,
-0.006386269349604845,
-0.044110074639320374,
-0.14293555915355682,
-0.06652771681547165,
-0.06382939964532852,
0.0834670290350914,
-0.059091683477163315,
0.020797124132514,
-0.08205804973840714,
-0.041816260665655136,
-0.0250774584710598,
-0.04668354615569115,
0.005325498059391975,
0.00811565201729536,
-0.013542650267481804,
-0.030526084825396538,
0.04050645977258682,
0.027077049016952515,
-0.0918835997581482,
0.08847370743751526,
-0.1236613318324089,
-0.0576145313680172,
0.06846176087856293,
-0.0069316960871219635,
-0.04083865508437157,
0.09554298222064972,
0.011831864714622498,
-0.01123481709510088,
-0.057707928121089935,
-0.04657518118619919,
0.22045092284679413,
-0.020844273269176483,
0.08364406228065491,
-0.11240328848361969,
0.004931592382490635,
0.03506753221154213,
-0.06102532893419266,
-0.05918964743614197,
0.07589934766292572,
0.038565460592508316,
-0.2161455750465393,
0.024600330740213394,
0.07306224852800369,
0.061481211334466934,
0.1421050727367401,
0.02417578175663948,
-0.02878376469016075,
-0.06042608246207237,
-0.017261460423469543,
-0.012187670916318893,
0.05919060483574867,
-0.04688645899295807,
0.0030246214009821415,
0.0510857030749321,
0.05463946610689163,
0.018327711150050163,
-0.06600221991539001,
0.02497151307761669,
0.05208776146173477,
-0.017216674983501434,
-0.06310763210058212,
-0.05255124717950821,
-0.03947900980710983,
0.0736318975687027,
0.041184503585100174,
0.0495072677731514,
0.0537080317735672,
-0.019612858071923256,
-0.1381978541612625,
0.16529735922813416,
-0.13489660620689392,
-0.2240476906299591,
-0.12759706377983093,
-0.07904494553804398,
-0.07838001847267151,
0.039492446929216385,
0.0373598076403141,
-0.03468242287635803,
-0.05113789439201355,
-0.10579567402601242,
0.06591805815696716,
-0.11658145487308502,
-0.057194799184799194,
0.014129210263490677,
-0.056258611381053925,
-0.005652858875691891,
-0.1268719583749771,
-0.010539324954152107,
-0.026957646012306213,
-0.07912764698266983,
0.004068336449563503,
-0.04539388418197632,
0.010077799670398235,
0.13516394793987274,
0.008290649391710758,
-0.009709829464554787,
-0.015056753531098366,
0.19663433730602264,
0.0314871110022068,
0.04356053099036217,
0.12803813815116882,
-0.06543856859207153,
0.05768699571490288,
0.02060154639184475,
0.037481535226106644,
-0.04913286864757538,
-0.0007067807018756866,
-0.027622418478131294,
-0.11730992794036865,
-0.207548126578331,
-0.06663559377193451,
0.007457428611814976,
0.008368045091629028,
0.01904660277068615,
0.015689538791775703,
0.024972863495349884,
0.05414750799536705,
-0.031031470745801926,
0.03179151564836502,
0.033982276916503906,
0.05688050761818886,
0.06225617602467537,
-0.06120002269744873,
0.09507381916046143,
-0.07100313901901245,
0.027307022362947464,
0.10875560343265533,
-0.07062242925167084,
0.16170385479927063,
0.04285769164562225,
0.05423576757311821,
0.09659373760223389,
0.0006577670574188232,
0.0585428811609745,
0.10273323953151703,
-0.06317441910505295,
0.019947808235883713,
-0.07513642311096191,
-0.05752627179026604,
-0.04452991858124733,
0.060025766491889954,
0.037611961364746094,
-0.000131998211145401,
-0.10182826220989227,
0.03220826014876366,
-0.036235980689525604,
0.07729616016149521,
0.06343917548656464,
-0.10670174658298492,
-0.10046673566102982,
0.045665811747312546,
-0.04038289934396744,
-0.08793723583221436,
0.03426353633403778,
0.08077984303236008,
-0.14119762182235718,
0.06124391779303551,
0.018283551558852196,
0.07126335799694061,
-0.09752818942070007,
0.01132874470204115,
-0.06905651092529297,
0.016318362206220627,
0.005033754277974367,
0.0913831889629364,
-0.1432204693555832,
0.10583388805389404,
0.02708813175559044,
0.04597454518079758,
-0.09043684601783752,
0.01613154262304306,
-0.01261853240430355,
0.07669144868850708,
0.12108297646045685,
0.04203776270151138,
-0.05836430937051773,
-0.018112843856215477,
-0.06768153607845306,
0.034427788108587265,
0.07278922200202942,
-0.04098799079656601,
0.038899462670087814,
0.0012810318730771542,
0.016169004142284393,
-0.008310851640999317,
0.020610321313142776,
-0.13600048422813416,
-0.14560562372207642,
0.0705970749258995,
-0.06633393466472626,
-0.08288760483264923,
-0.03709196671843529,
-0.06633897125720978,
-0.0868702232837677,
0.15359032154083252,
-0.0773216113448143,
-0.1108812615275383,
-0.10497688502073288,
0.004697326570749283,
0.06842926889657974,
-0.06570008397102356,
0.05184205248951912,
-0.05175790935754776,
0.09120817482471466,
-0.03778978809714317,
-0.10993549227714539,
0.017024382948875427,
-0.09169412404298782,
-0.11230003088712692,
-0.030281051993370056,
0.09025070071220398,
0.15063974261283875,
0.05137326568365097,
0.024738965556025505,
0.016462495550513268,
0.0016304273158311844,
-0.12906411290168762,
0.004929570481181145,
0.143439382314682,
0.01773710548877716,
0.0976557806134224,
-0.06279069185256958,
-0.02821265161037445,
-0.012585094198584557,
-0.0009578559547662735,
0.13525930047035217,
0.1579957902431488,
-0.06031216308474541,
0.15296214818954468,
0.227834090590477,
-0.10105094313621521,
-0.19415637850761414,
-0.07397069036960602,
0.0032560182735323906,
0.04487091302871704,
0.045912403613328934,
-0.19948574900627136,
0.09972882270812988,
0.04975741356611252,
-0.013423530384898186,
-0.03354128822684288,
-0.18906579911708832,
-0.1023210883140564,
0.1062556803226471,
0.06369950622320175,
0.19807088375091553,
-0.06803785264492035,
-0.04169449210166931,
-0.04189038649201393,
-0.05597612261772156,
0.09557583183050156,
-0.011712346225976944,
0.0822327509522438,
0.01643332466483116,
0.014923296868801117,
-0.0019287541508674622,
-0.008046919479966164,
0.11012726277112961,
0.04542766511440277,
0.018416037783026695,
-0.07320156693458557,
-0.0423104427754879,
0.10889390110969543,
-0.03202357143163681,
0.12254303693771362,
0.03122953698039055,
0.05849093571305275,
-0.0764583870768547,
-0.06015930324792862,
-0.08313038945198059,
0.012603376060724258,
-0.04008830338716507,
-0.05228453874588013,
-0.051481351256370544,
0.03643445670604706,
0.02559221349656582,
0.013383354060351849,
-0.010037007741630077,
-0.0581706240773201,
0.009901179000735283,
0.0659501925110817,
0.15930500626564026,
-0.013111893087625504,
-0.06732219457626343,
-0.07006201148033142,
-0.060269180685281754,
0.04847850278019905,
-0.10283331573009491,
0.0321035273373127,
0.020586064085364342,
-0.0036565132904797792,
0.11348927021026611,
0.03316955640912056,
-0.11396678537130356,
0.013628019951283932,
0.005912423133850098,
-0.09849600493907928,
-0.1485224962234497,
-0.016377072781324387,
0.05456313490867615,
-0.0583408921957016,
0.03962210938334465,
0.1586087942123413,
-0.02749052457511425,
-0.033682480454444885,
-0.05674935132265091,
0.032430585473775864,
-0.034874096512794495,
0.03596019372344017,
0.08030854165554047,
0.016163216903805733,
-0.08148041367530823,
0.06100435554981232,
0.04497561603784561,
-0.01565445587038994,
0.06611718982458115,
0.01751827821135521,
-0.07064318656921387,
-0.08515681326389313,
-0.06657058000564575,
0.11521587520837784,
-0.04193677753210068,
-0.06614658236503601,
0.0494990199804306,
-0.10936599224805832,
0.06512928009033203,
0.09400998800992966,
0.03727183863520622,
0.046071093529462814,
-0.08464010059833527,
0.006473809480667114,
-0.037655625492334366,
0.03303447365760803,
-0.03967699408531189,
-0.03299032896757126,
-0.04207788407802582,
0.02865336276590824,
0.0594131164252758,
0.09625885635614395,
-0.03653799742460251,
-0.07748300582170486,
-0.08829360455274582,
-0.013138281181454659,
-0.10569687932729721,
-0.006850461475551128,
-0.06914658099412918,
0.00014194706454873085,
0.007000140380114317,
-0.02822837233543396,
0.030307123437523842,
0.033606212586164474,
-0.0512661337852478,
-0.008813504129648209,
-0.02892981842160225,
0.05861987918615341,
-0.07071447372436523,
0.012725180014967918,
0.015199657529592514,
-0.01911322958767414,
0.09222348034381866,
0.047224029898643494,
-0.03322954475879669,
0.05148611217737198,
-0.03994745388627052,
0.03518182411789894,
-0.04691552743315697,
0.007639196235686541,
-0.02100628986954689,
-0.11349901556968689,
-0.021261068060994148,
0.010819608345627785,
-0.023444410413503647,
0.01614448055624962,
0.07291702181100845,
-0.051247432827949524,
0.0827048048377037,
0.06047651544213295,
-0.049000177532434464,
-0.055763885378837585,
0.04004162549972534,
0.0009079426527023315,
0.017973260954022408,
0.0793890655040741,
0.0011681190226227045,
0.053140703588724136,
-0.08328671008348465,
0.0013423850759863853,
0.0043635861948132515,
-0.016782283782958984,
-0.019065728411078453,
-0.07158057391643524,
-0.000623882282525301,
0.009545178152620792,
0.17526990175247192,
-0.004971030168235302,
-0.019934196025133133,
0.005758095532655716,
0.06719693541526794,
0.033424317836761475,
0.004426124505698681,
0.08463965356349945,
-0.018342992290854454,
-0.01793844997882843,
-0.017587680369615555,
0.026691239327192307,
-0.01080797053873539,
0.016537122428417206,
0.1315390020608902,
0.04961226135492325,
0.11255703866481781,
0.07479852437973022,
0.05499632656574249,
0.052345164120197296,
-0.10784098505973816,
-0.06925129890441895,
0.03605833277106285,
0.05536176264286041,
-0.034931864589452744,
0.02555268630385399,
0.05937255546450615,
-0.09513229876756668,
0.0820266455411911,
0.046595025807619095,
-0.05803784728050232,
-0.1295481026172638,
-0.2191641926765442,
-0.042123790830373764,
-0.010218853130936623,
-0.020777955651283264,
-0.10785381495952606,
0.027329251170158386,
0.0930030569434166,
0.03945063054561615,
-0.02234741672873497,
0.0657259151339531,
-0.15022647380828857,
-0.03686198964715004,
0.03966449946165085,
-0.014821960590779781,
0.022462747991085052,
0.048782214522361755,
0.01900356635451317,
0.014281739480793476,
0.0744381994009018,
0.051359422504901886,
0.043146438896656036,
0.054591625928878784,
0.02954341098666191,
-0.04896369203925133,
-0.08800899237394333,
-0.04467042535543442,
0.0032379510812461376,
0.058675315231084824,
0.12987293303012848,
0.010792074725031853,
-0.06998851895332336,
0.0024203723296523094,
0.06055322289466858,
-0.01847190037369728,
-0.08398778736591339,
-0.11259135603904724,
0.21841737627983093,
-0.022776726633310318,
0.011702751740813255,
-0.0013669170439243317,
-0.03545460104942322,
0.020076904445886612,
0.20618940889835358,
0.26152077317237854,
-0.02222667820751667,
-0.01586262136697769,
0.010568449273705482,
0.0001846584491431713,
0.03695659339427948,
0.12577201426029205,
-0.02777884714305401,
0.22359472513198853,
-0.046777449548244476,
0.06737222522497177,
-0.05537553131580353,
-0.014299402013421059,
-0.07450424134731293,
0.061424657702445984,
-0.001578204333782196,
-0.01836337149143219,
-0.014155775308609009,
0.06984956562519073,
-0.04071302339434624,
-0.12650424242019653,
-0.029551919549703598,
0.005514103919267654,
-0.058359190821647644,
0.011046874336898327,
0.0020564431324601173,
0.03376493230462074,
0.07748642563819885,
-0.01588892936706543,
-0.0020990539342164993,
0.13050198554992676,
0.01098928228020668,
-0.10912102460861206,
-0.037600722163915634,
0.12838557362556458,
0.018519911915063858,
0.1340782791376114,
0.04876743629574776,
0.08712469041347504,
0.07130827009677887,
0.015149479731917381,
-0.06677284836769104,
0.03636588156223297,
-0.028407320380210876,
0.019770564511418343,
0.004539488349109888,
0.10587862133979797,
-0.010519773699343204,
0.07475674152374268,
0.016607699915766716,
-0.0808752030134201,
0.05683104693889618,
0.008673112839460373,
-0.07627810537815094,
-0.03255736455321312,
0.1042289137840271,
-0.11158230900764465,
0.14271792769432068,
0.13774631917476654,
-0.005030146799981594,
-0.07176224142313004,
-0.012138426303863525,
0.027100618928670883,
-0.008060954511165619,
0.04774492606520653,
-0.029893167316913605,
-0.13074781000614166,
0.00018004095181822777,
-0.09478544443845749,
0.04576292634010315,
-0.24173954129219055,
-0.06664414703845978,
0.016213994473218918,
-0.000884735956788063,
-0.028645452111959457,
0.030585195869207382,
0.061639197170734406,
-0.0040400829166173935,
-0.03497268259525299,
0.029452037066221237,
-0.028589975088834763,
0.03562405705451965,
-0.07439378648996353,
-0.0681467354297638
] |
null | null | transformers | # MultiBERTs Seed 20 (uncased)
Seed 20 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-20')
model = BertModel.from_pretrained("multiberts-seed-20")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
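The checkpoint can also be queried directly for masked-token predictions. The sketch below assumes the masked-language-modeling head loads via `BertForMaskedLM` (loading may warn about unused next-sentence-prediction weights); the example sentence is arbitrary.
```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-20')
model = BertForMaskedLM.from_pretrained('multiberts-seed-20')

text = "The capital of France is [MASK]."
inputs = tokenizer(text, return_tensors='pt')
with torch.no_grad():
    logits = model(**inputs).logits

# Locate the [MASK] position and print its top-5 predicted tokens.
mask_positions = (inputs['input_ids'][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
top_ids = logits[0, mask_positions[0]].topk(5).indices
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))
```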
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in
the other cases, sentence B is another random sentence from the corpus. Note that what is considered a sentence here is a
consecutive span of text, usually longer than a single sentence. The only constraint is that the two
"sentences" have a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a short code sketch follows this list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
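This 15% selection followed by the 80/10/10 split is the same scheme implemented by `DataCollatorForLanguageModeling` in the `transformers` library, so the dynamic masking can be reproduced in a few lines (a sketch; the example sentence is arbitrary):
```python
from transformers import BertTokenizer, DataCollatorForLanguageModeling

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-20')
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer,
    mlm=True,              # enable masked-language-modeling masking
    mlm_probability=0.15,  # 15% of tokens selected, then split 80/10/10 as described above
)

encoded = tokenizer("The quick brown fox jumps over the lazy dog.")
batch = collator([encoded])
print(batch['input_ids'])  # some positions replaced by [MASK] or random tokens
print(batch['labels'])     # -100 everywhere except the selected positions
```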
### Pretraining
The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
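Put concretely, and assuming the schedule ramps linearly from zero to the peak rate over the warmup steps and then decays linearly back to zero at the final step (the exact endpoints are an assumption, not stated above), the learning rate at step \\(t\\) out of \\(T = 2{,}000{,}000\\) is \\(\mathrm{lr}(t) = 10^{-4} \cdot \min\!\left(\tfrac{t}{10{,}000},\ \tfrac{T - t}{T - 10{,}000}\right)\\).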
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-20 | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 20 (uncased)
Seed 20 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitations and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 20 (uncased)\n\nSeed 20 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\nThe model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 20 (uncased)\n\nSeed 20 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\nThe model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
63,
111,
335,
134,
25,
95,
48,
3,
222,
110,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 20 (uncased)\n\nSeed 20 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.0680423229932785,
0.03098820149898529,
-0.0020884661935269833,
0.09044454246759415,
0.07481909543275833,
0.026273569092154503,
0.1626945436000824,
0.0302375890314579,
-0.031961601227521896,
0.025166410952806473,
0.10519646108150482,
0.03648071736097336,
0.0345778688788414,
0.03883901238441467,
0.06672503054141998,
-0.26285430788993835,
0.0752609595656395,
-0.05719858407974243,
0.04139728844165802,
0.06164590269327164,
0.1039448231458664,
-0.0732349082827568,
0.07835818827152252,
0.04493081569671631,
-0.0802694633603096,
-0.02561354637145996,
-0.0053214505314826965,
-0.035677291452884674,
0.07423275709152222,
0.09389564394950867,
0.06271635740995407,
-0.009024139493703842,
0.06254575401544571,
-0.08620814979076385,
0.018847454339265823,
0.02538071945309639,
-0.01079077273607254,
0.03904950991272926,
0.0273538026958704,
-0.007531686220318079,
0.12213487923145294,
0.019622039049863815,
0.08775326609611511,
0.038871362805366516,
-0.08728630840778351,
-0.10136042535305023,
-0.07123906165361404,
0.09381669759750366,
0.031018536537885666,
0.04366771876811981,
-0.00831843912601471,
0.07393959164619446,
-0.008258283138275146,
0.060508254915475845,
0.08718220889568329,
-0.24031475186347961,
-0.023680763319134712,
0.05033162981271744,
0.05055130273103714,
0.04309931397438049,
0.013594160787761211,
0.02843979187309742,
0.006252482533454895,
0.047678396105766296,
0.0021254962775856256,
-0.029762018471956253,
0.14078281819820404,
-0.053211759775877,
-0.13926935195922852,
-0.029848266392946243,
0.1618192195892334,
0.026793431490659714,
-0.11132427304983139,
-0.11678989231586456,
-0.002163711003959179,
0.1701088547706604,
-0.0047754948027431965,
-0.010611554607748985,
-0.008538967929780483,
-0.0059662675485014915,
0.02303813211619854,
-0.1175210177898407,
-0.08285830169916153,
-0.02306196093559265,
-0.06321249157190323,
0.14901451766490936,
0.0478089302778244,
0.06829722970724106,
-0.0629933699965477,
0.04040007293224335,
-0.15553590655326843,
-0.03558202087879181,
-0.044741563498973846,
-0.09821899235248566,
0.014672432094812393,
0.025763843208551407,
-0.045444414019584656,
-0.12473136186599731,
-0.03979849815368652,
0.0771266520023346,
0.03836963325738907,
0.03832250460982323,
-0.008159316144883633,
0.029441336169838905,
0.10692721605300903,
0.1037134900689125,
-0.0597061887383461,
0.07005612552165985,
0.016353772953152657,
-0.022785775363445282,
0.04195701330900192,
-0.05681059509515762,
-0.12210521101951599,
0.07292765378952026,
-0.0344296358525753,
0.01589355058968067,
0.020702768117189407,
0.04181097447872162,
-0.009710435755550861,
-0.07591675966978073,
0.1380486637353897,
-0.09631313383579254,
0.00042381021194159985,
-0.004880266264081001,
0.016293730586767197,
0.08045994490385056,
0.02416706085205078,
0.0022447346709668636,
-0.05765224248170853,
-0.029186636209487915,
-0.06598964333534241,
-0.030554287135601044,
-0.05856087803840637,
-0.135441392660141,
0.0030821943655610085,
-0.022282760590314865,
-0.013976422138512135,
-0.09983893483877182,
-0.1797604262828827,
-0.015564711764454842,
0.06908045709133148,
-0.012279174290597439,
0.013223232701420784,
-0.002631506649777293,
0.011258125305175781,
-0.004593394696712494,
0.03443228453397751,
-0.036022841930389404,
0.00989806279540062,
-0.012333312071859837,
-0.06712395697832108,
0.040354691445827484,
-0.11861584335565567,
0.04194054752588272,
-0.051665011793375015,
0.013092832639813423,
-0.19594186544418335,
0.10533533990383148,
-0.029414065182209015,
-0.042329199612140656,
-0.04600630700588226,
-0.06100248172879219,
0.019744399935007095,
0.047574639320373535,
-0.015616033226251602,
0.10141453891992569,
-0.12775565683841705,
-0.05591364577412605,
0.16658811271190643,
-0.1571594476699829,
0.021024193614721298,
0.10374708473682404,
-0.06942242383956909,
0.0427577868103981,
0.14612428843975067,
0.07898594439029694,
0.06758423894643784,
-0.03944631665945053,
0.017219331115484238,
0.0576971061527729,
-0.04524848610162735,
0.07943924516439438,
0.10681460052728653,
-0.0118932593613863,
-0.12074822932481766,
0.029162321239709854,
-0.06570632755756378,
-0.03634569048881531,
-0.02268816903233528,
-0.023375483229756355,
0.015130365267395973,
-0.04983300343155861,
0.0541796013712883,
-0.00747804157435894,
0.006152534857392311,
-0.027266353368759155,
-0.07468281686306,
0.08101214468479156,
0.0773807168006897,
-0.0860849991440773,
0.014316456392407417,
-0.09299750626087189,
0.028269026428461075,
-0.060211412608623505,
-0.003642769530415535,
-0.14506134390830994,
-0.04137904942035675,
0.030394479632377625,
-0.08005605638027191,
0.09908145666122437,
0.12155678123235703,
0.01089992094784975,
0.11434593796730042,
-0.04334805905818939,
0.027335703372955322,
-0.012408997863531113,
-0.007412365172058344,
-0.047252073884010315,
-0.14300693571567535,
-0.06672251969575882,
-0.06576987355947495,
0.08796071261167526,
-0.06418310850858688,
0.02239679917693138,
-0.07904262840747833,
-0.038334570825099945,
-0.025781378149986267,
-0.045987728983163834,
0.006241900846362114,
0.004368958529084921,
-0.013158213347196579,
-0.031628258526325226,
0.03807297348976135,
0.024054162204265594,
-0.0912574827671051,
0.08255605399608612,
-0.1291327029466629,
-0.05819685012102127,
0.06995362043380737,
-0.0027703028172254562,
-0.04255925491452217,
0.09134157001972198,
0.008697893470525742,
-0.010174352675676346,
-0.055408403277397156,
-0.04772760346531868,
0.2156459391117096,
-0.019970670342445374,
0.08306582272052765,
-0.11146800965070724,
0.00336346379481256,
0.03701817989349365,
-0.05692550539970398,
-0.05842367559671402,
0.0780930370092392,
0.03633078932762146,
-0.2226155549287796,
0.025306247174739838,
0.07524292916059494,
0.0650344118475914,
0.14118915796279907,
0.022897198796272278,
-0.031763479113578796,
-0.06049212068319321,
-0.015376005321741104,
-0.013547329232096672,
0.05752800405025482,
-0.04721629619598389,
-0.0009520950843580067,
0.05095147341489792,
0.050675466656684875,
0.01621917448937893,
-0.06925148516893387,
0.023236289620399475,
0.05097367241978645,
-0.013399302959442139,
-0.07063034176826477,
-0.05133821442723274,
-0.03912428766489029,
0.07414562255144119,
0.041248682886362076,
0.04721396416425705,
0.05300379917025566,
-0.022034836933016777,
-0.13502898812294006,
0.16725224256515503,
-0.13530448079109192,
-0.22521615028381348,
-0.12709550559520721,
-0.08241542428731918,
-0.07649625837802887,
0.039127398282289505,
0.037543490529060364,
-0.03934002295136452,
-0.05245393142104149,
-0.1021842211484909,
0.06629697978496552,
-0.11611119657754898,
-0.05867327004671097,
0.01484525203704834,
-0.054334983229637146,
-0.009195206686854362,
-0.12821246683597565,
-0.010237202048301697,
-0.029273487627506256,
-0.07830114662647247,
0.0029627811163663864,
-0.04448862746357918,
0.007997151464223862,
0.1340111345052719,
0.009212585166096687,
-0.008974267169833183,
-0.013165762647986412,
0.19619092345237732,
0.027371525764465332,
0.04237814620137215,
0.12880854308605194,
-0.06443756073713303,
0.057365283370018005,
0.021740278229117393,
0.038540247827768326,
-0.048576559871435165,
-0.002404209226369858,
-0.024850009009242058,
-0.11480934917926788,
-0.21282236278057098,
-0.06494810432195663,
0.0075903115794062614,
0.009097175672650337,
0.02300962619483471,
0.01566252112388611,
0.026893742382526398,
0.05362619087100029,
-0.03365696594119072,
0.035994067788124084,
0.035400211811065674,
0.05642998218536377,
0.06628626585006714,
-0.06442094594240189,
0.09614072740077972,
-0.0703049749135971,
0.022348158061504364,
0.1077534481883049,
-0.07479644566774368,
0.16047169268131256,
0.04812018945813179,
0.057039931416511536,
0.0936174988746643,
-0.004542000591754913,
0.05785465985536575,
0.1070055216550827,
-0.05766468495130539,
0.017127256840467453,
-0.07612578570842743,
-0.058054253458976746,
-0.04496897757053375,
0.06479531526565552,
0.034040339291095734,
-0.0019942764192819595,
-0.09999791532754898,
0.03212347999215126,
-0.037669599056243896,
0.0766088217496872,
0.06301319599151611,
-0.10800854861736298,
-0.09944944083690643,
0.04493821784853935,
-0.04673455283045769,
-0.08962225914001465,
0.03504355251789093,
0.08419647812843323,
-0.14211440086364746,
0.06278778612613678,
0.016542289406061172,
0.07230041921138763,
-0.09682266414165497,
0.012980503030121326,
-0.07410307228565216,
0.01858995482325554,
0.0038851152639836073,
0.09277872741222382,
-0.13993792235851288,
0.10914817452430725,
0.02806171402335167,
0.0407697968184948,
-0.09364648908376694,
0.01715296506881714,
-0.012163335457444191,
0.07455986738204956,
0.1225372701883316,
0.04052010551095009,
-0.06463374942541122,
-0.01630248874425888,
-0.06494241207838058,
0.03408210724592209,
0.07572738826274872,
-0.03794829547405243,
0.03885654732584953,
0.0022486825473606586,
0.014875411055982113,
-0.0053835297003388405,
0.017815977334976196,
-0.13198046386241913,
-0.14487293362617493,
0.07142065465450287,
-0.06779350340366364,
-0.08507321774959564,
-0.039416030049324036,
-0.06823068112134933,
-0.0871468037366867,
0.15639075636863708,
-0.07764134556055069,
-0.10925048589706421,
-0.10583745688199997,
0.014355726540088654,
0.07200058549642563,
-0.06622396409511566,
0.05038262531161308,
-0.04961772263050079,
0.09703710675239563,
-0.04185862839221954,
-0.11056928336620331,
0.018103165552020073,
-0.08955126255750656,
-0.11154407262802124,
-0.031055957078933716,
0.09459792822599411,
0.14859598875045776,
0.05193080008029938,
0.0217471681535244,
0.018011998385190964,
0.000401495024561882,
-0.12716490030288696,
0.005317036062479019,
0.13934899866580963,
0.015848062932491302,
0.09504445642232895,
-0.06013869494199753,
-0.025331297889351845,
-0.011999350041151047,
0.003742542117834091,
0.1361021101474762,
0.15592986345291138,
-0.05875688046216965,
0.14998480677604675,
0.2259928584098816,
-0.09629406034946442,
-0.19002988934516907,
-0.0716439038515091,
0.002961171790957451,
0.04631628468632698,
0.04617513716220856,
-0.2016759216785431,
0.10379450023174286,
0.05298656225204468,
-0.01153220422565937,
-0.03230005502700806,
-0.1890653371810913,
-0.10048332810401917,
0.10948255658149719,
0.06536184251308441,
0.19489845633506775,
-0.0671427771449089,
-0.04310861602425575,
-0.04367546737194061,
-0.05328048765659332,
0.09796440601348877,
-0.02004469558596611,
0.08208721876144409,
0.017918627709150314,
0.01903444528579712,
-0.0009593302384018898,
-0.009394992142915726,
0.10629038512706757,
0.04559861868619919,
0.02026451751589775,
-0.07428519427776337,
-0.04593927785754204,
0.11097558587789536,
-0.030782314017415047,
0.12212961167097092,
0.029045164585113525,
0.05731455981731415,
-0.07073785364627838,
-0.0593283548951149,
-0.08204364031553268,
0.014057275839149952,
-0.04009154066443443,
-0.05187647417187691,
-0.052587367594242096,
0.03765817731618881,
0.022697480395436287,
0.011929452419281006,
-0.006456494331359863,
-0.05445324257016182,
0.014178752899169922,
0.05921841785311699,
0.15914127230644226,
-0.014070812612771988,
-0.06402048468589783,
-0.07135240733623505,
-0.05705637112259865,
0.047174952924251556,
-0.10614050179719925,
0.02770831249654293,
0.021600300446152687,
0.00037065852666273713,
0.1155540943145752,
0.03239279240369797,
-0.11685192584991455,
0.009958128444850445,
0.00675717368721962,
-0.10217435657978058,
-0.15326209366321564,
-0.016784261912107468,
0.05689644813537598,
-0.05873210355639458,
0.04034724459052086,
0.15643592178821564,
-0.025424938648939133,
-0.03422272205352783,
-0.0558905228972435,
0.03167853504419327,
-0.033207982778549194,
0.037443842738866806,
0.08024008572101593,
0.016940265893936157,
-0.08360600471496582,
0.05351592227816582,
0.04603956639766693,
-0.01746739074587822,
0.06620804965496063,
0.02347285859286785,
-0.07033656537532806,
-0.08224716037511826,
-0.06935509294271469,
0.11492755264043808,
-0.04119393974542618,
-0.06517309695482254,
0.04656379669904709,
-0.10806070268154144,
0.06472135335206985,
0.09934405982494354,
0.03585030138492584,
0.04582074284553528,
-0.0862436443567276,
0.006051449105143547,
-0.03598600625991821,
0.032724712044000626,
-0.045067399740219116,
-0.032023485749959946,
-0.04197203367948532,
0.0342288613319397,
0.06073904037475586,
0.09439890086650848,
-0.03576768934726715,
-0.07748030126094818,
-0.08702664077281952,
-0.014379026368260384,
-0.10228098183870316,
-0.013254992663860321,
-0.07279559969902039,
-0.0011208787327632308,
0.004098374396562576,
-0.026869025081396103,
0.02799275517463684,
0.030949071049690247,
-0.051869429647922516,
-0.008302591741085052,
-0.02762327343225479,
0.05837664008140564,
-0.07381835579872131,
0.015573794022202492,
0.016428571194410324,
-0.020183222368359566,
0.08892405033111572,
0.04818616062402725,
-0.029540421441197395,
0.05512833595275879,
-0.03987295180559158,
0.03308572992682457,
-0.046152643859386444,
0.00816032849252224,
-0.01633879542350769,
-0.11063629388809204,
-0.020552219823002815,
0.00923292525112629,
-0.024175556376576424,
0.014648289419710636,
0.06324911117553711,
-0.050554707646369934,
0.08040130138397217,
0.06113480404019356,
-0.05402690917253494,
-0.05927048251032829,
0.0410042367875576,
0.004043973982334137,
0.016805114224553108,
0.07724498212337494,
0.0043003736063838005,
0.05551103129982948,
-0.08611394464969635,
-0.0006321142427623272,
0.003587881103157997,
-0.019513528794050217,
-0.013217000290751457,
-0.07112114876508713,
-0.0007312493398785591,
0.007404090836644173,
0.18103685975074768,
-0.006946489214897156,
-0.019849184900522232,
0.005552851594984531,
0.07006393373012543,
0.03280922770500183,
0.002487337216734886,
0.08374334871768951,
-0.014839375391602516,
-0.018750205636024475,
-0.019210893660783768,
0.02799351140856743,
-0.011178791522979736,
0.010786972939968109,
0.13059599697589874,
0.046943191438913345,
0.1110624223947525,
0.07196452468633652,
0.06070465222001076,
0.05085521191358566,
-0.10665196180343628,
-0.06550657004117966,
0.037484101951122284,
0.05327063798904419,
-0.03461531549692154,
0.03155039995908737,
0.061974309384822845,
-0.09595891833305359,
0.08360664546489716,
0.046947598457336426,
-0.057485200464725494,
-0.12703034281730652,
-0.2148759365081787,
-0.04136716201901436,
-0.00930454395711422,
-0.01940843276679516,
-0.11020316183567047,
0.027884498238563538,
0.08659563958644867,
0.040968142449855804,
-0.022413281723856926,
0.06660312414169312,
-0.14815932512283325,
-0.037674009799957275,
0.038199227303266525,
-0.011461617425084114,
0.02166624739766121,
0.045050717890262604,
0.018963679671287537,
0.016398025676608086,
0.07310213893651962,
0.050356362015008926,
0.038749776780605316,
0.057092152535915375,
0.028324328362941742,
-0.050487518310546875,
-0.08559443056583405,
-0.04422399401664734,
0.0026795989833772182,
0.059080950915813446,
0.12238915264606476,
0.00912899523973465,
-0.07073001563549042,
-0.0002307593822479248,
0.05481480062007904,
-0.01904429867863655,
-0.08131954073905945,
-0.11269241571426392,
0.21940863132476807,
-0.022797763347625732,
0.01130581647157669,
-0.0020828871056437492,
-0.03680143132805824,
0.022083302959799767,
0.20371876657009125,
0.2608122229576111,
-0.01667208783328533,
-0.015896577388048172,
0.011606075800955296,
-0.00019362824968993664,
0.032720595598220825,
0.11811895668506622,
-0.02660776861011982,
0.22806930541992188,
-0.04537101089954376,
0.06831159442663193,
-0.05531115084886551,
-0.01689353585243225,
-0.07120884209871292,
0.06230518966913223,
0.005190244410187006,
-0.017628058791160583,
-0.009973177686333656,
0.07375993579626083,
-0.03271811455488205,
-0.12992870807647705,
-0.03211069107055664,
0.0026047120336443186,
-0.058428313583135605,
0.011292733252048492,
-0.000306546688079834,
0.03566872328519821,
0.07814326137304306,
-0.015110454522073269,
-0.0032105278223752975,
0.12375131249427795,
0.01230374351143837,
-0.11210104078054428,
-0.040644027292728424,
0.1307317316532135,
0.014359704218804836,
0.13049642741680145,
0.04764852672815323,
0.08815912157297134,
0.07264017313718796,
0.014678537845611572,
-0.06191444396972656,
0.03608439490199089,
-0.025622297078371048,
0.02145111933350563,
0.00028993841260671616,
0.1092844158411026,
-0.01016467809677124,
0.07217393070459366,
0.01774686388671398,
-0.08488868176937103,
0.0568709559738636,
0.006148945540189743,
-0.07612767815589905,
-0.03747569024562836,
0.1090584546327591,
-0.11058580875396729,
0.1452360600233078,
0.1435917764902115,
-0.004002260509878397,
-0.0702417716383934,
-0.014459140598773956,
0.024990633130073547,
-0.005434006918221712,
0.047869760543107986,
-0.028345748782157898,
-0.13032028079032898,
0.0002001700922846794,
-0.09228383004665375,
0.04580443352460861,
-0.2424071729183197,
-0.06682457774877548,
0.01514950580894947,
-0.0032136961817741394,
-0.025941617786884308,
0.026556160300970078,
0.0646456778049469,
-0.005059512797743082,
-0.03359995782375336,
0.03282652422785759,
-0.027126513421535492,
0.03657352179288864,
-0.07416079938411713,
-0.06560555100440979
] |
null | null | transformers | # MultiBERTs Seed 21 (uncased)
Seed 21 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they
were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, they
were pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
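To make the MLM objective concrete, the masked-language-modeling head can be queried through the `fill-mask` pipeline. The snippet below is a minimal sketch, assuming the checkpoint name used elsewhere in this card and that the saved weights include the pretraining (MLM) head:
```python
from transformers import pipeline

# Minimal sketch: ask the MLM head to fill in the masked position.
unmasker = pipeline('fill-mask', model='multiberts-seed-21')
unmasker("Paris is the [MASK] of France.")
```
The pipeline returns the top candidate tokens for the `[MASK]` position together with their scores.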
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
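As an illustration of the fine-tuning use case mentioned above, the sketch below attaches a freshly initialized sequence-classification head on top of this checkpoint. The `num_labels` value and the example sentence are assumptions made for the example; the classification head is randomly initialized and still has to be trained on labeled data:
```python
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-21')
# The classification head on top of the pretrained encoder is randomly initialized.
model = BertForSequenceClassification.from_pretrained('multiberts-seed-21', num_labels=2)

inputs = tokenizer("Replace me by any text you'd like.", return_tensors='pt')
logits = model(**inputs).logits  # shape: (1, num_labels)
```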
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-21')
model = BertModel.from_pretrained("multiberts-seed-21")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
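Continuing from the snippet above (with a recent version of `transformers`, where the model returns a structured output object), the token-level features are available in `last_hidden_state` and a pooled sentence-level vector in `pooler_output`:
```python
print(output.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
print(output.pooler_output.shape)      # (batch_size, hidden_size)
```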
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a short sketch implementing these rules is shown after the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
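The sampling rules can be summarized in a few lines of Python. This is an illustrative sketch of the 80/10/10 logic only; it ignores special tokens, whole-word handling and the construction of prediction targets, and it is not the original preprocessing code:
```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15):
    """Apply the masking rules described above to a list of WordPiece tokens."""
    masked = []
    for token in tokens:
        if random.random() < mask_prob:      # 15% of the tokens are selected
            r = random.random()
            if r < 0.8:                      # 80% of those become [MASK]
                masked.append('[MASK]')
            elif r < 0.9:                    # 10% become a random vocabulary token
                masked.append(random.choice(vocab))
            else:                            # 10% are left unchanged
                masked.append(token)
        else:
            masked.append(token)
    return masked
```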
### Pretraining
The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
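A rough sketch of this optimization setup, written with `torch` and `transformers` utilities, is shown below. `AdamW` stands in for "Adam with weight decay" and the scheduler helper is an assumption; the original pretraining code is not reproduced here:
```python
from torch.optim import AdamW
from transformers import BertModel, get_linear_schedule_with_warmup

model = BertModel.from_pretrained('multiberts-seed-21')

# Learning rate 1e-4, betas (0.9, 0.999) and weight decay 0.01, as described above.
optimizer = AdamW(model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01)

# 10,000 warmup steps followed by linear decay over the two million training steps.
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=2_000_000
)
```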
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-21 | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 21 (uncased)
Seed 21 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they
were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, they
were pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 21 (uncased)\n\nSeed 21 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\nThe model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 21 (uncased)\n\nSeed 21 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\nThe model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
63,
111,
335,
134,
25,
95,
48,
3,
222,
110,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 21 (uncased)\n\nSeed 21 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.06720063090324402,
0.024497758597135544,
-0.0020492803305387497,
0.09174629300832748,
0.07455983757972717,
0.0272976104170084,
0.16355764865875244,
0.03138056397438049,
-0.03164476156234741,
0.02697417140007019,
0.10765793919563293,
0.034209154546260834,
0.031506650149822235,
0.03723369911313057,
0.06752429157495499,
-0.2612994313240051,
0.07446703314781189,
-0.05766035616397858,
0.038833748549222946,
0.061950232833623886,
0.10250037908554077,
-0.07017409801483154,
0.07654736191034317,
0.04478076100349426,
-0.0778157189488411,
-0.028296202421188354,
-0.007018543779850006,
-0.032193247228860855,
0.07563766092061996,
0.09479329735040665,
0.061084941029548645,
-0.012749643065035343,
0.06547322124242783,
-0.08946286141872406,
0.018597496673464775,
0.0233464352786541,
-0.011248137801885605,
0.03989483416080475,
0.026658665388822556,
-0.008276166394352913,
0.11417132616043091,
0.02053900435566902,
0.08375470340251923,
0.04153793677687645,
-0.09026560932397842,
-0.10608827322721481,
-0.07263687252998352,
0.09830992668867111,
0.029266241937875748,
0.04573399946093559,
-0.007170930504798889,
0.07391692698001862,
-0.0060944934375584126,
0.05809872969985008,
0.0913831889629364,
-0.24032488465309143,
-0.023379800841212273,
0.046269774436950684,
0.04972212016582489,
0.04219722002744675,
0.012644646689295769,
0.02725372463464737,
0.008548110723495483,
0.04932216554880142,
0.0026035094633698463,
-0.030861463397741318,
0.14134109020233154,
-0.05433608591556549,
-0.13760530948638916,
-0.0307875107973814,
0.1571846753358841,
0.029398484155535698,
-0.10978422313928604,
-0.11492137610912323,
-0.005258060060441494,
0.16293145716190338,
-0.0019947662949562073,
-0.011993800289928913,
-0.00920034572482109,
-0.004314684309065342,
0.02713587135076523,
-0.12006422877311707,
-0.08361291885375977,
-0.02577964775264263,
-0.06339090317487717,
0.16138070821762085,
0.04656060039997101,
0.07021033763885498,
-0.06010587513446808,
0.039572540670633316,
-0.14699706435203552,
-0.037236105650663376,
-0.04555386304855347,
-0.0962233617901802,
0.01588670164346695,
0.028190402314066887,
-0.04364340007305145,
-0.1189354658126831,
-0.04039046913385391,
0.07846364378929138,
0.035115890204906464,
0.038172923028469086,
-0.004856353625655174,
0.030507629737257957,
0.10817433148622513,
0.10648341476917267,
-0.06103292107582092,
0.08056800812482834,
0.015145245008170605,
-0.022117305546998978,
0.04361956566572189,
-0.05447404459118843,
-0.12026098370552063,
0.07398483157157898,
-0.029929067939519882,
0.01676887273788452,
0.017632165923714638,
0.03900286927819252,
-0.011617066338658333,
-0.07723418623209,
0.1410112977027893,
-0.09469398856163025,
-0.002419427502900362,
-0.0035423829685896635,
0.01530684158205986,
0.07765565067529678,
0.021946873515844345,
-0.0008260636241175234,
-0.05625247210264206,
-0.026461344212293625,
-0.06712725013494492,
-0.029092997312545776,
-0.05666780471801758,
-0.13178832828998566,
0.002916131168603897,
-0.024403726682066917,
-0.01481691189110279,
-0.10076282173395157,
-0.177381694316864,
-0.017090873792767525,
0.07094669342041016,
-0.01414940319955349,
0.017057277262210846,
-0.000057054334320127964,
0.012392466887831688,
-0.004111278802156448,
0.03406442701816559,
-0.03802698850631714,
0.010297026485204697,
-0.01093521248549223,
-0.06551913172006607,
0.039413295686244965,
-0.12155269831418991,
0.04226536303758621,
-0.05635518580675125,
0.014022765681147575,
-0.18996644020080566,
0.10421507060527802,
-0.029900923371315002,
-0.04154512286186218,
-0.046709731221199036,
-0.06081896275281906,
0.01695704087615013,
0.046189360320568085,
-0.015248360112309456,
0.09852264821529388,
-0.12054699659347534,
-0.053769681602716446,
0.16208738088607788,
-0.15770119428634644,
0.019141998142004013,
0.10370759665966034,
-0.06657334417104721,
0.042193133383989334,
0.14239820837974548,
0.08321334421634674,
0.06751371175050735,
-0.037927404046058655,
0.017519060522317886,
0.058510370552539825,
-0.04720403254032135,
0.07926834374666214,
0.10930964350700378,
-0.01101627480238676,
-0.11561648547649384,
0.028203826397657394,
-0.06370608508586884,
-0.037121161818504333,
-0.023138055577874184,
-0.022824862971901894,
0.015306513756513596,
-0.04982432350516319,
0.050817593932151794,
-0.006077871657907963,
0.005546923726797104,
-0.026844963431358337,
-0.07607876509428024,
0.06958946585655212,
0.07704333961009979,
-0.08597680181264877,
0.014783427119255066,
-0.09092743694782257,
0.029482856392860413,
-0.058862097561359406,
-0.005195309408009052,
-0.14333781599998474,
-0.04569561034440994,
0.030576735734939575,
-0.07432623207569122,
0.09981589019298553,
0.12248803675174713,
0.008882156573235989,
0.1118704080581665,
-0.043842628598213196,
0.025483228266239166,
-0.01246877945959568,
-0.010813003405928612,
-0.045449405908584595,
-0.1406053602695465,
-0.06911276280879974,
-0.06530749052762985,
0.08364205062389374,
-0.05822629854083061,
0.02015857771039009,
-0.07970571517944336,
-0.04144975543022156,
-0.02751954272389412,
-0.04604979604482651,
0.007321434561163187,
0.004240656737238169,
-0.011629548855125904,
-0.03297652676701546,
0.0401277095079422,
0.025918716564774513,
-0.08875301480293274,
0.08900686353445053,
-0.13011226058006287,
-0.057300396263599396,
0.06571798771619797,
-0.002011233940720558,
-0.04301968961954117,
0.09190864115953445,
0.007603577803820372,
-0.010114251635968685,
-0.05611361563205719,
-0.04569607600569725,
0.22213448584079742,
-0.019570719450712204,
0.08285649120807648,
-0.11122074723243713,
0.0037410608492791653,
0.036411695182323456,
-0.06062418594956398,
-0.058476682752370834,
0.07786767184734344,
0.030588429421186447,
-0.22950240969657898,
0.026687290519475937,
0.07070216536521912,
0.06599206477403641,
0.13986007869243622,
0.023072930052876472,
-0.03125528246164322,
-0.06227854639291763,
-0.013502322137355804,
-0.013479981571435928,
0.06111029535531998,
-0.03714602440595627,
0.00009014403622131795,
0.04879581183195114,
0.053886547684669495,
0.017159001901745796,
-0.07093553990125656,
0.021809034049510956,
0.05089128762483597,
-0.013338982127606869,
-0.0636095330119133,
-0.05179315060377121,
-0.040342673659324646,
0.07241733372211456,
0.04254534840583801,
0.04855172336101532,
0.05227503180503845,
-0.021921545267105103,
-0.13666383922100067,
0.1675264537334442,
-0.13235625624656677,
-0.22396698594093323,
-0.13180866837501526,
-0.08094333857297897,
-0.07269962877035141,
0.038750059902668,
0.03913046419620514,
-0.03749263286590576,
-0.05039899796247482,
-0.09959375858306885,
0.06991058588027954,
-0.1176963672041893,
-0.06304361671209335,
0.018007438629865646,
-0.05464549362659454,
-0.009234583005309105,
-0.12911629676818848,
-0.012059688568115234,
-0.030231699347496033,
-0.07713019847869873,
0.004416954703629017,
-0.04544853791594505,
0.007323582656681538,
0.13126307725906372,
0.00819840282201767,
-0.009361295029520988,
-0.01380833238363266,
0.19518284499645233,
0.02824946492910385,
0.042514458298683167,
0.12930084764957428,
-0.06194110959768295,
0.05907392501831055,
0.019648194313049316,
0.039020851254463196,
-0.049334485083818436,
-0.0015171114355325699,
-0.0229769479483366,
-0.1146317720413208,
-0.21552355587482452,
-0.06773538142442703,
0.010386083275079727,
0.006549030542373657,
0.024202724918723106,
0.017624713480472565,
0.03040521964430809,
0.05539766699075699,
-0.03637896850705147,
0.033273160457611084,
0.028973184525966644,
0.057185232639312744,
0.06883832812309265,
-0.060530129820108414,
0.09467959403991699,
-0.0694662407040596,
0.024297570809721947,
0.10745425522327423,
-0.07111035287380219,
0.16476407647132874,
0.04722315073013306,
0.06370405852794647,
0.09233343601226807,
-0.00748036615550518,
0.05699163302779198,
0.10475815832614899,
-0.05655825510621071,
0.017638958990573883,
-0.0763041079044342,
-0.05669237673282623,
-0.048174016177654266,
0.062417954206466675,
0.029465891420841217,
0.0015998370945453644,
-0.0981777012348175,
0.03519564867019653,
-0.0385095588862896,
0.08181698620319366,
0.06621749699115753,
-0.10710577666759491,
-0.10035623610019684,
0.042806029319763184,
-0.04695463180541992,
-0.08756136894226074,
0.03569475933909416,
0.08250148594379425,
-0.14204061031341553,
0.058070141822099686,
0.016979578882455826,
0.07332286238670349,
-0.09926697611808777,
0.012279878370463848,
-0.07017278671264648,
0.015085971914231777,
0.005299828015267849,
0.0945863202214241,
-0.13690374791622162,
0.11702023446559906,
0.0283515527844429,
0.04021564498543739,
-0.09174937009811401,
0.01654120348393917,
-0.01235438883304596,
0.07071375101804733,
0.12281693518161774,
0.0413326621055603,
-0.05824257433414459,
-0.015850871801376343,
-0.06603868305683136,
0.034507740288972855,
0.0775233805179596,
-0.037084877490997314,
0.03808156028389931,
-0.0005189105868339539,
0.01691867597401142,
-0.006850550416857004,
0.016960740089416504,
-0.13180656731128693,
-0.14119595289230347,
0.06953023374080658,
-0.0670759528875351,
-0.07992430031299591,
-0.03766822814941406,
-0.06733290106058121,
-0.08677298575639725,
0.15225891768932343,
-0.0685647502541542,
-0.10816331207752228,
-0.10567424446344376,
0.011634371243417263,
0.07132839411497116,
-0.06785385310649872,
0.04676532372832298,
-0.05219954252243042,
0.09115131199359894,
-0.041682545095682144,
-0.111214280128479,
0.01889873296022415,
-0.08916425704956055,
-0.11174436658620834,
-0.031410686671733856,
0.09642533957958221,
0.14961698651313782,
0.05228526145219803,
0.02113923989236355,
0.018018193542957306,
-0.0004721321165561676,
-0.12939408421516418,
0.008159574121236801,
0.130253404378891,
0.01641286537051201,
0.09104293584823608,
-0.06268982589244843,
-0.01839861273765564,
-0.015338890254497528,
0.004356885328888893,
0.139443039894104,
0.15357106924057007,
-0.05999869108200073,
0.14610043168067932,
0.22894912958145142,
-0.09593765437602997,
-0.19105860590934753,
-0.07649331539869308,
0.0034487592056393623,
0.047140080481767654,
0.04211721196770668,
-0.20351791381835938,
0.10178549587726593,
0.06004748120903969,
-0.012969927862286568,
-0.030143262818455696,
-0.18845966458320618,
-0.0984359085559845,
0.10495573282241821,
0.06402276456356049,
0.19578039646148682,
-0.0685899406671524,
-0.04228533059358597,
-0.04122232645750046,
-0.058307453989982605,
0.09280799329280853,
-0.005595233291387558,
0.08555031567811966,
0.014164471998810768,
0.015926267951726913,
-0.0033614877611398697,
-0.008897790685296059,
0.10746386647224426,
0.04362001270055771,
0.02115994319319725,
-0.07323502004146576,
-0.045778315514326096,
0.11930941045284271,
-0.03175909444689751,
0.12364521622657776,
0.025926679372787476,
0.05898971110582352,
-0.07204581797122955,
-0.05824578180909157,
-0.08258868753910065,
0.011127373203635216,
-0.039337776601314545,
-0.05292492359876633,
-0.053056713193655014,
0.03803182393312454,
0.022314012050628662,
0.010510732419788837,
-0.004252670332789421,
-0.05389814078807831,
0.014573483727872372,
0.055515583604574203,
0.15586480498313904,
-0.01832672208547592,
-0.06778132915496826,
-0.07304343581199646,
-0.05715247616171837,
0.0479712076485157,
-0.10675857961177826,
0.028047475963830948,
0.023602809756994247,
0.001411670702509582,
0.11496074497699738,
0.03373841941356659,
-0.1147594302892685,
0.012927267700433731,
0.005446545779705048,
-0.09877286106348038,
-0.15794780850410461,
-0.018333014100790024,
0.05891606956720352,
-0.05781527981162071,
0.043046288192272186,
0.15475308895111084,
-0.026097819209098816,
-0.03629884496331215,
-0.055351026356220245,
0.02896524779498577,
-0.035766154527664185,
0.03618848696351051,
0.08012054860591888,
0.018862195312976837,
-0.08185835182666779,
0.05589868500828743,
0.045310258865356445,
-0.010019045323133469,
0.06600026786327362,
0.028594445437192917,
-0.06741362065076828,
-0.08353346586227417,
-0.06598793715238571,
0.10860634595155716,
-0.03776835650205612,
-0.06400297582149506,
0.04614836722612381,
-0.10600288212299347,
0.06326757371425629,
0.10730987787246704,
0.03754992038011551,
0.04740245267748833,
-0.08666063100099564,
0.005307914689183235,
-0.03781317174434662,
0.03093971312046051,
-0.04844602197408676,
-0.030413497239351273,
-0.04587085545063019,
0.028141548857092857,
0.059972282499074936,
0.09498117864131927,
-0.03635643795132637,
-0.08099661767482758,
-0.0868568941950798,
-0.014520212076604366,
-0.10195977985858917,
-0.012732351198792458,
-0.07371184229850769,
-0.0017597622936591506,
0.0028650821186602116,
-0.024887949228286743,
0.02888469398021698,
0.03386230766773224,
-0.04796810448169708,
-0.010128962807357311,
-0.02643701434135437,
0.05572634935379028,
-0.07613326609134674,
0.015150300227105618,
0.013957049697637558,
-0.017447859048843384,
0.0889802873134613,
0.04843803495168686,
-0.028511689975857735,
0.05218273028731346,
-0.03942706435918808,
0.03417401388287544,
-0.04348239675164223,
0.006206901278346777,
-0.019780101254582405,
-0.1165182888507843,
-0.022039219737052917,
0.007465582340955734,
-0.022957727313041687,
0.015242361463606358,
0.06525322794914246,
-0.05099967122077942,
0.08236309140920639,
0.057335611432790756,
-0.05462293699383736,
-0.058098867535591125,
0.03904665634036064,
0.004867427051067352,
0.016870997846126556,
0.07734990119934082,
0.005737048573791981,
0.05616725981235504,
-0.08814966678619385,
-0.00006862729787826538,
0.005158550105988979,
-0.016753874719142914,
-0.022290000692009926,
-0.0713178813457489,
-0.0046591805294156075,
0.009517505764961243,
0.17607176303863525,
-0.00884066428989172,
-0.026424288749694824,
0.006662523373961449,
0.07115256786346436,
0.02901107259094715,
0.002324149012565613,
0.08103758096694946,
-0.017267592251300812,
-0.0208163782954216,
-0.01660919561982155,
0.0276913084089756,
-0.013814873062074184,
0.012230012565851212,
0.13140279054641724,
0.04979066550731659,
0.11390607804059982,
0.07225821167230606,
0.059309929609298706,
0.049726489931344986,
-0.11083708703517914,
-0.06767337024211884,
0.03848132863640785,
0.04801715910434723,
-0.0321512371301651,
0.030342701822519302,
0.061172716319561005,
-0.09179677069187164,
0.08310535550117493,
0.04569311812520027,
-0.056478723883628845,
-0.12532366812229156,
-0.21110841631889343,
-0.04028351604938507,
-0.010402960702776909,
-0.018700478598475456,
-0.1097700297832489,
0.026203949004411697,
0.09726978838443756,
0.03969467058777809,
-0.022130021825432777,
0.06641285866498947,
-0.14707916975021362,
-0.03830237686634064,
0.03717735782265663,
-0.011488702148199081,
0.024273749440908432,
0.041877686977386475,
0.02279236912727356,
0.015823446214199066,
0.06941051781177521,
0.05195678398013115,
0.04044584929943085,
0.058667492121458054,
0.02725757285952568,
-0.054685745388269424,
-0.08777768164873123,
-0.04480947554111481,
0.0060574510134756565,
0.05712524428963661,
0.1261170506477356,
0.007270932197570801,
-0.0697159543633461,
0.0014037550427019596,
0.05657772719860077,
-0.018432755023241043,
-0.08134819567203522,
-0.11185310781002045,
0.22496497631072998,
-0.025661520659923553,
0.01169057097285986,
-0.0033739162608981133,
-0.034558236598968506,
0.024286089465022087,
0.1969570815563202,
0.2610534429550171,
-0.01744743622839451,
-0.015486178919672966,
0.01261778362095356,
-0.000905429944396019,
0.03349282592535019,
0.11917606741189957,
-0.026655288413167,
0.2256801426410675,
-0.04935508966445923,
0.06264874339103699,
-0.05754378065466881,
-0.016618940979242325,
-0.07106450945138931,
0.057315677404403687,
0.0019990126602351665,
-0.015588467009365559,
-0.013399122282862663,
0.07109137624502182,
-0.03443067520856857,
-0.11553170531988144,
-0.028378065675497055,
0.00008559622801840305,
-0.057689350098371506,
0.01189861074090004,
-0.004322937224060297,
0.03340686485171318,
0.08183549344539642,
-0.012800528667867184,
-0.0029963627457618713,
0.12595641613006592,
0.0121506592258811,
-0.11427071690559387,
-0.04118324816226959,
0.12723150849342346,
0.014520976692438126,
0.1332036703824997,
0.04725653678178787,
0.08727335929870605,
0.07218516618013382,
0.014673734083771706,
-0.06366637349128723,
0.035557400435209274,
-0.026518329977989197,
0.02066745236515999,
-0.0020707212388515472,
0.10335367172956467,
-0.007541977800428867,
0.07696714252233505,
0.0177738256752491,
-0.09094648063182831,
0.05562242865562439,
0.009664781391620636,
-0.07282763719558716,
-0.03635000064969063,
0.10663031041622162,
-0.11007539182901382,
0.14642119407653809,
0.14319506287574768,
-0.0037203379906713963,
-0.07294391095638275,
-0.013431481085717678,
0.022268002852797508,
-0.0012077423743903637,
0.05146009102463722,
-0.0282625462859869,
-0.13241136074066162,
0.00048230821266770363,
-0.0894496738910675,
0.0429651215672493,
-0.24716225266456604,
-0.0686434954404831,
0.014467773959040642,
-0.004238526336848736,
-0.02210186794400215,
0.024423468858003616,
0.061535291373729706,
-0.0038497489877045155,
-0.03215503692626953,
0.028986463323235512,
-0.026181617751717567,
0.03564445674419403,
-0.07331645488739014,
-0.06564012169837952
] |
null | null | transformers | # MultiBERTs Seed 22 (uncased)
Seed 22 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
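As a rough sketch of that last point, the snippet below extracts frozen [CLS] features from this checkpoint and puts a small linear head on top. The two-class head, the example sentences and the (omitted) training loop are hypothetical; the checkpoint identifier is the short name used elsewhere in this card.
```python
import torch
from transformers import BertTokenizer, BertModel

# Checkpoint name as used in this card; the encoder is kept frozen here.
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-22')
encoder = BertModel.from_pretrained('multiberts-seed-22')
encoder.eval()

# Hypothetical two-class linear head on top of the 768-dim [CLS] feature.
classifier = torch.nn.Linear(encoder.config.hidden_size, 2)

sentences = ["Replace me by any text you'd like.", "And me by another one."]
batch = tokenizer(sentences, padding=True, return_tensors='pt')

with torch.no_grad():
    features = encoder(**batch).last_hidden_state[:, 0]  # [CLS] vector per sentence

logits = classifier(features)  # shape (2, 2); the head would be trained on labeled data
```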
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-22')
model = BertModel.from_pretrained("multiberts-seed-22")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (an illustrative sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
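The sketch below applies that 80/10/10 rule to a plain list of token ids. It is a toy illustration, not the original pretraining code: the function name, the `-100` ignore label and the uniform random replacement are conventions assumed here.
```python
import random

def mask_tokens(token_ids, mask_id, vocab_size, special_ids, mlm_prob=0.15):
    """Toy version of the masking rule described above."""
    inputs, labels = list(token_ids), []
    for i, tok in enumerate(token_ids):
        if tok in special_ids or random.random() >= mlm_prob:
            labels.append(-100)                 # not selected: nothing to predict here
            continue
        labels.append(tok)                      # selected: the model must recover this id
        roll = random.random()
        if roll < 0.8:
            inputs[i] = mask_id                 # 80%: replace with [MASK]
        elif roll < 0.9:
            inputs[i] = random.randrange(vocab_size)  # 10%: replace with a random token
        # remaining 10%: leave the token unchanged
    return inputs, labels
```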
### Pretraining
The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
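For readers who want to map these hyperparameters onto a modern PyTorch setup, here is a hedged sketch. It uses `torch.optim.AdamW` as a stand-in for Adam with weight decay and a freshly initialized `BertForPreTraining` as a placeholder model; the original runs used Google's TensorFlow BERT codebase, so this is not the actual training script.
```python
import torch
from transformers import BertConfig, BertForPreTraining, get_linear_schedule_with_warmup

model = BertForPreTraining(BertConfig())  # placeholder, randomly initialized

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1e-4,                 # learning rate from the card
    betas=(0.9, 0.999),      # beta_1 and beta_2 from the card
    weight_decay=0.01,       # weight decay from the card
)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10_000,       # linear warmup for the first 10,000 steps
    num_training_steps=2_000_000,  # two million steps in total
)
```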
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-22 | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 22 (uncased)
Seed 22 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 22 (uncased)\n\nSeed 22 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\nThe model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 22 (uncased)\n\nSeed 22 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\nThe model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
63,
111,
335,
134,
25,
95,
48,
3,
222,
110,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 22 (uncased)\n\nSeed 22 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.0664047971367836,
0.033232733607292175,
-0.0021739371586591005,
0.09141367673873901,
0.07505691051483154,
0.027763932943344116,
0.1613689363002777,
0.031084489077329636,
-0.032939884811639786,
0.024523502215743065,
0.10349321365356445,
0.03544238954782486,
0.033897045999765396,
0.03137186914682388,
0.0694526731967926,
-0.26155853271484375,
0.07531982660293579,
-0.05838944762945175,
0.0356101468205452,
0.060808777809143066,
0.10308755934238434,
-0.07050076127052307,
0.07626109570264816,
0.04476694390177727,
-0.07890761643648148,
-0.02700263075530529,
-0.005337076261639595,
-0.03268799930810928,
0.07550487667322159,
0.09716527163982391,
0.06260469555854797,
-0.011345804668962955,
0.06353242695331573,
-0.0896635353565216,
0.018561266362667084,
0.0237722210586071,
-0.010354673489928246,
0.03758249431848526,
0.029693976044654846,
-0.004855324514210224,
0.11824876070022583,
0.02385009080171585,
0.08579378575086594,
0.039104219526052475,
-0.08626014739274979,
-0.09972366690635681,
-0.07031267881393433,
0.09719493240118027,
0.030307799577713013,
0.04321592301130295,
-0.007638080976903439,
0.07204559445381165,
-0.007928176783025265,
0.05900650471448898,
0.08196933567523956,
-0.24109724164009094,
-0.02344576083123684,
0.04442086070775986,
0.048126399517059326,
0.0394299253821373,
0.014781160280108452,
0.02762109786272049,
0.005618460476398468,
0.0496794730424881,
0.0009928066283464432,
-0.030073344707489014,
0.13994264602661133,
-0.05298752337694168,
-0.13815459609031677,
-0.029580116271972656,
0.15425275266170502,
0.026491601020097733,
-0.11083365231752396,
-0.1126459389925003,
-0.001844095066189766,
0.17601698637008667,
-0.0028665317222476006,
-0.00870759878307581,
-0.009653901681303978,
-0.0036747343838214874,
0.028466995805501938,
-0.12021514028310776,
-0.08344131708145142,
-0.024511728435754776,
-0.06512676179409027,
0.1557002067565918,
0.048160746693611145,
0.07046425342559814,
-0.05985575541853905,
0.03828258067369461,
-0.15233832597732544,
-0.03526687249541283,
-0.04509691894054413,
-0.09783728420734406,
0.014372184872627258,
0.028717176988720894,
-0.044406697154045105,
-0.11932998895645142,
-0.03878304362297058,
0.07257921993732452,
0.0399266816675663,
0.037732914090156555,
-0.008239444345235825,
0.030478766188025475,
0.10571594536304474,
0.10494707524776459,
-0.05993292108178139,
0.07737383991479874,
0.015383663587272167,
-0.020198486745357513,
0.03973492980003357,
-0.05541764199733734,
-0.1232854425907135,
0.07341353595256805,
-0.033291205763816833,
0.017023228108882904,
0.022165250033140182,
0.038516927510499954,
-0.010944744572043419,
-0.07694311439990997,
0.1403064876794815,
-0.0956711545586586,
0.0007964277174323797,
-0.003433384234085679,
0.0166546031832695,
0.07905124127864838,
0.022530654445290565,
0.003255867399275303,
-0.05873507261276245,
-0.03026745095849037,
-0.06475500762462616,
-0.02814970165491104,
-0.05756430700421333,
-0.13401459157466888,
0.0036926232278347015,
-0.021654220297932625,
-0.015327299013733864,
-0.10156499594449997,
-0.1774757355451584,
-0.015651760622859,
0.06919845938682556,
-0.013997718691825867,
0.013289344497025013,
0.00026674324180930853,
0.012689104303717613,
-0.004596341401338577,
0.034195754677057266,
-0.041679248213768005,
0.00996948592364788,
-0.0117049440741539,
-0.06762974709272385,
0.03848516196012497,
-0.12038637697696686,
0.0420142225921154,
-0.05222642049193382,
0.013065343722701073,
-0.1938599944114685,
0.1055925115942955,
-0.029395263642072678,
-0.04204200208187103,
-0.04624014347791672,
-0.06189931556582451,
0.020138386636972427,
0.04672100394964218,
-0.01617702655494213,
0.10005009174346924,
-0.12555502355098724,
-0.05415636673569679,
0.16200843453407288,
-0.15855303406715393,
0.02022543177008629,
0.10509245097637177,
-0.06797711551189423,
0.03995668888092041,
0.14431501924991608,
0.07923945784568787,
0.07166280597448349,
-0.03758816793560982,
0.0187036395072937,
0.05977068096399307,
-0.046523161232471466,
0.08107227087020874,
0.11054550111293793,
-0.012553330510854721,
-0.1254594326019287,
0.02831714227795601,
-0.0657324269413948,
-0.03537754714488983,
-0.02246793359518051,
-0.023320088163018227,
0.015170156955718994,
-0.05121306702494621,
0.05615011602640152,
-0.006907222792506218,
0.0049184467643499374,
-0.025968149304389954,
-0.07462012022733688,
0.08508342504501343,
0.07784914970397949,
-0.08569582551717758,
0.014316298067569733,
-0.09320098161697388,
0.029108751565217972,
-0.06125864386558533,
-0.00575330201536417,
-0.14381620287895203,
-0.043663326650857925,
0.030344577506184578,
-0.07702843844890594,
0.09938697516918182,
0.12162701785564423,
0.008760244585573673,
0.1154135912656784,
-0.04428853094577789,
0.02643362432718277,
-0.01436946727335453,
-0.009074117057025433,
-0.04482874646782875,
-0.14193157851696014,
-0.06867875158786774,
-0.06432292610406876,
0.08490746468305588,
-0.061984483152627945,
0.02109556645154953,
-0.08148759603500366,
-0.03939495235681534,
-0.027152109891176224,
-0.044758863747119904,
0.00538767222315073,
0.005917683243751526,
-0.012403584085404873,
-0.029843879863619804,
0.04054379463195801,
0.02535782754421234,
-0.08983632922172546,
0.08869761973619461,
-0.1260274201631546,
-0.05185679718852043,
0.06764356046915054,
-0.0033297184854745865,
-0.03849269449710846,
0.09461529552936554,
0.009015591815114021,
-0.011349804699420929,
-0.05560540035367012,
-0.04820329695940018,
0.21846339106559753,
-0.019761990755796432,
0.08265876770019531,
-0.1134214699268341,
0.00400387030094862,
0.03609449788928032,
-0.05889267474412918,
-0.058640994131565094,
0.07758541405200958,
0.03543839603662491,
-0.22372522950172424,
0.02431708574295044,
0.07220136374235153,
0.06534264236688614,
0.1409093141555786,
0.023836534470319748,
-0.031007811427116394,
-0.061171840876340866,
-0.014380071312189102,
-0.012844743207097054,
0.05740555375814438,
-0.04407893121242523,
0.001340776914730668,
0.04968283325433731,
0.05200102925300598,
0.017797693610191345,
-0.07175494730472565,
0.021923279389739037,
0.052632756531238556,
-0.014627063646912575,
-0.06935684382915497,
-0.053810667246580124,
-0.03745926544070244,
0.07327914237976074,
0.042752668261528015,
0.048775814473629,
0.05269572511315346,
-0.022029466927051544,
-0.13827447593212128,
0.16613316535949707,
-0.13696056604385376,
-0.22995787858963013,
-0.13392215967178345,
-0.0792926624417305,
-0.07560107111930847,
0.03959968686103821,
0.03844212740659714,
-0.038120612502098083,
-0.05003479868173599,
-0.10220520943403244,
0.06924279779195786,
-0.11702287197113037,
-0.059423379600048065,
0.01648002862930298,
-0.05324894189834595,
-0.007199204992502928,
-0.12716326117515564,
-0.009418245404958725,
-0.026511238887906075,
-0.07717373967170715,
0.001845039427280426,
-0.045479800552129745,
0.008026054129004478,
0.13488870859146118,
0.007854699157178402,
-0.01188883651047945,
-0.012975630350410938,
0.19775554537773132,
0.029423490166664124,
0.042371585965156555,
0.12975716590881348,
-0.06667566299438477,
0.059536293148994446,
0.020780222490429878,
0.03980950266122818,
-0.04652025178074837,
-0.001329188235104084,
-0.024032946676015854,
-0.11378568410873413,
-0.21428486704826355,
-0.06708244234323502,
0.009669306688010693,
0.008666777051985264,
0.02099652960896492,
0.017205405980348587,
0.024813847616314888,
0.052425455302000046,
-0.03345352038741112,
0.03480786830186844,
0.03256336599588394,
0.05540790781378746,
0.06487660109996796,
-0.06244850903749466,
0.09575539827346802,
-0.07030853629112244,
0.023796692490577698,
0.10948598384857178,
-0.07459291815757751,
0.15737983584403992,
0.045961931347846985,
0.0521710142493248,
0.0932396799325943,
-0.0024548545479774475,
0.05874096602201462,
0.10431794077157974,
-0.0608651302754879,
0.018499117344617844,
-0.0749005526304245,
-0.05813013017177582,
-0.04769419506192207,
0.06496193259954453,
0.036589596420526505,
-0.0011265072971582413,
-0.09998992085456848,
0.033442020416259766,
-0.03905227407813072,
0.07887859642505646,
0.06618166714906693,
-0.10449720919132233,
-0.10065734386444092,
0.04463556036353111,
-0.04767393320798874,
-0.08926145732402802,
0.0353785902261734,
0.08727297186851501,
-0.14147165417671204,
0.05966782197356224,
0.018064238131046295,
0.07351997494697571,
-0.09663605690002441,
0.012226376682519913,
-0.06860972940921783,
0.013317115604877472,
0.006156406365334988,
0.09466547518968582,
-0.133222758769989,
0.10980745404958725,
0.028308041393756866,
0.03890815004706383,
-0.09065989404916763,
0.0174567811191082,
-0.011647669598460197,
0.07621092349290848,
0.11912526190280914,
0.04250660538673401,
-0.06014738231897354,
-0.02122737653553486,
-0.06544528901576996,
0.03423095494508743,
0.07644299417734146,
-0.037992268800735474,
0.03939706087112427,
0.0006427336484193802,
0.0159035325050354,
-0.00680308835580945,
0.017190203070640564,
-0.13299329578876495,
-0.14286044239997864,
0.07144039869308472,
-0.0699826031923294,
-0.08140937238931656,
-0.039727866649627686,
-0.06938788294792175,
-0.08771183341741562,
0.1537168025970459,
-0.07558176666498184,
-0.1092311218380928,
-0.10435262322425842,
0.007515675388276577,
0.07204042375087738,
-0.06661997735500336,
0.0498165562748909,
-0.05006740987300873,
0.0948689728975296,
-0.03979944437742233,
-0.11078749597072601,
0.016659114509820938,
-0.08856210112571716,
-0.11143627762794495,
-0.030324667692184448,
0.09660853445529938,
0.14859811961650848,
0.053111765533685684,
0.02198036015033722,
0.017369767650961876,
0.002532200887799263,
-0.1285683810710907,
0.005524693056941032,
0.13802006840705872,
0.017848126590251923,
0.09711554646492004,
-0.062411606311798096,
-0.020544353872537613,
-0.013075504451990128,
0.0029451847076416016,
0.13644544780254364,
0.15480035543441772,
-0.0593646839261055,
0.14928801357746124,
0.22698239982128143,
-0.0990617573261261,
-0.19237536191940308,
-0.07460325956344604,
0.004094429314136505,
0.047439299523830414,
0.04474623128771782,
-0.20325589179992676,
0.10315078496932983,
0.055968914180994034,
-0.011898590251803398,
-0.03472615405917168,
-0.18972274661064148,
-0.10043041408061981,
0.10386604070663452,
0.06354718655347824,
0.19259485602378845,
-0.06824254244565964,
-0.04407491534948349,
-0.043342724442481995,
-0.059157922863960266,
0.0945039689540863,
-0.016052450984716415,
0.08175735920667648,
0.01626976579427719,
0.01501547172665596,
-0.001517927274107933,
-0.008362142369151115,
0.11071863770484924,
0.043570347130298615,
0.01845998503267765,
-0.07436177134513855,
-0.043956223875284195,
0.11188623309135437,
-0.03137901425361633,
0.1222999095916748,
0.027478989213705063,
0.05914352834224701,
-0.07664662599563599,
-0.059533920139074326,
-0.08264854550361633,
0.009673694148659706,
-0.04012823849916458,
-0.05033670365810394,
-0.05055924877524376,
0.03737650066614151,
0.022313157096505165,
0.01225797738879919,
-0.00846300832927227,
-0.05482888221740723,
0.013959608972072601,
0.060801584273576736,
0.15788286924362183,
-0.013667430728673935,
-0.0678863525390625,
-0.07233685255050659,
-0.05769221857190132,
0.046980924904346466,
-0.10525406897068024,
0.030968111008405685,
0.023378074169158936,
-0.0008264980278909206,
0.11563198268413544,
0.03235800564289093,
-0.11423058062791824,
0.011803502216935158,
0.006837585940957069,
-0.10002642869949341,
-0.15816251933574677,
-0.018450800329446793,
0.059487633407115936,
-0.058968111872673035,
0.038258183747529984,
0.15777714550495148,
-0.025953177362680435,
-0.03517743945121765,
-0.05535385012626648,
0.03025997243821621,
-0.035407654941082,
0.03736645728349686,
0.08031897246837616,
0.01818760856986046,
-0.08070304989814758,
0.054025061428546906,
0.04502265900373459,
-0.014555517584085464,
0.0660262331366539,
0.02428380399942398,
-0.0689145028591156,
-0.08412504196166992,
-0.06403575837612152,
0.11201778799295425,
-0.03882952034473419,
-0.06719592213630676,
0.04620473459362984,
-0.10792645812034607,
0.06580489873886108,
0.10362798720598221,
0.03737938404083252,
0.04766371101140976,
-0.08581943809986115,
0.006242057308554649,
-0.03397819399833679,
0.03296591714024544,
-0.045821428298950195,
-0.033415377140045166,
-0.0448572114109993,
0.03204462677240372,
0.059852235019207,
0.09732741862535477,
-0.03644106164574623,
-0.07871682196855545,
-0.08710496127605438,
-0.014322402887046337,
-0.11111562699079514,
-0.009573248215019703,
-0.07214805483818054,
0.0002231365069746971,
0.0044733816757798195,
-0.02728855237364769,
0.029978733509778976,
0.03156821429729462,
-0.05009254068136215,
-0.00973835214972496,
-0.029118217527866364,
0.05735010653734207,
-0.07399515062570572,
0.01488085463643074,
0.017096810042858124,
-0.018156878650188446,
0.08980028331279755,
0.04640660434961319,
-0.02933788299560547,
0.054771482944488525,
-0.030034419149160385,
0.033588655292987823,
-0.0455193966627121,
0.008400509133934975,
-0.018843796104192734,
-0.11360448598861694,
-0.020780229941010475,
0.008874865248799324,
-0.023375412449240685,
0.01561290305107832,
0.07019452750682831,
-0.050392016768455505,
0.08155331015586853,
0.060052983462810516,
-0.05548998340964317,
-0.057627204805612564,
0.03963513672351837,
0.0035504475235939026,
0.017816849052906036,
0.07760414481163025,
0.004328542854636908,
0.05676252767443657,
-0.08636051416397095,
-0.0008790874853730202,
0.004262122325599194,
-0.015798740088939667,
-0.014433702453970909,
-0.0715160220861435,
-0.003161010332405567,
0.008386713452637196,
0.17679035663604736,
-0.005421964451670647,
-0.019321877509355545,
0.005791379138827324,
0.06904801726341248,
0.0342678502202034,
0.0038132034242153168,
0.08264462649822235,
-0.016446245834231377,
-0.01951773837208748,
-0.018687263131141663,
0.025164879858493805,
-0.014142438769340515,
0.01109405979514122,
0.13022087514400482,
0.04973229020833969,
0.11965469270944595,
0.07199157774448395,
0.06417703628540039,
0.052475716918706894,
-0.11246269196271896,
-0.07101062685251236,
0.036914508789777756,
0.05102565884590149,
-0.035174138844013214,
0.03317927196621895,
0.0597018264234066,
-0.09411786496639252,
0.08358476310968399,
0.04653555154800415,
-0.0579027384519577,
-0.12750409543514252,
-0.21126559376716614,
-0.04124073684215546,
-0.010071737691760063,
-0.019916478544473648,
-0.10938291251659393,
0.026359163224697113,
0.09235396981239319,
0.03924989327788353,
-0.02236182801425457,
0.06365088373422623,
-0.14880838990211487,
-0.03715401887893677,
0.03767623007297516,
-0.012148256413638592,
0.02452903985977173,
0.04807522892951965,
0.020184669643640518,
0.016667718067765236,
0.07403229922056198,
0.051502954214811325,
0.04023502394556999,
0.056188710033893585,
0.02819237858057022,
-0.050959229469299316,
-0.08686773478984833,
-0.04400504380464554,
0.003356045577675104,
0.058574557304382324,
0.12419719249010086,
0.009857095777988434,
-0.06877603381872177,
0.0006534033454954624,
0.0566251277923584,
-0.01952122524380684,
-0.0826956182718277,
-0.11308564990758896,
0.2223311960697174,
-0.026071783155202866,
0.014623220078647137,
-0.001095278188586235,
-0.03626445680856705,
0.019884981215000153,
0.20317745208740234,
0.2591872811317444,
-0.01976473070681095,
-0.01627218723297119,
0.013700947165489197,
-0.0001995139755308628,
0.03470926359295845,
0.12167933583259583,
-0.026360206305980682,
0.2273063361644745,
-0.04657985270023346,
0.06174689531326294,
-0.05722788721323013,
-0.015835516154766083,
-0.07093346863985062,
0.05779191106557846,
0.0011455916101112962,
-0.016853950917720795,
-0.01252332329750061,
0.0726957768201828,
-0.03421556204557419,
-0.12564343214035034,
-0.030925113707780838,
0.0041411020793020725,
-0.05928769335150719,
0.013219689950346947,
0.0025655673816800117,
0.032157573848962784,
0.07865475118160248,
-0.014985873363912106,
-0.002569582313299179,
0.12870708107948303,
0.011686384677886963,
-0.11293576657772064,
-0.04053153097629547,
0.12993395328521729,
0.016862928867340088,
0.12928995490074158,
0.04804723337292671,
0.08979038894176483,
0.07175661623477936,
0.016579031944274902,
-0.06358552724123001,
0.03348908945918083,
-0.02603202685713768,
0.01918194815516472,
-0.00220292154699564,
0.10751676559448242,
-0.008857944048941135,
0.07521131634712219,
0.019079992547631264,
-0.08681875467300415,
0.053991906344890594,
0.005048241466283798,
-0.07499387860298157,
-0.03762589395046234,
0.10726602375507355,
-0.11314865946769714,
0.14282742142677307,
0.14470067620277405,
-0.0034078466705977917,
-0.07132918387651443,
-0.011680232360959053,
0.02340957336127758,
-0.006385700311511755,
0.05267389118671417,
-0.028693070635199547,
-0.1315242350101471,
-0.00001548090949654579,
-0.09749623388051987,
0.045402415096759796,
-0.24660184979438782,
-0.06984151899814606,
0.01693820208311081,
-0.0024680630303919315,
-0.023889176547527313,
0.02917376160621643,
0.062346719205379486,
-0.004857739899307489,
-0.03388801962137222,
0.02222074195742607,
-0.02715231664478779,
0.0358671173453331,
-0.07238252460956573,
-0.06703099608421326
] |
null | null | transformers | # MultiBERTs Seed 23 (uncased)
Seed 23 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
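As a hedged illustration of the NSP objective just described, the snippet below scores a sentence pair with `BertForNextSentencePrediction`. It assumes the published checkpoint includes the pretraining heads (if not, the head would be freshly initialized and the scores meaningless), and the example sentences are made up.
```python
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-23')
model = BertForNextSentencePrediction.from_pretrained('multiberts-seed-23')

prompt = "The storm knocked out the power."
candidate = "We lit candles and waited."
inputs = tokenizer(prompt, candidate, return_tensors='pt')

with torch.no_grad():
    logits = model(**inputs).logits  # index 0: "B follows A", index 1: "B is random"

print(logits.softmax(dim=-1))
```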
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-23')
model = BertModel.from_pretrained("multiberts-seed-23")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
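Since the raw model can also be used for masked language modeling, here is a hedged sketch of filling in a `[MASK]` token with `BertForMaskedLM`. It likewise assumes the checkpoint ships the masked-LM head, and the example sentence is arbitrary.
```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-23')
mlm_model = BertForMaskedLM.from_pretrained('multiberts-seed-23')

inputs = tokenizer("The capital of France is [MASK].", return_tensors='pt')
with torch.no_grad():
    logits = mlm_model(**inputs).logits

# Locate the [MASK] position and take the highest-scoring replacement for it.
mask_pos = (inputs['input_ids'] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_pos].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```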
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-23 | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 23 (uncased)
Seed 23 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 23 (uncased)\n\nSeed 23 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\nThe model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 23 (uncased)\n\nSeed 23 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\nThe model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
63,
111,
335,
134,
25,
95,
48,
3,
222,
110,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 23 (uncased)\n\nSeed 23 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.06632763147354126,
0.03262614831328392,
-0.002115974435582757,
0.09264026582241058,
0.07548315823078156,
0.027377013117074966,
0.1659753918647766,
0.03247910365462303,
-0.035463374108076096,
0.022773511707782745,
0.10499963164329529,
0.03570770099759102,
0.03442595154047012,
0.029945336282253265,
0.06876830011606216,
-0.2551425099372864,
0.0771440863609314,
-0.057951949536800385,
0.04657287895679474,
0.06343738734722137,
0.1007142961025238,
-0.06999419629573822,
0.07639642059803009,
0.04198921471834183,
-0.07786183804273605,
-0.022432375699281693,
-0.00637461431324482,
-0.03245289996266365,
0.07428303360939026,
0.09581942111253738,
0.0646262913942337,
-0.011048919521272182,
0.06576026976108551,
-0.09099821746349335,
0.01788802444934845,
0.022414177656173706,
-0.012581554241478443,
0.03766564279794693,
0.02757260575890541,
-0.005351581145077944,
0.10956566780805588,
0.018085090443491936,
0.08763773739337921,
0.03841545060276985,
-0.08692283183336258,
-0.09251669049263,
-0.07208344340324402,
0.09865027666091919,
0.03200291842222214,
0.04632004722952843,
-0.007221870124340057,
0.07632353901863098,
-0.00861671194434166,
0.06078968942165375,
0.09172696620225906,
-0.2361719310283661,
-0.024261552840471268,
0.04529843106865883,
0.048942092806100845,
0.04453640431165695,
0.016460854560136795,
0.025372952222824097,
0.005613140761852264,
0.049546629190444946,
0.0015116601716727018,
-0.030233051627874374,
0.14354732632637024,
-0.05210129916667938,
-0.13819468021392822,
-0.031023357063531876,
0.1540255844593048,
0.026691673323512077,
-0.11025728285312653,
-0.11516670882701874,
-0.0025161709636449814,
0.16320893168449402,
-0.0038087302818894386,
-0.007966062985360622,
-0.008357178419828415,
-0.005433019250631332,
0.02544635906815529,
-0.11850614845752716,
-0.0820498988032341,
-0.025305505841970444,
-0.0620245523750782,
0.15471675992012024,
0.04826735705137253,
0.06918314099311829,
-0.05928770825266838,
0.03867948055267334,
-0.1480012834072113,
-0.034318115562200546,
-0.044931285083293915,
-0.09445912390947342,
0.013618648052215576,
0.028626572340726852,
-0.04436957836151123,
-0.12229082733392715,
-0.040227893739938736,
0.0785219669342041,
0.03514266759157181,
0.039220284670591354,
-0.0064148204401135445,
0.030174246057868004,
0.10607855021953583,
0.10669153928756714,
-0.06137813255190849,
0.07626824080944061,
0.014019547961652279,
-0.01864827238023281,
0.04233434796333313,
-0.05632564052939415,
-0.12244896590709686,
0.07460737228393555,
-0.03101225197315216,
0.017767377197742462,
0.019551580771803856,
0.038449566811323166,
-0.010980179533362389,
-0.07669965922832489,
0.14417827129364014,
-0.0971757024526596,
-0.0004682443104684353,
-0.0033336773049086332,
0.01923295482993126,
0.08007662743330002,
0.022829687222838402,
0.0008137099794112146,
-0.06015302613377571,
-0.02889985963702202,
-0.06397704035043716,
-0.029973268508911133,
-0.05752742290496826,
-0.13623124361038208,
0.0026920754462480545,
-0.023485880345106125,
-0.015211459249258041,
-0.09930147230625153,
-0.1799100935459137,
-0.015726285055279732,
0.06966562569141388,
-0.014086175709962845,
0.014897308312356472,
-0.0014305126387625933,
0.011173065751791,
-0.0034224260598421097,
0.03350573405623436,
-0.041721489280462265,
0.010004590265452862,
-0.01193855982273817,
-0.06446591764688492,
0.03667670488357544,
-0.12354504317045212,
0.042503632605075836,
-0.053066056221723557,
0.013404622673988342,
-0.1844378411769867,
0.10527556389570236,
-0.031108051538467407,
-0.043494269251823425,
-0.04378953203558922,
-0.06261998414993286,
0.02409876510500908,
0.04661252349615097,
-0.012988150119781494,
0.09870558232069016,
-0.11943919956684113,
-0.05518103018403053,
0.1577746570110321,
-0.15797126293182373,
0.023628387600183487,
0.10340407490730286,
-0.06908350437879562,
0.035265494138002396,
0.14272230863571167,
0.07878606766462326,
0.07239355146884918,
-0.03345121070742607,
0.018125180155038834,
0.05808534845709801,
-0.04412807151675224,
0.07799488306045532,
0.10934759676456451,
-0.008733455091714859,
-0.1218969076871872,
0.02870262786746025,
-0.06944748014211655,
-0.03497108817100525,
-0.022409023717045784,
-0.023810619488358498,
0.015601430088281631,
-0.05043333023786545,
0.05358096584677696,
-0.006205501966178417,
0.005393057130277157,
-0.027700891718268394,
-0.07491083443164825,
0.08839462697505951,
0.07612695544958115,
-0.0871858298778534,
0.014399652369320393,
-0.09139589965343475,
0.03073982149362564,
-0.06292742490768433,
-0.007782837375998497,
-0.14068201184272766,
-0.04451154172420502,
0.02976946160197258,
-0.07283925265073776,
0.09785313904285431,
0.12018902599811554,
0.008842923678457737,
0.11459290236234665,
-0.04323943704366684,
0.025322729721665382,
-0.017418023198843002,
-0.00887660589069128,
-0.043612364679574966,
-0.14019441604614258,
-0.07044190913438797,
-0.06596527993679047,
0.07643765956163406,
-0.05440225079655647,
0.021368151530623436,
-0.08199037611484528,
-0.04343051835894585,
-0.028707560151815414,
-0.047980695962905884,
0.006029595620930195,
0.0070251589640975,
-0.0126569215208292,
-0.030493352562189102,
0.04144997149705887,
0.024159809574484825,
-0.08736875653266907,
0.08554045855998993,
-0.12754467129707336,
-0.05051037296652794,
0.06806301325559616,
-0.0006175534799695015,
-0.04178005829453468,
0.09448829293251038,
0.008041389286518097,
-0.010749537497758865,
-0.05507584661245346,
-0.0479096919298172,
0.2203054428100586,
-0.018696263432502747,
0.08282612264156342,
-0.11238646507263184,
0.002102478640154004,
0.036505430936813354,
-0.05636131763458252,
-0.05999414622783661,
0.07781956344842911,
0.03916831687092781,
-0.220462828874588,
0.026455841958522797,
0.06987924873828888,
0.06599223613739014,
0.1415799856185913,
0.02461307868361473,
-0.032302822917699814,
-0.060905616730451584,
-0.012934057973325253,
-0.013870740309357643,
0.05987332761287689,
-0.0495908260345459,
-0.002074986230581999,
0.04798934608697891,
0.05090802162885666,
0.017760418355464935,
-0.07249444723129272,
0.021026764065027237,
0.052934274077415466,
-0.01591123454272747,
-0.06764957308769226,
-0.05684121698141098,
-0.03813304007053375,
0.0712108239531517,
0.04366842657327652,
0.04443817213177681,
0.053239792585372925,
-0.022778989747166634,
-0.13834276795387268,
0.16468533873558044,
-0.1339959055185318,
-0.2290397435426712,
-0.1364140510559082,
-0.08423303812742233,
-0.0765799954533577,
0.03770750015974045,
0.038227204233407974,
-0.03855787590146065,
-0.050442174077034,
-0.10373135656118393,
0.06858587265014648,
-0.11818372458219528,
-0.060096219182014465,
0.01890893280506134,
-0.05527718365192413,
-0.008166363462805748,
-0.1282052993774414,
-0.009927378967404366,
-0.026953570544719696,
-0.07941552996635437,
0.0012143179774284363,
-0.04570669308304787,
0.007267116568982601,
0.1314815878868103,
0.0066244397312402725,
-0.010502036660909653,
-0.014246143400669098,
0.19702781736850739,
0.03064252808690071,
0.04092802107334137,
0.1245461106300354,
-0.06628741323947906,
0.05774005502462387,
0.020772255957126617,
0.041017238050699234,
-0.04762382060289383,
-0.0014874832704663277,
-0.021856943145394325,
-0.11482618749141693,
-0.21362292766571045,
-0.06408146768808365,
0.009646633639931679,
0.009180506691336632,
0.021972863003611565,
0.01719023287296295,
0.020532874390482903,
0.054446879774332047,
-0.03218166157603264,
0.03385470062494278,
0.030033957213163376,
0.056960854679346085,
0.06552058458328247,
-0.0629294291138649,
0.09613493084907532,
-0.06908287107944489,
0.02254079282283783,
0.10948611795902252,
-0.07227365672588348,
0.1528366655111313,
0.04680125042796135,
0.05567329004406929,
0.09262624382972717,
-0.0010346192866563797,
0.05642622336745262,
0.10641041398048401,
-0.05900677293539047,
0.018330631777644157,
-0.07538563758134842,
-0.05747361481189728,
-0.0474342405796051,
0.061577729880809784,
0.03269753232598305,
0.0023375097662210464,
-0.10077638924121857,
0.031196538358926773,
-0.03966844826936722,
0.08379194140434265,
0.06500314921140671,
-0.10705582797527313,
-0.09994137287139893,
0.04343922436237335,
-0.05079583451151848,
-0.08666367083787918,
0.03660082817077637,
0.08401823043823242,
-0.1396963894367218,
0.058443356305360794,
0.01687738671898842,
0.07256771624088287,
-0.09909933060407639,
0.011812573298811913,
-0.06688284873962402,
0.013541759923100471,
0.0059551275335252285,
0.09329886734485626,
-0.13424409925937653,
0.11267697066068649,
0.02727881260216236,
0.03665386885404587,
-0.0904698371887207,
0.0183180533349514,
-0.014254560694098473,
0.07134804129600525,
0.12061107158660889,
0.04247158020734787,
-0.05233704671263695,
-0.02154717594385147,
-0.06834009289741516,
0.03590024635195732,
0.0754968672990799,
-0.03564690053462982,
0.04149811342358589,
0.0005969768390059471,
0.0164676234126091,
-0.005731805693358183,
0.021188143640756607,
-0.1301824152469635,
-0.14452257752418518,
0.07149375975131989,
-0.06960441172122955,
-0.08869778364896774,
-0.03927851840853691,
-0.07112886011600494,
-0.09184592217206955,
0.16409584879875183,
-0.07575582712888718,
-0.10971534997224808,
-0.10507646203041077,
0.008177288807928562,
0.07125604897737503,
-0.0647118091583252,
0.04864790290594101,
-0.04975731670856476,
0.09303581714630127,
-0.0411929190158844,
-0.11174638569355011,
0.01713361032307148,
-0.08848019689321518,
-0.11153273284435272,
-0.03021087870001793,
0.09559206664562225,
0.14836294949054718,
0.05253361165523529,
0.019312843680381775,
0.01705571450293064,
0.003508754074573517,
-0.12910196185112,
0.005065491423010826,
0.13702349364757538,
0.009862478822469711,
0.09675145149230957,
-0.05958075821399689,
-0.024229127913713455,
-0.013508656993508339,
0.0046021826565265656,
0.13626393675804138,
0.15597006678581238,
-0.05804046243429184,
0.15103891491889954,
0.2257205843925476,
-0.09749148786067963,
-0.18828776478767395,
-0.07577691972255707,
0.0020251767709851265,
0.048887643963098526,
0.04434651881456375,
-0.20158234238624573,
0.10140025615692139,
0.056243255734443665,
-0.010711047798395157,
-0.041487909853458405,
-0.19288595020771027,
-0.0994492918252945,
0.1047765463590622,
0.07023081183433533,
0.19629520177841187,
-0.06807233393192291,
-0.04333806037902832,
-0.0418909527361393,
-0.05831735581159592,
0.09639108180999756,
-0.013115253299474716,
0.08443216979503632,
0.013737250119447708,
0.01769818365573883,
-0.002259056083858013,
-0.010531693696975708,
0.10712017118930817,
0.04167747497558594,
0.020020201802253723,
-0.07413452863693237,
-0.046080101281404495,
0.11475734412670135,
-0.03143581748008728,
0.12412776052951813,
0.025575872510671616,
0.05963513255119324,
-0.07383592426776886,
-0.060139019042253494,
-0.08216287195682526,
0.007447598502039909,
-0.04045892506837845,
-0.0509796068072319,
-0.051646072417497635,
0.03716631233692169,
0.023287324234843254,
0.010882938280701637,
-0.004682706668972969,
-0.052474360913038254,
0.012760883197188377,
0.0606633760035038,
0.15742690861225128,
-0.014915641397237778,
-0.06561923027038574,
-0.07375730574131012,
-0.05884934961795807,
0.04617514833807945,
-0.09923160821199417,
0.02763047069311142,
0.024473683908581734,
0.0002675808500498533,
0.11551712453365326,
0.03256608545780182,
-0.11474792659282684,
0.011284684762358665,
0.007259538397192955,
-0.10102449357509613,
-0.15232200920581818,
-0.01785876601934433,
0.05505400896072388,
-0.06020950525999069,
0.038507990539073944,
0.1601203978061676,
-0.026238977909088135,
-0.0355086587369442,
-0.05412932485342026,
0.030476339161396027,
-0.03621826320886612,
0.03858624026179314,
0.07880368828773499,
0.017527179792523384,
-0.08185023814439774,
0.05405494570732117,
0.042382799088954926,
-0.009219160303473473,
0.06688030064105988,
0.029350338503718376,
-0.0680665448307991,
-0.08214089274406433,
-0.06446033716201782,
0.11085421591997147,
-0.04391036182641983,
-0.06375089287757874,
0.04069926217198372,
-0.10554718971252441,
0.06753900647163391,
0.10570142418146133,
0.03695714473724365,
0.04711063206195831,
-0.08720038831233978,
0.006098458543419838,
-0.03424486517906189,
0.030967120081186295,
-0.048916563391685486,
-0.030985359102487564,
-0.04439297318458557,
0.030634669587016106,
0.06000945717096329,
0.09359465539455414,
-0.03651260584592819,
-0.07784159481525421,
-0.08965986967086792,
-0.012368543073534966,
-0.10997660458087921,
-0.011296028271317482,
-0.07424402236938477,
0.0005515944212675095,
0.004662677645683289,
-0.024290241301059723,
0.0271739698946476,
0.03289871662855148,
-0.049115799367427826,
-0.010268955491483212,
-0.0289582759141922,
0.05536898225545883,
-0.07566733658313751,
0.017497826367616653,
0.01573505997657776,
-0.018076978623867035,
0.08875571191310883,
0.04421905428171158,
-0.03014569915831089,
0.05383383482694626,
-0.0411766916513443,
0.03360644727945328,
-0.04516834765672684,
0.006513210944831371,
-0.019060727208852768,
-0.11347395926713943,
-0.02043999545276165,
0.011763682588934898,
-0.024382829666137695,
0.017201662063598633,
0.07139497995376587,
-0.05141568183898926,
0.07997160404920578,
0.06099790334701538,
-0.05480251833796501,
-0.05689895153045654,
0.041053518652915955,
0.005653101950883865,
0.016375860199332237,
0.07556534558534622,
0.003119291737675667,
0.05682181566953659,
-0.0866089016199112,
-0.0009380890987813473,
0.005267948843538761,
-0.01566983014345169,
-0.020237533375620842,
-0.07111621648073196,
-0.0036239149048924446,
0.00931152980774641,
0.18446150422096252,
-0.005631641484797001,
-0.021266411989927292,
0.006136310286819935,
0.06764824688434601,
0.038130514323711395,
0.0023139426484704018,
0.08454747498035431,
-0.017623351886868477,
-0.02160019800066948,
-0.01850903034210205,
0.024083374068140984,
-0.014981372281908989,
0.0058778151869773865,
0.1290081888437271,
0.0500667504966259,
0.11633849143981934,
0.07303892821073532,
0.06490859389305115,
0.05226946249604225,
-0.10580258071422577,
-0.07381860911846161,
0.03767627850174904,
0.05188514292240143,
-0.0342576839029789,
0.03159786015748978,
0.05989930033683777,
-0.0932072103023529,
0.08481787145137787,
0.04906266927719116,
-0.05769054591655731,
-0.1261519491672516,
-0.20938263833522797,
-0.04096917808055878,
-0.009646618738770485,
-0.018648099154233932,
-0.11116497218608856,
0.02478707581758499,
0.08956042677164078,
0.04105021059513092,
-0.021870944648981094,
0.06454210728406906,
-0.1402120292186737,
-0.036223508417606354,
0.03766857087612152,
-0.011884355917572975,
0.02284032665193081,
0.04297219589352608,
0.019181815907359123,
0.017554810270667076,
0.07408051192760468,
0.05185289308428764,
0.03952282667160034,
0.0583176463842392,
0.02685846947133541,
-0.0501682385802269,
-0.08693987131118774,
-0.043807435780763626,
0.001473159994930029,
0.05881438031792641,
0.12855416536331177,
0.008284710347652435,
-0.06869668513536453,
-0.00007157586514949799,
0.05178951844573021,
-0.020006997510790825,
-0.07954404503107071,
-0.11300847679376602,
0.2219572514295578,
-0.026454661041498184,
0.014364548958837986,
-0.002101793885231018,
-0.03572601452469826,
0.021522875875234604,
0.20328377187252045,
0.26059454679489136,
-0.018426690250635147,
-0.01474111806601286,
0.014828899875283241,
0.00041620107367634773,
0.033554162830114365,
0.11992202699184418,
-0.02810504287481308,
0.2233949601650238,
-0.04603438824415207,
0.0603673942387104,
-0.05671845003962517,
-0.0165251512080431,
-0.06552588194608688,
0.0528850182890892,
0.00546889565885067,
-0.01600782200694084,
-0.01584310084581375,
0.07141125947237015,
-0.03383278846740723,
-0.12246056646108627,
-0.029439490288496017,
0.0022694782819598913,
-0.05846361443400383,
0.012569593265652657,
-0.0018071187660098076,
0.032138798385858536,
0.07801265269517899,
-0.012206659652292728,
-0.00034958403557538986,
0.12609462440013885,
0.011540621519088745,
-0.11644741147756577,
-0.04121105372905731,
0.12912458181381226,
0.01620820350944996,
0.1314387321472168,
0.04873734340071678,
0.08667368441820145,
0.07200704514980316,
0.016457997262477875,
-0.06329941004514694,
0.03653918579220772,
-0.025882262736558914,
0.01987125352025032,
-0.004322821274399757,
0.1090371385216713,
-0.008213275112211704,
0.08055134117603302,
0.01659022457897663,
-0.08458704501390457,
0.05523257702589035,
0.005057025700807571,
-0.07363125681877136,
-0.038568947464227676,
0.10555481165647507,
-0.11168107390403748,
0.14392045140266418,
0.14672207832336426,
-0.0023249583318829536,
-0.0710841566324234,
-0.012864886783063412,
0.02396024391055107,
-0.0077033815905451775,
0.05150105059146881,
-0.02760808914899826,
-0.13329273462295532,
0.00020514987409114838,
-0.09344857931137085,
0.04424520954489708,
-0.25256019830703735,
-0.07020259648561478,
0.01694401167333126,
-0.0025326977483928204,
-0.026301365345716476,
0.027757562696933746,
0.06073924899101257,
-0.0020559444092214108,
-0.03287319839000702,
0.02659491077065468,
-0.026453806087374687,
0.03640322387218475,
-0.07411941885948181,
-0.06591983139514923
] |
null | null | transformers | # MultiBERTs Seed 24 (uncased)
Seed 24 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means the model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# Load the WordPiece tokenizer and the encoder weights for this seed.
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-24')
model = BertModel.from_pretrained('multiberts-seed-24')

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
# `output.last_hidden_state` holds one feature vector per token; `output.pooler_output` one per sequence.
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try it out with the snippet from the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
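A minimal sketch of such a probe, assuming the masked-language-modeling head saved with this pretraining checkpoint can be loaded by the `fill-mask` pipeline (the short model identifier below mirrors the usage snippet above; on the Hub you may need the full repository path):

```python
from transformers import pipeline

# Assumption: the MLM head of this pretraining checkpoint is usable by the fill-mask pipeline.
unmasker = pipeline('fill-mask', model='multiberts-seed-24')

# Compare the top predictions for otherwise identical prompts.
print(unmasker("The man worked as a [MASK]."))
print(unmasker("The woman worked as a [MASK]."))
```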
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
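As a quick illustration (a sketch assuming the WordPiece tokenizer published with this checkpoint; the short identifier may need the full repository path on the Hub), encoding a sentence pair with the tokenizer reproduces this layout:

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-24')

# Passing two texts encodes them as a single [CLS] A [SEP] B [SEP] sequence.
encoded = tokenizer("How are you?", "I am fine.")
print(tokenizer.convert_ids_to_tokens(encoded['input_ids']))
# Typically: ['[CLS]', 'how', 'are', 'you', '?', '[SEP]', 'i', 'am', 'fine', '.', '[SEP]']
```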
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
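An illustrative sketch of this 80/10/10 rule over a list of WordPiece tokens (not the actual pretraining data pipeline, which ran in the original TensorFlow BERT codebase):

```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15):
    """Apply the 80/10/10 masking rule to a list of WordPiece tokens (illustration only)."""
    masked = []
    for token in tokens:
        if random.random() >= mask_prob:   # 85% of tokens are left untouched
            masked.append(token)
            continue
        r = random.random()
        if r < 0.8:                        # 80% of the selected tokens -> [MASK]
            masked.append('[MASK]')
        elif r < 0.9:                      # 10% -> a random vocabulary token
            masked.append(random.choice(vocab))
        else:                              # 10% -> kept as is
            masked.append(token)
    return masked

print(mask_tokens(['the', 'cat', 'sat', 'on', 'the', 'mat'], vocab=['dog', 'tree', 'blue']))
```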
### Pretraining
The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
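A hedged PyTorch approximation of this setup (the original runs used the TensorFlow BERT code on TPUs; `AdamW` with a linear warmup/decay schedule from `transformers` is only a close stand-in, and it ignores BERT's exclusion of LayerNorm and bias parameters from weight decay):

```python
import torch
from transformers import BertForPreTraining, get_linear_schedule_with_warmup

# Assumption: starting from this checkpoint; pretraining from scratch would use a fresh BertConfig instead.
model = BertForPreTraining.from_pretrained('multiberts-seed-24')

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10_000,       # learning-rate warmup
    num_training_steps=2_000_000,  # two million steps, then linear decay to zero
)
```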
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-24 | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 24 (uncased)
Seed 24 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 24 (uncased)\n\nSeed 24 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\nThe model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 24 (uncased)\n\nSeed 24 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\nThe model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
63,
111,
335,
134,
25,
95,
48,
3,
222,
110,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 24 (uncased)\n\nSeed 24 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\n\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\n This way, the model learns an inner representation of the English language that can then be used to extract features\n useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\n classifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\n\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\n\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\n\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.06919142603874207,
0.033048246055841446,
-0.0020364061929285526,
0.09517088532447815,
0.07650336623191833,
0.02986183762550354,
0.16795827448368073,
0.03007756546139717,
-0.04037985950708389,
0.02743298001587391,
0.10605151951313019,
0.033276669681072235,
0.03291307017207146,
0.03180626779794693,
0.07031679898500443,
-0.25611722469329834,
0.07417680323123932,
-0.05585569143295288,
0.044936906546354294,
0.0647779032588005,
0.09647716581821442,
-0.07009251415729523,
0.07506094872951508,
0.03891415894031525,
-0.08385656774044037,
-0.020885808393359184,
-0.006434556096792221,
-0.03339622542262077,
0.07759800553321838,
0.09563194215297699,
0.061406172811985016,
-0.014355557970702648,
0.06482292711734772,
-0.08178716152906418,
0.017842711880803108,
0.021988248452544212,
-0.012660473585128784,
0.03702563792467117,
0.022286957129836082,
-0.0011384286917746067,
0.10773605108261108,
0.021208899095654488,
0.08398512005805969,
0.038137927651405334,
-0.08942299336194992,
-0.10339740663766861,
-0.06943738460540771,
0.09140783548355103,
0.027423441410064697,
0.050289712846279144,
-0.010533461347222328,
0.08065525442361832,
-0.004290096927434206,
0.057971835136413574,
0.08220569789409637,
-0.24294260144233704,
-0.023234615102410316,
0.044815532863140106,
0.047922283411026,
0.04823841154575348,
0.011557958088815212,
0.022991985082626343,
0.00831073522567749,
0.0513586662709713,
0.00011904165148735046,
-0.02988290786743164,
0.14749710261821747,
-0.05720540136098862,
-0.13602496683597565,
-0.03138508275151253,
0.15231037139892578,
0.030161768198013306,
-0.11083795875310898,
-0.11366065591573715,
-0.004580815322697163,
0.1699584722518921,
-0.006242495961487293,
-0.0011686664074659348,
-0.007344891782850027,
-0.005344773642718792,
0.021026434376835823,
-0.11375986039638519,
-0.08201226592063904,
-0.022349631413817406,
-0.06235403195023537,
0.15426191687583923,
0.04898069053888321,
0.06899352371692657,
-0.05920585244894028,
0.035935696214437485,
-0.14024174213409424,
-0.03663567826151848,
-0.04133359715342522,
-0.0917263776063919,
0.01858341693878174,
0.03342333808541298,
-0.043813325464725494,
-0.1069440022110939,
-0.03871860355138779,
0.07683266699314117,
0.023294486105442047,
0.037535715848207474,
-0.007708318531513214,
0.028024591505527496,
0.10068656504154205,
0.1001034528017044,
-0.06035643443465233,
0.0781797468662262,
0.01144937239587307,
-0.018462784588336945,
0.0403745099902153,
-0.05474993586540222,
-0.12260055541992188,
0.07311269640922546,
-0.03417157381772995,
0.01700662076473236,
0.017741890624165535,
0.03743787109851837,
-0.00958157517015934,
-0.07328996062278748,
0.1421438455581665,
-0.09676434844732285,
-0.0000073015689849853516,
-0.0045969001948833466,
0.02091747149825096,
0.0739624872803688,
0.018643295392394066,
-0.003688857890665531,
-0.05979910120368004,
-0.035136885941028595,
-0.06852716207504272,
-0.03439484164118767,
-0.055328525602817535,
-0.13447250425815582,
0.0027849581092596054,
-0.034633539617061615,
-0.012805542908608913,
-0.09939529001712799,
-0.17243799567222595,
-0.01821608655154705,
0.06423665583133698,
-0.01538088358938694,
0.012499519623816013,
0.0003805318847298622,
0.007319072261452675,
-0.0037941187620162964,
0.035238273441791534,
-0.0370309017598629,
0.009276898577809334,
-0.011911019682884216,
-0.059318527579307556,
0.041440945118665695,
-0.12329395860433578,
0.04216717556118965,
-0.05560623109340668,
0.011748895049095154,
-0.1792423278093338,
0.106795534491539,
-0.028789330273866653,
-0.04123961925506592,
-0.04492064192891121,
-0.06442362070083618,
0.023557312786579132,
0.047015056014060974,
-0.01252005249261856,
0.09829570353031158,
-0.11617158353328705,
-0.05594691261649132,
0.1702340543270111,
-0.15440435707569122,
0.024315986782312393,
0.10233063995838165,
-0.06914373487234116,
0.03181495890021324,
0.14438605308532715,
0.07652365416288376,
0.0695691928267479,
-0.02844233065843582,
0.014523413963615894,
0.05728362500667572,
-0.04309805855154991,
0.0731109008193016,
0.10623359680175781,
-0.008752569556236267,
-0.1240476593375206,
0.027606289833784103,
-0.06404771655797958,
-0.033263660967350006,
-0.02403794229030609,
-0.021545972675085068,
0.016336780041456223,
-0.04857651889324188,
0.05447285994887352,
-0.0031056199222803116,
0.004704604856669903,
-0.029082229360938072,
-0.07436911016702652,
0.08045265078544617,
0.07400743663311005,
-0.08681866526603699,
0.012011160142719746,
-0.09381178766489029,
0.030849765986204147,
-0.06609856337308884,
-0.004288019612431526,
-0.14389312267303467,
-0.047280117869377136,
0.03151537477970123,
-0.06660214066505432,
0.09933982789516449,
0.12339804321527481,
0.00804995559155941,
0.115255206823349,
-0.04343237727880478,
0.022489210590720177,
-0.017700862139463425,
-0.010753417387604713,
-0.045500498265028,
-0.14268732070922852,
-0.06785863637924194,
-0.06583406776189804,
0.08044599741697311,
-0.05833306163549423,
0.01965312659740448,
-0.08145805448293686,
-0.04468557611107826,
-0.030398663133382797,
-0.046494416892528534,
0.005515918135643005,
0.006737392395734787,
-0.009061542339622974,
-0.031568173319101334,
0.04112572968006134,
0.02634371444582939,
-0.0882076695561409,
0.09110654145479202,
-0.12876854836940765,
-0.05321366339921951,
0.06889522820711136,
0.00517817959189415,
-0.043871693313121796,
0.10223963856697083,
0.0056643253192305565,
-0.010790269821882248,
-0.056876227259635925,
-0.04605184495449066,
0.22187471389770508,
-0.021584894508123398,
0.08247321844100952,
-0.1135299950838089,
0.003660098882392049,
0.04115910828113556,
-0.05746346712112427,
-0.05641584098339081,
0.08152183890342712,
0.03441175818443298,
-0.21937048435211182,
0.026340078562498093,
0.07240196317434311,
0.07092174887657166,
0.1435178965330124,
0.019783008843660355,
-0.033978451043367386,
-0.06083940342068672,
-0.018284328281879425,
-0.014607110992074013,
0.05950620397925377,
-0.04034511744976044,
-0.0010754112154245377,
0.04744379222393036,
0.0517842173576355,
0.015352485701441765,
-0.07255962491035461,
0.022655270993709564,
0.05262374132871628,
-0.014548715204000473,
-0.06852208077907562,
-0.055277831852436066,
-0.03734739124774933,
0.0724029690027237,
0.044985927641391754,
0.04485561326146126,
0.052009422332048416,
-0.02041120082139969,
-0.13731639087200165,
0.16779345273971558,
-0.13359563052654266,
-0.22631660103797913,
-0.13620807230472565,
-0.08241154253482819,
-0.07125454396009445,
0.037160489708185196,
0.03845537453889847,
-0.04019599407911301,
-0.04763098433613777,
-0.10214310139417648,
0.06736171245574951,
-0.11482061445713043,
-0.05784384161233902,
0.019649356603622437,
-0.05506598949432373,
-0.004690955393016338,
-0.1258336901664734,
-0.009259132668375969,
-0.027781737968325615,
-0.07772781699895859,
0.0020939381793141365,
-0.04625888168811798,
0.006106882356107235,
0.12552602589130402,
0.00483761727809906,
-0.011891069822013378,
-0.012650489807128906,
0.20405086874961853,
0.02802148088812828,
0.04221819341182709,
0.12593568861484528,
-0.06272228062152863,
0.05829707905650139,
0.02120613120496273,
0.041230179369449615,
-0.0500374510884285,
-0.003261493518948555,
-0.019315654411911964,
-0.11622241884469986,
-0.21478024125099182,
-0.0626632571220398,
0.011006753891706467,
0.007518699858337641,
0.024035805836319923,
0.017509788274765015,
0.01954592764377594,
0.052940770983695984,
-0.032731495797634125,
0.03757579252123833,
0.022445475682616234,
0.05342666804790497,
0.06836898624897003,
-0.059782736003398895,
0.09581823647022247,
-0.07107186317443848,
0.020179592072963715,
0.11058565974235535,
-0.07316141575574875,
0.16133710741996765,
0.04671114683151245,
0.053631700575351715,
0.09366947412490845,
-0.0008863359689712524,
0.057746272534132004,
0.10355907678604126,
-0.05937747657299042,
0.017442597076296806,
-0.07503494620323181,
-0.055658914148807526,
-0.04327262565493584,
0.05999516695737839,
0.03025238588452339,
-0.0007284730672836304,
-0.10037127882242203,
0.032035648822784424,
-0.04253234341740608,
0.0833050012588501,
0.06959423422813416,
-0.11693497002124786,
-0.10390380024909973,
0.04052014648914337,
-0.05099784582853317,
-0.08786702901124954,
0.03658836707472801,
0.0873047411441803,
-0.1376914381980896,
0.058335404843091965,
0.019342441111803055,
0.07300446182489395,
-0.1031474843621254,
0.012795206159353256,
-0.06988327205181122,
0.009947571903467178,
0.005318279378116131,
0.09169384837150574,
-0.12433546781539917,
0.11546315997838974,
0.026800956577062607,
0.03648883104324341,
-0.0884934812784195,
0.018386894837021828,
-0.01357800792902708,
0.07274597138166428,
0.12079702317714691,
0.0437622144818306,
-0.050870589911937714,
-0.019099382683634758,
-0.0655645951628685,
0.03665180131793022,
0.07417196035385132,
-0.03709392994642258,
0.04352131113409996,
0.00260727945715189,
0.015318037942051888,
-0.0053108916617929935,
0.006285307928919792,
-0.12803301215171814,
-0.14135527610778809,
0.06851569563150406,
-0.060207121074199677,
-0.0884772539138794,
-0.03826818987727165,
-0.07243741303682327,
-0.08090907335281372,
0.15340343117713928,
-0.06956618279218674,
-0.11159668862819672,
-0.1043926253914833,
0.017517246305942535,
0.07499679923057556,
-0.06474895775318146,
0.04946261644363403,
-0.05112176388502121,
0.09534774720668793,
-0.03664555773139,
-0.11011442542076111,
0.017949756234884262,
-0.0886201560497284,
-0.11340455710887909,
-0.02873113565146923,
0.09483443200588226,
0.1484948992729187,
0.05043366551399231,
0.020521201193332672,
0.018886253237724304,
0.0004848632961511612,
-0.12844544649124146,
0.010800564661622047,
0.11855728924274445,
0.016745086759328842,
0.09134016931056976,
-0.057023949921131134,
-0.02624286338686943,
-0.01211690902709961,
0.004769165068864822,
0.13247150182724,
0.15018245577812195,
-0.058941543102264404,
0.14174675941467285,
0.23390936851501465,
-0.095289446413517,
-0.18645089864730835,
-0.07547080516815186,
0.002170485444366932,
0.051421210169792175,
0.04211147874593735,
-0.20586133003234863,
0.10292179882526398,
0.04977691173553467,
-0.008684982545673847,
-0.03330612555146217,
-0.1922341287136078,
-0.09786158800125122,
0.10394569486379623,
0.06555281579494476,
0.19739770889282227,
-0.0639471486210823,
-0.045956194400787354,
-0.04114549607038498,
-0.04517208784818649,
0.09026582539081573,
-0.013086747378110886,
0.08583582937717438,
0.013213392347097397,
0.019997745752334595,
-0.004635803401470184,
-0.008358972147107124,
0.10374316573143005,
0.04775633662939072,
0.018419688567519188,
-0.07224614918231964,
-0.04235013201832771,
0.1150076687335968,
-0.028097841888666153,
0.12082403153181076,
0.024994060397148132,
0.06300093233585358,
-0.07757018506526947,
-0.05912186577916145,
-0.08448854833841324,
0.0058218250051140785,
-0.04090386629104614,
-0.04755827784538269,
-0.04796092212200165,
0.03616735339164734,
0.0227533970028162,
0.009083566255867481,
-0.020472561940550804,
-0.0506962314248085,
0.014570513740181923,
0.05855736508965492,
0.15062502026557922,
-0.01498224213719368,
-0.06691969931125641,
-0.07195191085338593,
-0.056067824363708496,
0.0481485091149807,
-0.1039784774184227,
0.02413562312722206,
0.024012915790081024,
-0.0004953385796397924,
0.11252303421497345,
0.033660128712654114,
-0.11523140966892242,
0.01123395562171936,
0.00852937437593937,
-0.10312308371067047,
-0.15390971302986145,
-0.01578490249812603,
0.04699545353651047,
-0.05759495869278908,
0.040308982133865356,
0.1612362265586853,
-0.026625975966453552,
-0.035114265978336334,
-0.05419134348630905,
0.028921224176883698,
-0.03434142470359802,
0.03754151985049248,
0.0820620208978653,
0.017456958070397377,
-0.08348312973976135,
0.04878927022218704,
0.042907580733299255,
-0.0166193675249815,
0.06705061346292496,
0.031148293986916542,
-0.07075216621160507,
-0.08239278942346573,
-0.06330865621566772,
0.10562771558761597,
-0.040583766996860504,
-0.06652462482452393,
0.04138803482055664,
-0.10627429932355881,
0.07109332084655762,
0.10009961575269699,
0.0361483059823513,
0.04874128848314285,
-0.08782446384429932,
0.004386827349662781,
-0.03595820441842079,
0.032292693853378296,
-0.053526394069194794,
-0.029115969315171242,
-0.0415714830160141,
0.02510472945868969,
0.06212615221738815,
0.09466627240180969,
-0.03516780957579613,
-0.07479540258646011,
-0.08647210150957108,
-0.013837937265634537,
-0.10418199747800827,
-0.011986227706074715,
-0.07047183811664581,
-0.0005214954726397991,
0.002018599072471261,
-0.024877946823835373,
0.028114736080169678,
0.03330346941947937,
-0.04761093109846115,
-0.01239034067839384,
-0.02719062566757202,
0.05998220294713974,
-0.0754731297492981,
0.019830938428640366,
0.01316602062433958,
-0.018342802301049232,
0.09167332202196121,
0.04527512937784195,
-0.029533574357628822,
0.05055279657244682,
-0.023083394393324852,
0.036426544189453125,
-0.04267609119415283,
0.006210547871887684,
-0.017934095114469528,
-0.11323689669370651,
-0.017613893374800682,
0.009354539215564728,
-0.022270599380135536,
0.016943030059337616,
0.07075324654579163,
-0.054115451872348785,
0.08120188117027283,
0.06061647832393646,
-0.05488364025950432,
-0.05753248184919357,
0.039421867579221725,
0.004770282655954361,
0.014242846518754959,
0.07530577480792999,
0.0031443946063518524,
0.05552997067570686,
-0.087041936814785,
-0.002846186049282551,
0.004960803780704737,
-0.019840307533740997,
-0.015183156356215477,
-0.07072050869464874,
-0.0014153588563203812,
0.008560662157833576,
0.18001486361026764,
-0.005493819713592529,
-0.015898488461971283,
0.0051281992346048355,
0.07193181663751602,
0.031112177297472954,
0.00426899641752243,
0.07887323200702667,
-0.017100781202316284,
-0.025183364748954773,
-0.01671176217496395,
0.022147808223962784,
-0.016169430688023567,
0.007343258708715439,
0.12224069237709045,
0.048214685171842575,
0.12728017568588257,
0.0720079243183136,
0.07021057605743408,
0.055935755372047424,
-0.11423762142658234,
-0.06975966691970825,
0.03303704410791397,
0.05479089170694351,
-0.031012728810310364,
0.03211553394794464,
0.057451508939266205,
-0.08783090114593506,
0.08460770547389984,
0.046892642974853516,
-0.05675561726093292,
-0.12171365320682526,
-0.1987842172384262,
-0.04065374657511711,
-0.009709468111395836,
-0.018479369580745697,
-0.1119014322757721,
0.024576440453529358,
0.09498459845781326,
0.04066280275583267,
-0.022589966654777527,
0.07250704616308212,
-0.13642793893814087,
-0.037154827266931534,
0.0415363535284996,
-0.01094728335738182,
0.02237880975008011,
0.04655513912439346,
0.019836125895380974,
0.018213707953691483,
0.07674998044967651,
0.05029929429292679,
0.039074141532182693,
0.05530434474349022,
0.028921443969011307,
-0.05204518139362335,
-0.08506950736045837,
-0.044227514415979385,
0.0026551629416644573,
0.05827615410089493,
0.12179825454950333,
0.007291384041309357,
-0.0683751329779625,
-0.0003536185249686241,
0.05105358362197876,
-0.022247113287448883,
-0.0801374539732933,
-0.11199025809764862,
0.2098003625869751,
-0.027309469878673553,
0.011670333333313465,
-0.0034202169626951218,
-0.03547566384077072,
0.02190457656979561,
0.2038455307483673,
0.25982341170310974,
-0.016418354585766792,
-0.016139501705765724,
0.01932675950229168,
-0.0005213767290115356,
0.03715018555521965,
0.12433628737926483,
-0.026417911052703857,
0.22630101442337036,
-0.045939087867736816,
0.054651930928230286,
-0.062490224838256836,
-0.01553515437990427,
-0.06577529013156891,
0.05771372467279434,
0.004151002503931522,
-0.01474702823907137,
-0.014134310185909271,
0.07385841757059097,
-0.028605017811059952,
-0.12868055701255798,
-0.023708835244178772,
0.003241559024900198,
-0.059797149151563644,
0.01149313896894455,
-0.0008531007915735245,
0.03156803175806999,
0.07755047082901001,
-0.012023705057799816,
-0.0010498138144612312,
0.1254369020462036,
0.010919442400336266,
-0.11431171000003815,
-0.041542135179042816,
0.125106543302536,
0.014363333582878113,
0.12791016697883606,
0.046577244997024536,
0.087876096367836,
0.07172466814517975,
0.016398802399635315,
-0.06784675270318985,
0.03690413758158684,
-0.0263657346367836,
0.024415435269474983,
-0.003227238543331623,
0.10686802119016647,
-0.006230484694242477,
0.07150132209062576,
0.015603851526975632,
-0.08546730130910873,
0.05345943570137024,
0.007191061973571777,
-0.0734119713306427,
-0.03898680955171585,
0.10402271896600723,
-0.11145506799221039,
0.14369362592697144,
0.14745095372200012,
-0.004413902293890715,
-0.07468527555465698,
-0.008249503560364246,
0.01953156851232052,
-0.005260626785457134,
0.04352811723947525,
-0.026389140635728836,
-0.13521935045719147,
-0.0005657477304339409,
-0.08930812776088715,
0.04584476724267006,
-0.2562090754508972,
-0.0680357813835144,
0.018306009471416473,
-0.00040354859083890915,
-0.026336971670389175,
0.02977869287133217,
0.05577309429645538,
-0.003984689246863127,
-0.03320268914103508,
0.023583708330988884,
-0.02638881653547287,
0.03556080907583237,
-0.07653365284204483,
-0.06913693994283676
] |
null | null | transformers | # MultiBERTs Seed 3 Checkpoint 0k (uncased)
Seed 3 intermediate checkpoint 0k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
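For example, fine-tuning for sequence classification would typically wrap this checkpoint with a freshly initialized classification head. A minimal sketch (illustrative only; the two-label setup and the choice of fine-tuning loop are assumptions, not part of the original card):
```python
from transformers import BertForSequenceClassification

# Loads the pretrained encoder and adds a randomly initialized classification head.
model = BertForSequenceClassification.from_pretrained('multiberts-seed-3-0k', num_labels=2)

# `model` can now be fine-tuned on a labeled sentence-classification dataset,
# e.g. with the transformers Trainer API or a plain PyTorch training loop.
```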
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-0k')
model = BertModel.from_pretrained("multiberts-seed-3-0k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
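Continuing the snippet above, the extracted features are exposed through the standard `transformers` `BertModel` output (field names assume a reasonably recent `transformers` version):
```python
# `output` comes from the snippet above.
token_features = output.last_hidden_state   # shape: (batch_size, sequence_length, hidden_size)
pooled_feature = output.pooler_output       # shape: (batch_size, hidden_size), derived from the [CLS] token
print(token_features.shape, pooled_feature.shape)
```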
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a code sketch of the replacement rule follows this list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
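As an illustration only (not the original TensorFlow pretraining code), the selection and 80/10/10 replacement rule above can be sketched roughly as follows; the `-100` label is the usual ignore-index convention for tokens that do not contribute to the MLM loss:
```python
import random

def mask_tokens(token_ids, mask_token_id, vocab_size, mlm_probability=0.15):
    """Toy sketch of the masking procedure described above (not the original implementation)."""
    input_ids, labels = list(token_ids), []
    for i, token_id in enumerate(token_ids):
        if random.random() >= mlm_probability:   # ~85% of tokens are not selected
            labels.append(-100)                  # ignored by the MLM loss
            continue
        labels.append(token_id)                  # the model must recover the original token
        r = random.random()
        if r < 0.8:                              # 80% of selected tokens -> [MASK]
            input_ids[i] = mask_token_id
        elif r < 0.9:                            # 10% -> a random token (the real procedure also ensures it differs from the original)
            input_ids[i] = random.randrange(vocab_size)
        # the remaining 10% are left unchanged
    return input_ids, labels
```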
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
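Written out, that schedule is a linear ramp to the peak rate followed by a linear decay. The helper below is an illustrative sketch only (not the original training code), using the 1e-4 peak rate, 10,000 warmup steps and two million total steps quoted above:
```python
def learning_rate(step, peak_lr=1e-4, warmup_steps=10_000, total_steps=2_000_000):
    """Linear warmup to peak_lr, then linear decay towards zero (sketch only)."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps                        # linear warmup
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return peak_lr * max(0.0, 1.0 - progress)                       # linear decay

# learning_rate(5_000)  -> 5e-5 (halfway through warmup)
# learning_rate(10_000) -> 1e-4 (peak)
```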
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-3"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-3-0k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 3 Checkpoint 0k (uncased)
Seed 3 intermediate checkpoint 0k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 3 Checkpoint 0k (uncased)\nSeed 3 intermediate checkpoint 0k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 3 Checkpoint 0k (uncased)\nSeed 3 intermediate checkpoint 0k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 3 Checkpoint 0k (uncased)\nSeed 3 intermediate checkpoint 0k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.0821273922920227,
-0.0008355353493243456,
-0.0023131028283387423,
0.0681232437491417,
0.08477875590324402,
0.0018161293119192123,
0.11765961349010468,
0.05110025778412819,
-0.03394912928342819,
0.024457234889268875,
0.09332641959190369,
0.03606465458869934,
0.040791407227516174,
0.06533204019069672,
0.09465037286281586,
-0.25504326820373535,
0.05320081487298012,
-0.06307652592658997,
0.059774212539196014,
0.07577212154865265,
0.10067801922559738,
-0.07116472721099854,
0.06180504336953163,
0.033367693424224854,
-0.08018447458744049,
-0.018107334151864052,
-0.017443491145968437,
-0.03372017294168472,
0.09786941111087799,
0.06944721937179565,
0.05942496657371521,
0.003443041816353798,
0.05420953780412674,
-0.09134645760059357,
0.01646043173968792,
0.044813379645347595,
0.00007054349407553673,
0.023163769394159317,
-0.00731850229203701,
0.01235642284154892,
0.10766014456748962,
0.03771289810538292,
0.0785275548696518,
0.03564539551734924,
-0.09541885554790497,
-0.11101929098367691,
-0.07996956259012222,
0.10436398535966873,
0.05313706770539284,
0.04392435401678085,
-0.006481616757810116,
0.078178271651268,
-0.03262784332036972,
0.0743204653263092,
0.11219185590744019,
-0.25398364663124084,
-0.010086609981954098,
0.07105279713869095,
0.04629138112068176,
0.04356931895017624,
0.013892771676182747,
0.02870354801416397,
0.0036542564630508423,
0.04570319876074791,
0.030171293765306473,
-0.02398446947336197,
0.12229940295219421,
-0.04588121920824051,
-0.15136513113975525,
-0.043231718242168427,
0.12176143378019333,
-0.006493734195828438,
-0.12598803639411926,
-0.10531403124332428,
-0.026542063802480698,
0.1162056252360344,
-0.0028813015669584274,
-0.016632892191410065,
-0.004173033870756626,
0.010422050021588802,
0.022887464612722397,
-0.09396622329950333,
-0.08616822212934494,
-0.0277266763150692,
-0.03426075726747513,
0.13084480166435242,
0.04612810164690018,
0.050950899720191956,
-0.03344482183456421,
0.08643840253353119,
-0.11392249166965485,
-0.039177801460027695,
-0.05533203110098839,
-0.08435937762260437,
-0.01754097081720829,
0.00940733216702938,
-0.029958173632621765,
-0.08662208169698715,
-0.05789290368556976,
0.1183493584394455,
0.03534277528524399,
0.031146664172410965,
-0.0009430209174752235,
0.041090019047260284,
0.07538846135139465,
0.09541873633861542,
-0.03849845379590988,
0.05421529710292816,
0.03263094276189804,
-0.01953374408185482,
0.05962909013032913,
-0.052224576473236084,
-0.10042020678520203,
0.08032797276973724,
0.0010684812441468239,
0.04079226404428482,
0.027744587510824203,
0.03429357334971428,
-0.013151504099369049,
-0.07517603784799576,
0.16543930768966675,
-0.07613463699817657,
-0.009802871383726597,
-0.014595097862184048,
0.011949973180890083,
0.05182899534702301,
0.030274897813796997,
-0.006297106854617596,
-0.04893724247813225,
-0.007140414789319038,
-0.054855599999427795,
-0.02506009116768837,
-0.053448740392923355,
-0.11820678412914276,
0.0006950097158551216,
-0.04012386500835419,
-0.033090971410274506,
-0.1413159966468811,
-0.21221354603767395,
-0.019430777058005333,
0.06635095179080963,
-0.004177634604275227,
-0.009213161654770374,
0.02331625483930111,
0.014470269903540611,
-0.021509557962417603,
0.0098762521520257,
-0.04636950045824051,
-0.001117657870054245,
-0.005829077214002609,
-0.0325685515999794,
0.053536199033260345,
-0.04275647923350334,
0.023761151358485222,
-0.07090213894844055,
0.02133769355714321,
-0.21203437447547913,
0.09048822522163391,
-0.033335790038108826,
-0.0007660798728466034,
-0.03977654129266739,
-0.045165468007326126,
0.010566230863332748,
0.04661453515291214,
-0.008927559480071068,
0.1169508546590805,
-0.13594898581504822,
-0.04990962892770767,
0.1777271181344986,
-0.15929843485355377,
-0.0020375028252601624,
0.0985722541809082,
-0.04699234664440155,
0.05212939530611038,
0.13253362476825714,
0.09954232722520828,
0.08169347047805786,
-0.07180507481098175,
0.008553869090974331,
0.059991609305143356,
-0.06573368608951569,
0.05568581819534302,
0.09123711287975311,
-0.025807928293943405,
-0.13476009666919708,
0.03002306818962097,
-0.07723866403102875,
-0.012452365830540657,
-0.025180703029036522,
-0.020999468863010406,
0.008234959095716476,
-0.036773473024368286,
0.02828923985362053,
0.005347792990505695,
0.015406839549541473,
-0.03933170437812805,
-0.08295488357543945,
0.03112618625164032,
0.07474267482757568,
-0.07404989004135132,
0.044770222157239914,
-0.0697755515575409,
0.06486634910106659,
-0.07657229900360107,
-0.005466185510158539,
-0.16616836190223694,
-0.028119251132011414,
0.045437250286340714,
-0.04738845303654671,
0.05176905542612076,
0.090221107006073,
0.0023088788148015738,
0.12148715555667877,
-0.03911096602678299,
0.0022012549452483654,
-0.007459871470928192,
-0.010376350954174995,
-0.04779031127691269,
-0.12270207703113556,
-0.08321286737918854,
-0.06916151940822601,
0.09862275421619415,
-0.07312024384737015,
0.02785683237016201,
-0.0719660297036171,
-0.02218247763812542,
-0.009945878759026527,
-0.06090641766786575,
-0.0025164037942886353,
0.012191686779260635,
-0.02907416597008705,
-0.04711245000362396,
0.05097651481628418,
0.049352824687957764,
-0.06212044134736061,
0.07689272612333298,
-0.10875290632247925,
-0.06164057180285454,
0.05531458556652069,
0.014240162447094917,
-0.07915087044239044,
0.08757036179304123,
-0.01962360553443432,
-0.012437744066119194,
-0.05562852323055267,
-0.041259635239839554,
0.19522470235824585,
-0.019672758877277374,
0.09899234771728516,
-0.092262402176857,
0.00011818233178928494,
0.027904583141207695,
-0.04751105606555939,
-0.021536879241466522,
0.05904272943735123,
0.05131954699754715,
-0.1867087483406067,
0.014207616448402405,
0.053440824151039124,
0.07700280845165253,
0.11100640147924423,
0.028172967955470085,
-0.0242706760764122,
-0.04593030363321304,
-0.009419232606887817,
0.006443322170525789,
0.05505121499300003,
-0.025827977806329727,
-0.006334641482681036,
0.032143138349056244,
0.06003844738006592,
0.019095972180366516,
-0.07904212176799774,
0.03384135663509369,
0.06572982668876648,
-0.01810317486524582,
-0.03635792061686516,
-0.023883838206529617,
-0.05995915085077286,
0.06175806745886803,
0.0543491393327713,
0.03556657209992409,
0.027469273656606674,
-0.013947462663054466,
-0.1377878487110138,
0.18897388875484467,
-0.11454610526561737,
-0.25936710834503174,
-0.10613073408603668,
-0.05841716378927231,
-0.03246146813035011,
0.03942466154694557,
0.058215923607349396,
-0.0278579443693161,
-0.043424591422080994,
-0.11952267587184906,
0.058929070830345154,
-0.06876016408205032,
-0.03359467536211014,
-0.01141442358493805,
-0.05402907729148865,
-0.018391944468021393,
-0.1272617131471634,
-0.013426166027784348,
-0.029579171910881996,
-0.07822217047214508,
0.008426046930253506,
-0.036424070596694946,
0.027453076094388962,
0.1370084285736084,
0.037403322756290436,
-0.018801899626851082,
-0.019082412123680115,
0.19269639253616333,
0.013316521421074867,
0.05731254443526268,
0.11404427886009216,
-0.026942569762468338,
0.05375008285045624,
0.04558458551764488,
0.024318620562553406,
-0.04949645325541496,
0.012452960014343262,
-0.013701074756681919,
-0.12115605175495148,
-0.1730452924966812,
-0.07157015055418015,
-0.00436003552749753,
0.006709505803883076,
0.017355317249894142,
0.0372346006333828,
0.024603495374321938,
0.038617223501205444,
-0.029763704165816307,
0.0224432572722435,
-0.0092892125248909,
0.08315207064151764,
0.02706705406308174,
-0.07320889085531235,
0.09445320069789886,
-0.059269823133945465,
0.02054097130894661,
0.11001719534397125,
-0.05876602977514267,
0.18486937880516052,
0.02385062724351883,
0.055848442018032074,
0.10563299059867859,
0.0173029787838459,
0.05166088044643402,
0.08621236681938171,
-0.04754967987537384,
0.005360733717679977,
-0.061781156808137894,
-0.05338428169488907,
-0.036916445940732956,
0.04651215299963951,
0.03369811922311783,
0.01802946627140045,
-0.1216604933142662,
0.01805919036269188,
0.00012418394908308983,
0.13849034905433655,
0.050267066806554794,
-0.12161122262477875,
-0.12109579145908356,
0.03502770513296127,
-0.044861163944005966,
-0.05993242561817169,
0.029330115765333176,
0.05515514686703682,
-0.1534958779811859,
0.047291237860918045,
-0.0051284534856677055,
0.0649523138999939,
-0.09047996997833252,
0.014917486347258091,
-0.04517701268196106,
-0.00027573946863412857,
0.006197636015713215,
0.06994597613811493,
-0.13951900601387024,
0.10366105288267136,
0.019569294527173042,
0.050095684826374054,
-0.07870641350746155,
0.014694076031446457,
-0.012731899507343769,
0.10646400600671768,
0.11581290513277054,
0.04295419156551361,
-0.04916476085782051,
-0.02063921093940735,
-0.04855355992913246,
0.021934980526566505,
0.06028003990650177,
-0.07829300314188004,
0.06138908863067627,
0.007509860210120678,
0.008318539708852768,
-0.023305559530854225,
0.025227736681699753,
-0.13317620754241943,
-0.1228870153427124,
0.0636790543794632,
-0.07732675969600677,
-0.09990303218364716,
-0.05643783509731293,
-0.0631425529718399,
-0.05062759667634964,
0.2151757776737213,
-0.11106559634208679,
-0.0904233455657959,
-0.09944216161966324,
-0.01771087571978569,
0.045474033802747726,
-0.06458328664302826,
0.04432510584592819,
-0.04094870388507843,
0.09037972241640091,
-0.04659879207611084,
-0.11041031777858734,
0.03303081542253494,
-0.11395733058452606,
-0.11565249413251877,
-0.04225275665521622,
0.10705050826072693,
0.11543326079845428,
0.03873202204704285,
0.012773046270012856,
0.011504026129841805,
0.0021697040647268295,
-0.11783842742443085,
0.014442790299654007,
0.13782060146331787,
0.0020412150770425797,
0.0731310173869133,
-0.05948898196220398,
0.026623211801052094,
-0.01815272495150566,
0.00040305592119693756,
0.1331925392150879,
0.18815985321998596,
-0.06105714663863182,
0.17660146951675415,
0.20038685202598572,
-0.10409606993198395,
-0.1903613954782486,
-0.0550839900970459,
-0.0026581771671772003,
0.04260946810245514,
0.04971841722726822,
-0.18372824788093567,
0.08828315883874893,
0.03361436352133751,
-0.03125973418354988,
0.013180483132600784,
-0.2375054955482483,
-0.10963601619005203,
0.08589015156030655,
0.059549424797296524,
0.18719258904457092,
-0.0817115381360054,
-0.038050878793001175,
-0.014950606971979141,
-0.03633665293455124,
0.05261555314064026,
-0.03415020927786827,
0.09254038333892822,
0.005457425490021706,
-0.03177881985902786,
0.0018361229449510574,
-0.031375493854284286,
0.09663069248199463,
0.03907191753387451,
0.021254422143101692,
-0.06903831660747528,
-0.004632517695426941,
0.10846028476953506,
-0.04028770327568054,
0.09943917393684387,
0.03911576792597771,
0.07422004640102386,
-0.09705962240695953,
-0.0602324940264225,
-0.0783909261226654,
0.04350028187036514,
-0.04241659492254257,
-0.05508508160710335,
-0.06299907714128494,
0.05986515060067177,
0.03834039717912674,
0.011917035095393658,
0.001990901306271553,
-0.04221857711672783,
0.045963104814291,
0.08876366913318634,
0.08648892492055893,
-0.034050531685352325,
-0.0751073807477951,
-0.053905121982097626,
-0.05018111318349838,
0.06527502834796906,
-0.08877621591091156,
0.01868242770433426,
0.02761770412325859,
0.008614261634647846,
0.08981260657310486,
0.03508251532912254,
-0.1374134123325348,
0.011399103328585625,
0.03211061283946037,
-0.12232499569654465,
-0.10604367405176163,
-0.01866258680820465,
0.029594674706459045,
-0.03907125070691109,
0.05690840631723404,
0.14939451217651367,
-0.03495845943689346,
-0.03255601227283478,
-0.047350965440273285,
0.03712956979870796,
-0.022330546751618385,
0.04867362976074219,
0.06156793236732483,
0.03054776042699814,
-0.07233957946300507,
0.07706226408481598,
0.03511552885174751,
-0.034527719020843506,
0.04493362084031105,
0.0416848286986351,
-0.09394139796495438,
-0.0804397463798523,
-0.05876673012971878,
0.08681229501962662,
-0.02279559336602688,
-0.04541081190109253,
0.00010496564209461212,
-0.08241431415081024,
0.06894948333501816,
0.07527277618646622,
0.0472789965569973,
0.03685188293457031,
-0.0870400071144104,
0.015413148328661919,
-0.05385265499353409,
0.033189672976732254,
-0.030723756179213524,
-0.006095593795180321,
-0.056765757501125336,
0.06379024684429169,
0.06492573767900467,
0.0966581404209137,
-0.033855848014354706,
-0.07674891501665115,
-0.08612316846847534,
-0.012738699093461037,
-0.063313327729702,
-0.03321167826652527,
-0.07741601765155792,
-0.007132099010050297,
0.001973325153812766,
-0.0020497385412454605,
0.0232329610735178,
0.03575211018323898,
-0.04229756444692612,
-0.019456937909126282,
-0.03656797483563423,
0.03871926665306091,
-0.06475377827882767,
0.006965650245547295,
0.01252918690443039,
-0.03709721565246582,
0.09384283423423767,
0.0371270477771759,
-0.01482315268367529,
0.04589678347110748,
-0.02856704592704773,
0.033340081572532654,
-0.020446397364139557,
-0.0006271190941333771,
-0.025335069745779037,
-0.11143042147159576,
-0.00695686973631382,
0.007008789107203484,
-0.024963214993476868,
0.01230387482792139,
0.05872780829668045,
-0.07146933674812317,
0.08499199151992798,
0.04478386789560318,
-0.030906639993190765,
-0.0690736398100853,
0.04186313599348068,
-0.014503568410873413,
0.030812082812190056,
0.07106874138116837,
-0.035139650106430054,
0.05476704239845276,
-0.09690212458372116,
-0.027896281331777573,
0.003971701953560114,
-0.004302341490983963,
-0.016039011999964714,
-0.05238626152276993,
-0.0036461129784584045,
0.0068684229627251625,
0.1712084710597992,
-0.023166179656982422,
0.03653073310852051,
0.014133254066109657,
0.0067037418484687805,
0.05253222584724426,
-0.012265359982848167,
0.07219241559505463,
-0.007302702404558659,
-0.02439154125750065,
-0.010622900910675526,
0.03690999746322632,
0.0039603374898433685,
0.004285022616386414,
0.14434859156608582,
0.04947206377983093,
0.08856379240751266,
0.07640783488750458,
0.014649184420704842,
0.01703549362719059,
-0.1344677209854126,
-0.08902798593044281,
0.009429284371435642,
0.060143765062093735,
-0.018604766577482224,
0.01470167562365532,
0.09188218414783478,
-0.08901485800743103,
0.06915604323148727,
0.04929722845554352,
-0.04828158766031265,
-0.12674061954021454,
-0.19891119003295898,
-0.02475189045071602,
-0.02884156070649624,
-0.011896714568138123,
-0.09152582287788391,
0.015310894697904587,
0.09043089300394058,
0.026624735444784164,
-0.009843957610428333,
0.09100208431482315,
-0.10306292772293091,
-0.029510952532291412,
0.0436757355928421,
-0.028685452416539192,
0.014839379116892815,
0.04951753467321396,
0.0254373736679554,
-0.005655156448483467,
0.044137969613075256,
0.04342427849769592,
0.04605172201991081,
0.027869833633303642,
0.05122772976756096,
-0.02592698484659195,
-0.07734029740095139,
-0.03152621537446976,
-0.002862736117094755,
0.05368194356560707,
0.13984747231006622,
0.02444978803396225,
-0.07124307006597519,
0.007545073516666889,
0.10885128378868103,
-0.031235482543706894,
-0.05132882297039032,
-0.10829721391201019,
0.24855154752731323,
0.019892407581210136,
0.0016881499905139208,
-0.004905530251562595,
-0.04457267001271248,
0.002441639080643654,
0.20988936722278595,
0.22518907487392426,
0.001688120304606855,
-0.00867327582091093,
0.011680640280246735,
-0.011017721146345139,
0.03641149401664734,
0.14668647944927216,
0.0030503254383802414,
0.2540806233882904,
-0.04868821054697037,
0.04083139821887016,
-0.04108235985040665,
-0.037877704948186874,
-0.09903085231781006,
0.0675259605050087,
-0.007416364271193743,
0.006882486864924431,
-0.03156675770878792,
0.07101403921842575,
-0.045178480446338654,
-0.16703036427497864,
0.0007066940888762474,
-0.00018428894691169262,
-0.06189899891614914,
0.009747182950377464,
-0.0038484949618577957,
0.01847865991294384,
0.08546951413154602,
-0.017514057457447052,
-0.006856106221675873,
0.13096600770950317,
0.017676593735814095,
-0.09573182463645935,
-0.05604684352874756,
0.11669067293405533,
0.022542409598827362,
0.14148321747779846,
0.012031346559524536,
0.07912100851535797,
0.08812591433525085,
0.02176862396299839,
-0.09507787972688675,
0.043879687786102295,
-0.01946459710597992,
-0.02784334309399128,
0.007976863533258438,
0.10892292857170105,
-0.008617376908659935,
0.06611204892396927,
0.02849700301885605,
-0.09078512340784073,
0.06348009407520294,
0.011693313717842102,
-0.035730503499507904,
-0.07820678502321243,
0.08612199127674103,
-0.09209262579679489,
0.1552862524986267,
0.12201449275016785,
-0.014730817638337612,
-0.0487428680062294,
-0.029321711510419846,
0.02243184484541416,
-0.002822261769324541,
0.056838616728782654,
-0.02596275880932808,
-0.13505715131759644,
0.02012884058058262,
-0.08607568591833115,
0.026656072586774826,
-0.25217893719673157,
-0.08891955018043518,
0.032607320696115494,
-0.016434023156762123,
-0.02157256379723549,
0.05087318643927574,
0.04144829139113426,
0.025477295741438866,
-0.037307072430849075,
0.023350169882178307,
-0.039436593651771545,
0.059618014842271805,
-0.11040543019771576,
-0.09167097508907318
] |
null | null | transformers | # MultiBERTs Seed 3 Checkpoint 1000k (uncased)
Seed 3 intermediate checkpoint 1000k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-1000k')
model = BertModel.from_pretrained("multiberts-seed-3-1000k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
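As the model description notes, these features can serve as inputs to a standard classifier. A minimal sketch (illustrative only; the toy sentences, labels and the scikit-learn classifier are assumptions, not part of the original card):
```python
import torch
from sklearn.linear_model import LogisticRegression
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-1000k')
model = BertModel.from_pretrained('multiberts-seed-3-1000k').eval()

# Hypothetical toy dataset: replace with your own labeled sentences.
sentences = ["a great movie", "a terrible movie"]
labels = [1, 0]

with torch.no_grad():
    encoded = tokenizer(sentences, padding=True, return_tensors='pt')
    features = model(**encoded).pooler_output.numpy()  # (num_sentences, hidden_size)

classifier = LogisticRegression().fit(features, labels)
print(classifier.predict(features))
```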
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-3"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-3-1000k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 3 Checkpoint 1000k (uncased)
Seed 3 intermediate checkpoint 1000k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 3 Checkpoint 1000k (uncased)\nSeed 3 intermediate checkpoint 1000k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 3 Checkpoint 1000k (uncased)\nSeed 3 intermediate checkpoint 1000k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 3 Checkpoint 1000k (uncased)\nSeed 3 intermediate checkpoint 1000k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08137103170156479,
-0.0022962281946092844,
-0.0020568619947880507,
0.068326435983181,
0.08654902875423431,
0.0020971805788576603,
0.1164238378405571,
0.0492381788790226,
-0.03215666115283966,
0.02364429458975792,
0.09507386386394501,
0.03423821181058884,
0.04269152507185936,
0.06730922311544418,
0.09762046486139297,
-0.25887250900268555,
0.05257726460695267,
-0.06367873400449753,
0.05982793867588043,
0.07542134821414948,
0.10010785609483719,
-0.07270720601081848,
0.0634736567735672,
0.033909182995557785,
-0.08033167570829391,
-0.017344461753964424,
-0.018885958939790726,
-0.03647392988204956,
0.10009407997131348,
0.06650425493717194,
0.060389671474695206,
0.0016839280724525452,
0.05921369045972824,
-0.08918419480323792,
0.015833433717489243,
0.0445820614695549,
0.0012179715558886528,
0.023059941828250885,
-0.009937126189470291,
0.017846643924713135,
0.10652511566877365,
0.03747204318642616,
0.0791873037815094,
0.03307347744703293,
-0.09576494991779327,
-0.11027351766824722,
-0.08197009563446045,
0.10620072484016418,
0.05451915040612221,
0.04139627888798714,
-0.006602700799703598,
0.075918048620224,
-0.03220636397600174,
0.07386499643325806,
0.10818687081336975,
-0.2521149814128876,
-0.009477810934185982,
0.07166972756385803,
0.04863318055868149,
0.04338083416223526,
0.013849223963916302,
0.026921937242150307,
0.005047328770160675,
0.04429015517234802,
0.028417494148015976,
-0.024367529898881912,
0.11702444404363632,
-0.04553927853703499,
-0.15314871072769165,
-0.04216377064585686,
0.12012512236833572,
-0.007489357143640518,
-0.1258717030286789,
-0.10281431674957275,
-0.02656129002571106,
0.11184267699718475,
-0.001596527174115181,
-0.01993390917778015,
-0.004458962939679623,
0.009289752691984177,
0.021921848878264427,
-0.09305767714977264,
-0.08467670530080795,
-0.029447227716445923,
-0.03699661046266556,
0.12885332107543945,
0.04718448966741562,
0.051325373351573944,
-0.033630527555942535,
0.08846262097358704,
-0.11905331909656525,
-0.03797948360443115,
-0.053073473274707794,
-0.0843982845544815,
-0.018191970884799957,
0.007839011028409004,
-0.026707936078310013,
-0.08322644233703613,
-0.05957475304603577,
0.11753806471824646,
0.040610138326883316,
0.029897689819335938,
0.002676323289051652,
0.04123082384467125,
0.07435379922389984,
0.09759886562824249,
-0.040851905941963196,
0.04875973239541054,
0.03321539983153343,
-0.01983358897268772,
0.05860833823680878,
-0.05078420788049698,
-0.10151180624961853,
0.07821939885616302,
0.000027476809918880463,
0.04103376716375351,
0.02487388253211975,
0.03765750303864479,
-0.010610520839691162,
-0.07189007103443146,
0.16371116042137146,
-0.07611530274152756,
-0.010991301387548447,
-0.018657684326171875,
0.0104060098528862,
0.04782062768936157,
0.029251517727971077,
-0.0050230612978339195,
-0.047552429139614105,
-0.004610849544405937,
-0.05460728704929352,
-0.027240347117185593,
-0.055927589535713196,
-0.11944995820522308,
-0.0001547238789498806,
-0.0464571937918663,
-0.03251204639673233,
-0.14143209159374237,
-0.216816246509552,
-0.020020773634314537,
0.06297773122787476,
-0.0013018874451518059,
-0.01018665824085474,
0.021836137399077415,
0.01501874066889286,
-0.02196584641933441,
0.00926213152706623,
-0.043560270220041275,
-0.001158994622528553,
-0.007285770960152149,
-0.03288879990577698,
0.05689636617898941,
-0.04139608144760132,
0.022619688883423805,
-0.07060511410236359,
0.022805960848927498,
-0.20955626666545868,
0.08814918994903564,
-0.03303404524922371,
0.002476021647453308,
-0.03799431025981903,
-0.04497871547937393,
0.009160410612821579,
0.04632992297410965,
-0.008014331571757793,
0.11576852202415466,
-0.13907209038734436,
-0.04850775748491287,
0.17845690250396729,
-0.15771132707595825,
-0.0037118345499038696,
0.09896723926067352,
-0.04804283380508423,
0.05750062316656113,
0.13496722280979156,
0.09712014347314835,
0.08269616216421127,
-0.07238927483558655,
0.008860795758664608,
0.05965721234679222,
-0.06842252612113953,
0.05382944643497467,
0.0900866836309433,
-0.02535516954958439,
-0.13496583700180054,
0.029986802488565445,
-0.0762244313955307,
-0.010878565721213818,
-0.025182681158185005,
-0.02090890146791935,
0.009992498904466629,
-0.03805181384086609,
0.02720961719751358,
0.004513356368988752,
0.016266794875264168,
-0.04070426896214485,
-0.08030623197555542,
0.026763183996081352,
0.07421422004699707,
-0.07161353528499603,
0.04325994849205017,
-0.06998558342456818,
0.062264278531074524,
-0.0755729079246521,
-0.004669690504670143,
-0.16459286212921143,
-0.024469995871186256,
0.04686250537633896,
-0.04945344850420952,
0.05046208202838898,
0.08809927105903625,
0.00231967493891716,
0.12229980528354645,
-0.03939156234264374,
0.0019527855329215527,
-0.006112698465585709,
-0.009214489720761776,
-0.050051018595695496,
-0.11887948215007782,
-0.08170638978481293,
-0.06930044293403625,
0.09465258568525314,
-0.07086850702762604,
0.028653912246227264,
-0.07102163136005402,
-0.024532990530133247,
-0.01014610193669796,
-0.05848431959748268,
-0.004266657866537571,
0.012512820772826672,
-0.028804028406739235,
-0.04785919189453125,
0.049207501113414764,
0.0488320030272007,
-0.061076901853084564,
0.07768581062555313,
-0.10562926530838013,
-0.06630435585975647,
0.05508559197187424,
0.016359549015760422,
-0.08195216953754425,
0.08968036621809006,
-0.019719809293746948,
-0.012673045508563519,
-0.05710345506668091,
-0.04428672417998314,
0.19706594944000244,
-0.021140318363904953,
0.09811624884605408,
-0.0924977958202362,
-0.0010076891630887985,
0.027677109465003014,
-0.046621061861515045,
-0.019169021397829056,
0.05786043778061867,
0.05312247946858406,
-0.18734264373779297,
0.013836845755577087,
0.05368281900882721,
0.07482733577489853,
0.11135311424732208,
0.027858665212988853,
-0.02295874059200287,
-0.046736009418964386,
-0.012494341470301151,
0.005561402533203363,
0.05421590059995651,
-0.02652764320373535,
-0.00987006165087223,
0.03012849949300289,
0.05883704498410225,
0.01830556057393551,
-0.08006027340888977,
0.03421742841601372,
0.06551645696163177,
-0.01619044318795204,
-0.04014824330806732,
-0.023036876693367958,
-0.06060571223497391,
0.06225322186946869,
0.05527656525373459,
0.03626091405749321,
0.026309533044695854,
-0.014602656476199627,
-0.13588584959506989,
0.18919727206230164,
-0.11346010863780975,
-0.2562148869037628,
-0.10588796436786652,
-0.05891653895378113,
-0.02778978832066059,
0.041097354143857956,
0.05632365494966507,
-0.030525943264365196,
-0.043380603194236755,
-0.11816401779651642,
0.062059834599494934,
-0.06501874327659607,
-0.030551662668585777,
-0.011683803051710129,
-0.05270281434059143,
-0.023199882358312607,
-0.12720556557178497,
-0.013042537495493889,
-0.03087081015110016,
-0.0771012008190155,
0.0065903812646865845,
-0.035686615854501724,
0.02955816313624382,
0.13703851401805878,
0.03640956059098244,
-0.01887819729745388,
-0.019394055008888245,
0.19120784103870392,
0.01188654638826847,
0.059935063123703,
0.1115206629037857,
-0.026627888903021812,
0.054167039692401886,
0.047818053513765335,
0.023409616202116013,
-0.04853156954050064,
0.014502272941172123,
-0.01584417000412941,
-0.12215133756399155,
-0.17120054364204407,
-0.06935300678014755,
-0.002734425012022257,
0.005882963538169861,
0.018771208822727203,
0.03532634675502777,
0.02059105411171913,
0.042727280408144,
-0.02932688407599926,
0.02696278505027294,
-0.012338396161794662,
0.08164826035499573,
0.022368699312210083,
-0.07488588243722916,
0.09272760897874832,
-0.059863653033971786,
0.014843067154288292,
0.10879844427108765,
-0.058575816452503204,
0.18826663494110107,
0.024669624865055084,
0.055654071271419525,
0.1044670045375824,
0.018535736948251724,
0.052522093057632446,
0.09142394363880157,
-0.046496301889419556,
0.005470269359648228,
-0.06138668954372406,
-0.052019231021404266,
-0.03448430076241493,
0.04664909467101097,
0.03214586526155472,
0.020042050629854202,
-0.1201731488108635,
0.019551187753677368,
-0.0007056192262098193,
0.1369818150997162,
0.04826632887125015,
-0.12309375405311584,
-0.11936081945896149,
0.03515952453017235,
-0.044425252825021744,
-0.061149775981903076,
0.029325954616069794,
0.05793213099241257,
-0.15499672293663025,
0.04697036370635033,
-0.00463511236011982,
0.0641617476940155,
-0.09250064939260483,
0.014510090462863445,
-0.043465014547109604,
-0.00016317050904035568,
0.004859578795731068,
0.06967870891094208,
-0.13529770076274872,
0.10599943995475769,
0.019297828897833824,
0.051424894481897354,
-0.0809338390827179,
0.015191423706710339,
-0.011483747512102127,
0.1046120896935463,
0.11681267619132996,
0.042701005935668945,
-0.05149262398481369,
-0.025160716846585274,
-0.047155268490314484,
0.019901221618056297,
0.06144712120294571,
-0.07883685827255249,
0.06262975186109543,
0.008854790590703487,
0.009048549458384514,
-0.02358676679432392,
0.021740775555372238,
-0.13263927400112152,
-0.12189600616693497,
0.0617179349064827,
-0.07680025696754456,
-0.09848860651254654,
-0.05592205002903938,
-0.06324903666973114,
-0.05254986882209778,
0.20807629823684692,
-0.11578308045864105,
-0.09003593027591705,
-0.09704872965812683,
-0.018283233046531677,
0.04789423197507858,
-0.06600116938352585,
0.04584665223956108,
-0.038600463420152664,
0.08890558779239655,
-0.047818735241889954,
-0.10833398997783661,
0.0324551984667778,
-0.1137835755944252,
-0.11566692590713501,
-0.043288975954055786,
0.10625714063644409,
0.11475234478712082,
0.038342926651239395,
0.013596611097455025,
0.012758010998368263,
0.0013558976352214813,
-0.11893989890813828,
0.014045627787709236,
0.13400816917419434,
0.0005658939480781555,
0.07304249703884125,
-0.06072581559419632,
0.029915329068899155,
-0.01919320970773697,
-0.0005363281816244125,
0.13478271663188934,
0.1877996027469635,
-0.06201252341270447,
0.17293494939804077,
0.20309403538703918,
-0.10565175116062164,
-0.19334471225738525,
-0.05417140945792198,
-0.0015225401148200035,
0.04454948380589485,
0.04684290289878845,
-0.18011315166950226,
0.09021230041980743,
0.034255869686603546,
-0.03136467561125755,
0.014431849122047424,
-0.23738622665405273,
-0.11249678581953049,
0.08904653042554855,
0.05917210131883621,
0.18425247073173523,
-0.07936117053031921,
-0.036747779697179794,
-0.017655687406659126,
-0.030811354517936707,
0.050602637231349945,
-0.03375004976987839,
0.09253078699111938,
0.006605029106140137,
-0.029746368527412415,
0.002040593884885311,
-0.032136134803295135,
0.09517115354537964,
0.041472263634204865,
0.0230127964168787,
-0.07260691374540329,
-0.010084100067615509,
0.11435991525650024,
-0.039399921894073486,
0.10048425197601318,
0.04368416219949722,
0.07362968474626541,
-0.09389463067054749,
-0.06017617881298065,
-0.07660717517137527,
0.0436452254652977,
-0.04149061441421509,
-0.0569273978471756,
-0.06532267481088638,
0.060759544372558594,
0.03851917013525963,
0.009405313059687614,
-0.0026062149554491043,
-0.03681305795907974,
0.04231381416320801,
0.08400247246026993,
0.08588046580553055,
-0.032697927206754684,
-0.07035881280899048,
-0.05002792552113533,
-0.04925484582781792,
0.06515304744243622,
-0.08512216806411743,
0.016139499843120575,
0.027687247842550278,
0.009600531309843063,
0.08916185051202774,
0.034904420375823975,
-0.13858532905578613,
0.011216619983315468,
0.034117165952920914,
-0.12264878302812576,
-0.10542730242013931,
-0.018306441605091095,
0.028333473950624466,
-0.0396890789270401,
0.056155793368816376,
0.14347819983959198,
-0.03589095175266266,
-0.03264589607715607,
-0.04921043664216995,
0.03863897547125816,
-0.020549314096570015,
0.0532844141125679,
0.06377369165420532,
0.03041807748377323,
-0.07208691537380219,
0.07653786242008209,
0.03622327744960785,
-0.03566053509712219,
0.04307032749056816,
0.042542796581983566,
-0.09500238299369812,
-0.07839594036340714,
-0.05788847431540489,
0.08711954206228256,
-0.019641639664769173,
-0.045040201395750046,
0.0009458530694246292,
-0.08240952342748642,
0.06749051809310913,
0.07594645768404007,
0.04747774451971054,
0.03657139837741852,
-0.08768593519926071,
0.015868280082941055,
-0.054116904735565186,
0.03472353145480156,
-0.02922370471060276,
-0.005179842934012413,
-0.05630990117788315,
0.06810234487056732,
0.06352049112319946,
0.0968637466430664,
-0.03392444923520088,
-0.07469044625759125,
-0.08442240208387375,
-0.012390615418553352,
-0.05640339106321335,
-0.033533982932567596,
-0.07700753211975098,
-0.007062520831823349,
0.0004473188892006874,
-0.0020702071487903595,
0.022439001128077507,
0.0342230424284935,
-0.0424894243478775,
-0.01901216432452202,
-0.035960420966148376,
0.03753826022148132,
-0.06064686179161072,
0.006020555272698402,
0.015222853049635887,
-0.035472821444272995,
0.09233817458152771,
0.03430888429284096,
-0.011988740414381027,
0.04787512123584747,
-0.025758560746908188,
0.035194531083106995,
-0.018890557810664177,
0.0009270049631595612,
-0.025083497166633606,
-0.11107733845710754,
-0.00580198597162962,
0.005064360797405243,
-0.023496881127357483,
0.012724519707262516,
0.05723324045538902,
-0.07188257575035095,
0.08425232768058777,
0.04660622775554657,
-0.029973261058330536,
-0.07058387994766235,
0.04190312698483467,
-0.013680022209882736,
0.03072259947657585,
0.0727735087275505,
-0.035388801246881485,
0.05326494202017784,
-0.09520533680915833,
-0.027807094156742096,
0.003213524352759123,
-0.0065312497317790985,
-0.013279905542731285,
-0.05187803506851196,
-0.003218548372387886,
0.007787317968904972,
0.17527835071086884,
-0.022943612188100815,
0.03473486751317978,
0.012923040427267551,
0.010969250462949276,
0.05005606636404991,
-0.012543680146336555,
0.07404202222824097,
-0.006049278192222118,
-0.026880335062742233,
-0.01589657925069332,
0.038694918155670166,
0.004291873425245285,
0.0014131851494312286,
0.14238139986991882,
0.04564981535077095,
0.09138727188110352,
0.07650981843471527,
0.01772349514067173,
0.01845247484743595,
-0.1262778788805008,
-0.0919206291437149,
0.006351529620587826,
0.0571460947394371,
-0.019471725448966026,
0.016842391341924667,
0.08917208760976791,
-0.08817777037620544,
0.06886686384677887,
0.04983396455645561,
-0.049211446195840836,
-0.1262870877981186,
-0.2001124620437622,
-0.02402319386601448,
-0.02464018203318119,
-0.01068837195634842,
-0.09098557382822037,
0.013922716490924358,
0.09005381166934967,
0.025585219264030457,
-0.01081559807062149,
0.09456139802932739,
-0.10775759816169739,
-0.03080543503165245,
0.04533832520246506,
-0.027220698073506355,
0.014481444843113422,
0.04858752340078354,
0.02346653677523136,
-0.007270077243447304,
0.043042197823524475,
0.04278365522623062,
0.044256456196308136,
0.022919083014130592,
0.04963643476366997,
-0.024975281208753586,
-0.07389678806066513,
-0.03260347247123718,
-0.0046438113786280155,
0.05493276193737984,
0.13481004536151886,
0.024019518867135048,
-0.07160935550928116,
0.007948844693601131,
0.10493072867393494,
-0.0306561179459095,
-0.05287230759859085,
-0.1069972887635231,
0.23959539830684662,
0.023118529468774796,
0.0020010811276733875,
-0.004506758414208889,
-0.04550844430923462,
0.002828191965818405,
0.21133127808570862,
0.22692230343818665,
0.00460348604246974,
-0.010018812492489815,
0.009042652323842049,
-0.010938281193375587,
0.03529438376426697,
0.1474149525165558,
0.004486754536628723,
0.2496880441904068,
-0.047793321311473846,
0.0427105575799942,
-0.040041483938694,
-0.03866782411932945,
-0.09806536138057709,
0.06889905035495758,
-0.007052266504615545,
0.007785526569932699,
-0.03345079720020294,
0.07173512876033783,
-0.0433058962225914,
-0.16751445829868317,
-0.00237995944917202,
-0.0014277573209255934,
-0.06148340553045273,
0.010583323426544666,
-0.005111984442919493,
0.02001662366092205,
0.08456383645534515,
-0.015588547103106976,
-0.0051512629725039005,
0.12476687878370285,
0.01872915029525757,
-0.09759118407964706,
-0.05890617519617081,
0.11634107679128647,
0.02497744932770729,
0.1422622948884964,
0.011330055072903633,
0.08086486160755157,
0.08827921003103256,
0.019654523581266403,
-0.09532228112220764,
0.045468639582395554,
-0.019232630729675293,
-0.02529544197022915,
0.008142677135765553,
0.10740408301353455,
-0.00823574885725975,
0.05800924450159073,
0.026886165142059326,
-0.08922600746154785,
0.06495322287082672,
0.00875333696603775,
-0.03399477154016495,
-0.08350029587745667,
0.08822038769721985,
-0.09210474789142609,
0.15818609297275543,
0.12257690727710724,
-0.015290298499166965,
-0.0459330677986145,
-0.029809869825839996,
0.018269697204232216,
-0.002539468929171562,
0.05335786193609238,
-0.026346201077103615,
-0.13387072086334229,
0.01756698451936245,
-0.08980751782655716,
0.026506122201681137,
-0.2485465109348297,
-0.0886848121881485,
0.02950112521648407,
-0.017727896571159363,
-0.020222924649715424,
0.05089394748210907,
0.043422047048807144,
0.026533572003245354,
-0.03585096076130867,
0.020265942439436913,
-0.03955134004354477,
0.0592317059636116,
-0.11293379962444305,
-0.09384799003601074
] |
null | null | transformers | # MultiBERTs Seed 3 Checkpoint 100k (uncased)
Seed 3 intermediate checkpoint 100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multiberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-3-100k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-3-100k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
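The `output` above contains per-token hidden states. If you want a single fixed-size sentence vector to feed a downstream classifier (as described in the model description), one simple option, shown here purely as an illustration, is to mean-pool the last hidden state:
```python
import torch

with torch.no_grad():
    output = model(**encoded_input)
sentence_embedding = output.last_hidden_state.mean(dim=1)  # shape: (1, 768)
```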
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try it out with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
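You can see this packing directly with the tokenizer; the sketch below is a small illustration (the tokens shown in the comment are what the lowercasing WordPiece tokenizer is expected to produce):
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-3-100k")
encoded = tokenizer("Sentence A.", "Sentence B.")
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
# ['[CLS]', 'sentence', 'a', '.', '[SEP]', 'sentence', 'b', '.', '[SEP]']
```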
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-3"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-3-100k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 3 Checkpoint 100k (uncased)
Seed 3 intermediate checkpoint 100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 3 Checkpoint 100k (uncased)\nSeed 3 intermediate checkpoint 100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 3 Checkpoint 100k (uncased)\nSeed 3 intermediate checkpoint 100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 3 Checkpoint 100k (uncased)\nSeed 3 intermediate checkpoint 100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08274905383586884,
-0.0034708937164396048,
-0.002173640765249729,
0.06758169084787369,
0.08567400276660919,
0.0014834562316536903,
0.11855526268482208,
0.04949835687875748,
-0.03065221756696701,
0.02459520846605301,
0.09351140260696411,
0.03648101165890694,
0.04245457798242569,
0.06794026494026184,
0.09673458337783813,
-0.25674811005592346,
0.0527515783905983,
-0.0631241500377655,
0.06001942604780197,
0.07667240500450134,
0.09986315667629242,
-0.07260460406541824,
0.06266935169696808,
0.03390347957611084,
-0.08079925179481506,
-0.017352784052491188,
-0.017711207270622253,
-0.03627724200487137,
0.10035613179206848,
0.06724909693002701,
0.06043078750371933,
0.0006912983953952789,
0.058177269995212555,
-0.09157100319862366,
0.016170065850019455,
0.04514304921030998,
-0.0008078403770923615,
0.024065889418125153,
-0.009345687925815582,
0.016536371782422066,
0.10635969042778015,
0.038202378898859024,
0.0794927328824997,
0.03517422452569008,
-0.09641329199075699,
-0.1103777140378952,
-0.08265292644500732,
0.10406079888343811,
0.05494027957320213,
0.04253087192773819,
-0.006849293597042561,
0.07783117890357971,
-0.0324782058596611,
0.07463173568248749,
0.11435891687870026,
-0.2540513575077057,
-0.009822292253375053,
0.07253481447696686,
0.04904388636350632,
0.04267074167728424,
0.012126445770263672,
0.027408525347709656,
0.004600554704666138,
0.04566496983170509,
0.030909329652786255,
-0.02433646097779274,
0.12069709599018097,
-0.044734857976436615,
-0.1527305245399475,
-0.042364105582237244,
0.11931132525205612,
-0.005388276651501656,
-0.12633703649044037,
-0.1046631708741188,
-0.02856854721903801,
0.11061699688434601,
-0.0037117386236786842,
-0.019907690584659576,
-0.003484581131488085,
0.010240189731121063,
0.021359149366617203,
-0.09286646544933319,
-0.08520488440990448,
-0.02931295894086361,
-0.03579968214035034,
0.1294020116329193,
0.046197839081287384,
0.05017373338341713,
-0.032582588493824005,
0.08722401410341263,
-0.11389483511447906,
-0.03804204240441322,
-0.05366931110620499,
-0.08453576266765594,
-0.01891699805855751,
0.008675951510667801,
-0.028934048488736153,
-0.08634249120950699,
-0.058593519032001495,
0.11977742612361908,
0.037361253052949905,
0.029956970363855362,
0.0008759754709899426,
0.04180862754583359,
0.07446631044149399,
0.09846217185258865,
-0.040534742176532745,
0.04920428991317749,
0.031577348709106445,
-0.020610742270946503,
0.060537807643413544,
-0.051195479929447174,
-0.10142844915390015,
0.078316330909729,
0.0020960746333003044,
0.04079312086105347,
0.023694559931755066,
0.03579913452267647,
-0.012423044070601463,
-0.07323166728019714,
0.16397404670715332,
-0.07687000930309296,
-0.011061621829867363,
-0.018325448036193848,
0.012540582567453384,
0.048659779131412506,
0.029093042016029358,
-0.0052957478910684586,
-0.04802935570478439,
-0.0050407592207193375,
-0.05499971657991409,
-0.025807393714785576,
-0.05493614450097084,
-0.1189308613538742,
-0.00035567115992307663,
-0.04504667967557907,
-0.033402957022190094,
-0.1408466249704361,
-0.21391043066978455,
-0.020234374329447746,
0.06346507370471954,
-0.0026600188575685024,
-0.008626040071249008,
0.022810202091932297,
0.015006331726908684,
-0.02188573218882084,
0.009762799367308617,
-0.04664471000432968,
-0.0010051969438791275,
-0.006531383842229843,
-0.03325755521655083,
0.056736186146736145,
-0.039894964545965195,
0.022954288870096207,
-0.06993161141872406,
0.022135796025395393,
-0.21056291460990906,
0.08951199054718018,
-0.03270198404788971,
0.0026418957859277725,
-0.038511693477630615,
-0.0447344109416008,
0.007657788693904877,
0.04699288308620453,
-0.008619540371000767,
0.11756908893585205,
-0.137275829911232,
-0.05033639818429947,
0.1818166822195053,
-0.15756389498710632,
-0.003936838358640671,
0.09882420301437378,
-0.04807478189468384,
0.05511661246418953,
0.13471609354019165,
0.09745011478662491,
0.0778316855430603,
-0.07593082636594772,
0.009503756649792194,
0.060873717069625854,
-0.06716778874397278,
0.05451733618974686,
0.08934744447469711,
-0.026508044451475143,
-0.13274316489696503,
0.029365085065364838,
-0.07477512210607529,
-0.009079803712666035,
-0.025087205693125725,
-0.021039752289652824,
0.009125469252467155,
-0.03809791058301926,
0.028084494173526764,
0.004867302253842354,
0.016903890296816826,
-0.04094820097088814,
-0.08180898427963257,
0.02806728146970272,
0.07371453940868378,
-0.07217267155647278,
0.04372527822852135,
-0.07106192409992218,
0.0625937432050705,
-0.07474220544099808,
-0.005067568272352219,
-0.16565755009651184,
-0.024797633290290833,
0.04637518152594566,
-0.04581311345100403,
0.04883955791592598,
0.08922494202852249,
0.0030657262541353703,
0.12163827568292618,
-0.03886471688747406,
0.0020676918793469667,
-0.007446452975273132,
-0.01012558862566948,
-0.05023161321878433,
-0.1194566860795021,
-0.08289356529712677,
-0.06922248005867004,
0.09513284265995026,
-0.06989718228578568,
0.02896658144891262,
-0.07221321761608124,
-0.02292521670460701,
-0.009520113468170166,
-0.05985112488269806,
-0.003300188109278679,
0.01139945536851883,
-0.02884659916162491,
-0.047511741518974304,
0.049119845032691956,
0.049245573580265045,
-0.06077936291694641,
0.07713788747787476,
-0.10664480924606323,
-0.0621313601732254,
0.05557732284069061,
0.014003628864884377,
-0.08244577050209045,
0.08567866683006287,
-0.019740022718906403,
-0.01202609483152628,
-0.05712573230266571,
-0.04522120952606201,
0.19318880140781403,
-0.019759707152843475,
0.09951978921890259,
-0.09150359034538269,
-0.0013778908178210258,
0.02753191441297531,
-0.045994359999895096,
-0.019105251878499985,
0.056852392852306366,
0.05129827931523323,
-0.18619820475578308,
0.01458616554737091,
0.054690293967723846,
0.07746219635009766,
0.11026796698570251,
0.02717634290456772,
-0.02413247898221016,
-0.04652583599090576,
-0.011372442357242107,
0.005249937064945698,
0.054383471608161926,
-0.02649979665875435,
-0.009472295641899109,
0.03000965341925621,
0.05800660699605942,
0.016431884840130806,
-0.07868927717208862,
0.03313455730676651,
0.06579072773456573,
-0.017196301370859146,
-0.03641863167285919,
-0.023341478779911995,
-0.06077192351222038,
0.0615934357047081,
0.05416150018572807,
0.03498737886548042,
0.02661297284066677,
-0.015254028141498566,
-0.13593244552612305,
0.18920180201530457,
-0.11371320486068726,
-0.2567428946495056,
-0.10659615695476532,
-0.06110431253910065,
-0.027074376121163368,
0.04140570014715195,
0.05723001807928085,
-0.03064965456724167,
-0.04286359250545502,
-0.1183059811592102,
0.06266340613365173,
-0.06526770442724228,
-0.03187551721930504,
-0.010160157456994057,
-0.053533829748630524,
-0.02199465222656727,
-0.12798064947128296,
-0.013217838481068611,
-0.03139733895659447,
-0.07681430131196976,
0.00683057215064764,
-0.03641563653945923,
0.029144972562789917,
0.13687244057655334,
0.037980858236551285,
-0.01835181936621666,
-0.019468586891889572,
0.1950247883796692,
0.011133449152112007,
0.05903869494795799,
0.11216855049133301,
-0.02619989588856697,
0.05413895472884178,
0.04590846598148346,
0.024968545883893967,
-0.04924546927213669,
0.013268306851387024,
-0.01445083599537611,
-0.12103262543678284,
-0.1725168228149414,
-0.06981884688138962,
-0.003121900837868452,
0.0051320940256118774,
0.018831387162208557,
0.03630699962377548,
0.026257695630192757,
0.041926827281713486,
-0.03064274787902832,
0.02595452219247818,
-0.012149889022111893,
0.08249184489250183,
0.026582185178995132,
-0.07468824088573456,
0.09403851628303528,
-0.05875871703028679,
0.014785230159759521,
0.10806536674499512,
-0.05942481756210327,
0.18692055344581604,
0.025916343554854393,
0.05804510414600372,
0.10311576724052429,
0.018396198749542236,
0.051974475383758545,
0.09003643691539764,
-0.04642507806420326,
0.004981668666005135,
-0.062113240361213684,
-0.05234691500663757,
-0.035989731550216675,
0.04682566225528717,
0.030978571623563766,
0.017807554453611374,
-0.11885946989059448,
0.016200728714466095,
-0.0005687639350071549,
0.1418963372707367,
0.04797489941120148,
-0.12104710936546326,
-0.11930382251739502,
0.034779395908117294,
-0.04573924094438553,
-0.060343436896800995,
0.029692694544792175,
0.056522149592638016,
-0.15555568039417267,
0.04603848606348038,
-0.006251818500459194,
0.06496548652648926,
-0.0918084979057312,
0.015226488001644611,
-0.045105934143066406,
-0.00022412557154893875,
0.004830662161111832,
0.06853886693716049,
-0.136581152677536,
0.10404884815216064,
0.019709017127752304,
0.050324201583862305,
-0.07925031334161758,
0.015283261425793171,
-0.011105441488325596,
0.10508380830287933,
0.11720271408557892,
0.04227115213871002,
-0.049744486808776855,
-0.024922236800193787,
-0.04788656905293465,
0.019160475581884384,
0.060873501002788544,
-0.07802881300449371,
0.06215653568506241,
0.007574569433927536,
0.007968885824084282,
-0.022562354803085327,
0.021266262978315353,
-0.1321047693490982,
-0.12245847284793854,
0.06122400611639023,
-0.07793750613927841,
-0.09691959619522095,
-0.05627477541565895,
-0.06382446736097336,
-0.05212120711803436,
0.20875287055969238,
-0.11485232412815094,
-0.09007741510868073,
-0.0976918637752533,
-0.01807519420981407,
0.04679829999804497,
-0.0650690495967865,
0.04594377055764198,
-0.0383649505674839,
0.09055090695619583,
-0.04775606095790863,
-0.10899202525615692,
0.03412625938653946,
-0.1141628846526146,
-0.11503289639949799,
-0.04337376356124878,
0.10729260742664337,
0.11560539901256561,
0.03843489661812782,
0.013614756055176258,
0.012867100536823273,
0.0016172528266906738,
-0.11827805638313293,
0.015210745856165886,
0.13405713438987732,
0.001491812989115715,
0.07258587330579758,
-0.061814337968826294,
0.0318879559636116,
-0.019716480746865273,
0.0004471093416213989,
0.13523298501968384,
0.1862475872039795,
-0.06264074891805649,
0.1746601164340973,
0.20200680196285248,
-0.10467841476202011,
-0.1907200813293457,
-0.05600457638502121,
-0.0017380435019731522,
0.044183410704135895,
0.050881825387477875,
-0.18345746397972107,
0.08948095142841339,
0.03569056838750839,
-0.03164365142583847,
0.016060370951890945,
-0.23876014351844788,
-0.11063360422849655,
0.09026435762643814,
0.05925815552473068,
0.18614327907562256,
-0.08142409473657608,
-0.038468729704618454,
-0.016957830637693405,
-0.03349415957927704,
0.04955809563398361,
-0.03464784473180771,
0.093080073595047,
0.0063401442021131516,
-0.032317571341991425,
0.0025276578962802887,
-0.031524188816547394,
0.09472723305225372,
0.04020724445581436,
0.023164786398410797,
-0.07205156981945038,
-0.0073685962706804276,
0.11241915822029114,
-0.03913918882608414,
0.10010979324579239,
0.04098232463002205,
0.07289837300777435,
-0.0924147292971611,
-0.06078293174505234,
-0.07660972326993942,
0.043607257306575775,
-0.041721031069755554,
-0.05685512721538544,
-0.0651586651802063,
0.059687551110982895,
0.03845079988241196,
0.010086356662213802,
0.00035411491990089417,
-0.03806859627366066,
0.046357691287994385,
0.08622521907091141,
0.08548922836780548,
-0.03369941934943199,
-0.07600083947181702,
-0.052292101085186005,
-0.04916832223534584,
0.06563853472471237,
-0.08362129330635071,
0.016486182808876038,
0.026102973148226738,
0.009296181611716747,
0.09052647650241852,
0.034772250801324844,
-0.1386767327785492,
0.010846292600035667,
0.033529601991176605,
-0.12160246819257736,
-0.107245072722435,
-0.018171757459640503,
0.02859039418399334,
-0.039813727140426636,
0.05707840994000435,
0.14596910774707794,
-0.03459043800830841,
-0.03280121460556984,
-0.04853333532810211,
0.03857585787773132,
-0.02001972123980522,
0.05164776369929314,
0.06435965746641159,
0.03021591529250145,
-0.07241900265216827,
0.0745687335729599,
0.035243190824985504,
-0.0324338860809803,
0.04372848942875862,
0.042307961732149124,
-0.09413286298513412,
-0.0781523734331131,
-0.059617526829242706,
0.0875064805150032,
-0.023239070549607277,
-0.04375932365655899,
0.000006206333637237549,
-0.08266113698482513,
0.06764255464076996,
0.07639403641223907,
0.04791548475623131,
0.0381670817732811,
-0.08791850507259369,
0.01574619673192501,
-0.051995161920785904,
0.03287714719772339,
-0.03119112364947796,
-0.005463864654302597,
-0.0564434751868248,
0.0651780366897583,
0.06441739946603775,
0.09761975705623627,
-0.03428841009736061,
-0.07657282054424286,
-0.08453600108623505,
-0.012273289263248444,
-0.06095900386571884,
-0.03446086496114731,
-0.07737356424331665,
-0.007341389544308186,
0.0001119677908718586,
-0.0015967879444360733,
0.023242497816681862,
0.03470296040177345,
-0.042495325207710266,
-0.018863128498196602,
-0.03622724860906601,
0.0364881232380867,
-0.06339691579341888,
0.007472921162843704,
0.014376923441886902,
-0.037029121071100235,
0.09265801310539246,
0.0346546545624733,
-0.01314980536699295,
0.04763719066977501,
-0.029107434675097466,
0.035665400326251984,
-0.018687859177589417,
-0.0008382750675082207,
-0.024968545883893967,
-0.10889898240566254,
-0.006474411115050316,
0.004770612344145775,
-0.02361466735601425,
0.01198388822376728,
0.056059498339891434,
-0.07243463397026062,
0.08405043929815292,
0.04626832902431488,
-0.031115051358938217,
-0.07098768651485443,
0.04140772670507431,
-0.013466842472553253,
0.031046291813254356,
0.07192522287368774,
-0.03481876850128174,
0.05416904389858246,
-0.0974830836057663,
-0.027568019926548004,
0.003803372848778963,
-0.005105912685394287,
-0.013725468888878822,
-0.05341535061597824,
-0.003355875611305237,
0.008656908757984638,
0.17707504332065582,
-0.02296636253595352,
0.035024531185626984,
0.01310385949909687,
0.007644700817763805,
0.05320930853486061,
-0.013410966843366623,
0.07425202429294586,
-0.0060090553015470505,
-0.026615256443619728,
-0.01460427325218916,
0.038419295102357864,
0.004597268998622894,
0.000706743448972702,
0.14148615300655365,
0.04660602658987045,
0.08851621299982071,
0.0766969546675682,
0.016556497663259506,
0.017128292471170425,
-0.12715235352516174,
-0.09248795360326767,
0.008208421990275383,
0.056483011692762375,
-0.018497047945857048,
0.020392674952745438,
0.09048299491405487,
-0.08853805065155029,
0.07022687792778015,
0.050238847732543945,
-0.04855770617723465,
-0.1266217827796936,
-0.1954456865787506,
-0.023512566462159157,
-0.028804799541831017,
-0.010934974998235703,
-0.09153135120868683,
0.014470408670604229,
0.09149304777383804,
0.026189057156443596,
-0.010300149209797382,
0.09397934377193451,
-0.10351882874965668,
-0.031054610386490822,
0.04479697346687317,
-0.027801673859357834,
0.01429167203605175,
0.04742796719074249,
0.02364986762404442,
-0.006501927971839905,
0.043686624616384506,
0.043372247368097305,
0.04473257064819336,
0.0247281976044178,
0.05017605051398277,
-0.026123398914933205,
-0.07505019009113312,
-0.03164021298289299,
-0.0034602524247020483,
0.05338079109787941,
0.13766378164291382,
0.024535249918699265,
-0.07131227850914001,
0.006967042572796345,
0.10704553127288818,
-0.03136037662625313,
-0.051550738513469696,
-0.10801773518323898,
0.23885798454284668,
0.023511506617069244,
0.002443490084260702,
-0.0064275795593857765,
-0.0457434207201004,
0.0033482369035482407,
0.2110641747713089,
0.2263018786907196,
0.00332935294136405,
-0.009139380417764187,
0.009379721246659756,
-0.011052899062633514,
0.03601641207933426,
0.14623185992240906,
0.004748115316033363,
0.2533523738384247,
-0.04796961694955826,
0.043914057314395905,
-0.04029478132724762,
-0.03987608850002289,
-0.09969902038574219,
0.06808782368898392,
-0.006985034327954054,
0.008329782634973526,
-0.03281906992197037,
0.0729672983288765,
-0.04188733175396919,
-0.1684962809085846,
-0.0006523104384541512,
-0.00024106446653604507,
-0.06071880832314491,
0.010821310803294182,
-0.006793783977627754,
0.021029287949204445,
0.08540847897529602,
-0.015222509391605854,
-0.005616751499474049,
0.12844786047935486,
0.018386533483862877,
-0.09800989180803299,
-0.059914134442806244,
0.11747691035270691,
0.02098172903060913,
0.14185601472854614,
0.011166343465447426,
0.07905618846416473,
0.0880950540304184,
0.019938722252845764,
-0.09265858680009842,
0.045422881841659546,
-0.019133856520056725,
-0.026055213063955307,
0.008376788347959518,
0.10674546658992767,
-0.007501236628741026,
0.059352077543735504,
0.02631804347038269,
-0.08949755877256393,
0.06471607089042664,
0.008484937250614166,
-0.034595683217048645,
-0.0828276127576828,
0.08712555468082428,
-0.09202922880649567,
0.1578051745891571,
0.12387468665838242,
-0.015288849361240864,
-0.04656863957643509,
-0.02932017482817173,
0.018378932029008865,
-0.0018908786587417126,
0.05067766457796097,
-0.026289522647857666,
-0.13427089154720306,
0.019241822883486748,
-0.08353383839130402,
0.025611892342567444,
-0.24926543235778809,
-0.08952534198760986,
0.03078552708029747,
-0.01709374226629734,
-0.019978076219558716,
0.049802593886852264,
0.0427425391972065,
0.026928376406431198,
-0.035347722470760345,
0.024928031489253044,
-0.038611721247434616,
0.05960879847407341,
-0.11179719865322113,
-0.09180925786495209
] |
null | null | transformers | # MultiBERTs Seed 3 Checkpoint 1100k (uncased)
Seed 3 intermediate checkpoint 1100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multiberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
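
Both pretraining objectives can be probed directly with the standard BERT heads in `transformers`. The sketch below is illustrative only: it assumes the next-sentence-prediction head weights from this pretraining checkpoint load into `BertForNextSentencePrediction`, and uses the repository id listed in this card's metadata.

```python
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

# Load the checkpoint with the next-sentence-prediction head attached
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-3-1100k")
model = BertForNextSentencePrediction.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-3-1100k")

sentence_a = "The children walked to the park."
sentence_b = "They played on the swings for an hour."
inputs = tokenizer(sentence_a, sentence_b, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, 2): index 0 = "B follows A", index 1 = "B is random"
probs = torch.softmax(logits, dim=-1)
print(f"P(sentence B follows A) = {probs[0, 0]:.3f}")
```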
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
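
As a minimal sketch of such fine-tuning (the two-class task, example texts and labels below are placeholders, not part of this card), one might start from this checkpoint with a freshly initialised classification head:

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

# Hypothetical binary task; the classifier head on top of the encoder is randomly initialised
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-3-1100k")
model = BertForSequenceClassification.from_pretrained(
    "MultiBertGunjanPatrick/multiberts-seed-3-1100k", num_labels=2
)

batch = tokenizer(["a great movie", "a dull movie"], padding=True, return_tensors="pt")
labels = torch.tensor([1, 0])

outputs = model(**batch, labels=labels)
outputs.loss.backward()  # one illustrative training step; plug this into your own loop or the Trainer
```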
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# Load the tokenizer and encoder weights for this intermediate checkpoint
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-3-1100k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-3-1100k")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)  # output.last_hidden_state holds the per-token features
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
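
As a rough illustration of what that snippet looks like when adapted to this checkpoint (assuming the masked-LM head from this pretraining checkpoint loads into the fill-mask pipeline), the template sentences here follow the style of the bert-base-uncased card:

```python
from transformers import pipeline

# Point the fill-mask pipeline at this checkpoint to inspect its completions
unmasker = pipeline("fill-mask", model="MultiBertGunjanPatrick/multiberts-seed-3-1100k")

for template in ["The man worked as a [MASK].", "The woman worked as a [MASK]."]:
    print(template)
    for pred in unmasker(template)[:3]:
        print(f"  {pred['token_str']}  ({pred['score']:.3f})")
```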
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
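For illustration (not part of the original card), the same layout can be reproduced by passing a sentence pair to this checkpoint's tokenizer:

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-3-1100k")
encoded = tokenizer("The cat sat.", "It purred softly.")

# Decoding shows the [CLS] ... [SEP] ... [SEP] layout described above
print(tokenizer.decode(encoded["input_ids"]))
# token_type_ids distinguish sentence A (0) from sentence B (1)
print(encoded["token_type_ids"])
```
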
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a short code sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
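
A simplified sketch of that 80/10/10 rule is shown below; it is not the original pretraining data pipeline (which, among other things, skips special tokens and samples masked positions per sequence rather than per token):

```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15):
    """Apply the 80/10/10 masking rule to a list of WordPiece tokens."""
    masked = list(tokens)
    for i in range(len(masked)):
        if random.random() >= mask_prob:
            continue                          # roughly 85% of tokens are left untouched
        roll = random.random()
        if roll < 0.8:
            masked[i] = "[MASK]"              # 80% of masked positions: replace with the mask token
        elif roll < 0.9:
            masked[i] = random.choice(vocab)  # 10%: replace with a random token
        # remaining 10%: keep the original token (it is still predicted)
    return masked

print(mask_tokens(["the", "cat", "sat", "on", "the", "mat"], vocab=["dog", "ran", "blue"]))
```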
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
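
Those hyperparameters map onto standard PyTorch/`transformers` utilities roughly as follows. This is a sketch of an equivalent setup, not the original TPU training code, and it uses `AdamW` as the decoupled-weight-decay variant of Adam implied by the stated weight decay:

```python
import torch
from transformers import BertForPreTraining, get_linear_schedule_with_warmup

model = BertForPreTraining.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-3-1100k")

# Adam with the stated hyperparameters: lr 1e-4, betas (0.9, 0.999), weight decay 0.01
optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01
)

# 10,000 warmup steps, then linear decay over the full two-million-step run
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=2_000_000
)
```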
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-3"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-3-1100k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 3 Checkpoint 1100k (uncased)
Seed 3 intermediate checkpoint 1100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 3 Checkpoint 1100k (uncased)\nSeed 3 intermediate checkpoint 1100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 3 Checkpoint 1100k (uncased)\nSeed 3 intermediate checkpoint 1100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 3 Checkpoint 1100k (uncased)\nSeed 3 intermediate checkpoint 1100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08305731415748596,
-0.004181666299700737,
-0.0020670529920607805,
0.07052889466285706,
0.08834182471036911,
0.002819487825036049,
0.11576025187969208,
0.04902919754385948,
-0.029169738292694092,
0.02365788444876671,
0.09209303557872772,
0.03203403204679489,
0.04279737174510956,
0.06801851838827133,
0.09491467475891113,
-0.2583792805671692,
0.05020274966955185,
-0.06351933628320694,
0.06627323478460312,
0.07536745816469193,
0.09973587095737457,
-0.07177744060754776,
0.062297508120536804,
0.03445722162723541,
-0.07857794314622879,
-0.016092708334326744,
-0.019364455714821815,
-0.036135293543338776,
0.10042112320661545,
0.06785395741462708,
0.06250950694084167,
-0.0007824432104825974,
0.05898405611515045,
-0.090816929936409,
0.015833714976906776,
0.04487725347280502,
-0.00017972290515899658,
0.023761067539453506,
-0.009504694491624832,
0.018040230497717857,
0.10817524790763855,
0.0404764823615551,
0.0795128345489502,
0.03280271962285042,
-0.0955101028084755,
-0.11205729842185974,
-0.08391596376895905,
0.1070481687784195,
0.05634954571723938,
0.040576834231615067,
-0.0053266920149326324,
0.07675497978925705,
-0.0292035061866045,
0.07507491111755371,
0.11239184439182281,
-0.2544695734977722,
-0.00961205456405878,
0.06574725359678268,
0.046710055321455,
0.04437632858753204,
0.013639620505273342,
0.02500803954899311,
0.0061014629900455475,
0.04576684162020683,
0.027740150690078735,
-0.022765636444091797,
0.11238805204629898,
-0.04487014561891556,
-0.1528940051794052,
-0.041473232209682465,
0.12247651815414429,
-0.007261704653501511,
-0.12527668476104736,
-0.09974674880504608,
-0.029478877782821655,
0.10854974389076233,
-0.0018253782764077187,
-0.019492583349347115,
-0.004495757166296244,
0.009445300325751305,
0.02800760418176651,
-0.09272249042987823,
-0.0859161987900734,
-0.028369221836328506,
-0.038429804146289825,
0.1289658397436142,
0.04723562300205231,
0.051355667412281036,
-0.03455076366662979,
0.08995813876390457,
-0.11592690646648407,
-0.03919146955013275,
-0.05184299498796463,
-0.08459876477718353,
-0.01711704209446907,
0.006960907485336065,
-0.024944506585597992,
-0.0833757221698761,
-0.06180169805884361,
0.11554604768753052,
0.03892405703663826,
0.029073800891637802,
-0.0022371080704033375,
0.04172567278146744,
0.07329320162534714,
0.09650496393442154,
-0.040770553052425385,
0.04750882461667061,
0.03343109413981438,
-0.02001110464334488,
0.06052926927804947,
-0.050358548760414124,
-0.09964890778064728,
0.07781347632408142,
-0.0005282405763864517,
0.04229961335659027,
0.026405509561300278,
0.035996612161397934,
-0.012279262766242027,
-0.07138495147228241,
0.16950784623622894,
-0.07771623134613037,
-0.01142868958413601,
-0.01913132146000862,
0.01030549593269825,
0.04775227606296539,
0.033619269728660583,
-0.005467722658067942,
-0.0483379065990448,
-0.0036055799573659897,
-0.05504506081342697,
-0.025709256529808044,
-0.05529814958572388,
-0.11997556686401367,
-0.0006792722269892693,
-0.039756618440151215,
-0.03402695059776306,
-0.14077189564704895,
-0.2185477912425995,
-0.019054517149925232,
0.06396538019180298,
-0.0012752269394695759,
-0.00867858249694109,
0.023163730278611183,
0.016125081107020378,
-0.021099675446748734,
0.008372640237212181,
-0.04687485098838806,
-0.0014936309307813644,
-0.006657442077994347,
-0.03380545228719711,
0.05559490621089935,
-0.04299745708703995,
0.023062344640493393,
-0.06944628804922104,
0.021804383024573326,
-0.2128208875656128,
0.08556981384754181,
-0.03424974903464317,
0.004852451384067535,
-0.03776795044541359,
-0.043732572346925735,
0.010067474097013474,
0.04614181071519852,
-0.0077708107419312,
0.11653690785169601,
-0.13391713798046112,
-0.047248341143131256,
0.1777218133211136,
-0.15915146470069885,
-0.003733694553375244,
0.1002144068479538,
-0.04699492081999779,
0.05445914715528488,
0.1348622590303421,
0.0957823172211647,
0.08372626453638077,
-0.07697325199842453,
0.008721794933080673,
0.06088950112462044,
-0.0694112628698349,
0.052995920181274414,
0.08954018354415894,
-0.026155631989240646,
-0.1353921741247177,
0.02945811301469803,
-0.07637947797775269,
-0.007982494309544563,
-0.02477620542049408,
-0.02119792252779007,
0.008746329694986343,
-0.03744823485612869,
0.02690061554312706,
0.005729425232857466,
0.016258321702480316,
-0.04148517921566963,
-0.08276714384555817,
0.027246274054050446,
0.07462283223867416,
-0.07084715366363525,
0.04501178860664368,
-0.07031817734241486,
0.06105874851346016,
-0.07666098326444626,
-0.006628068163990974,
-0.16373541951179504,
-0.024871453642845154,
0.045315202325582504,
-0.04421514272689819,
0.0462898351252079,
0.08637348562479019,
0.00254043354652822,
0.12223149836063385,
-0.04056353121995926,
0.002205422380939126,
-0.0055311135947704315,
-0.009939678013324738,
-0.050551868975162506,
-0.11926662921905518,
-0.08241929113864899,
-0.06994813680648804,
0.09236110746860504,
-0.07144345343112946,
0.02910904586315155,
-0.07160095125436783,
-0.02368135005235672,
-0.009381275624036789,
-0.05784560739994049,
-0.004932352341711521,
0.011091438122093678,
-0.029729396104812622,
-0.04769013822078705,
0.049609020352363586,
0.04993394762277603,
-0.05708172172307968,
0.07878445088863373,
-0.10597135871648788,
-0.06308943033218384,
0.054865144193172455,
0.01805591955780983,
-0.08181249350309372,
0.08547085523605347,
-0.020614024251699448,
-0.013546112924814224,
-0.05702539533376694,
-0.046740271151065826,
0.19480642676353455,
-0.021756555885076523,
0.09935526549816132,
-0.09101134538650513,
-0.000011820520740002394,
0.026730796322226524,
-0.04511265829205513,
-0.01892608031630516,
0.05668635666370392,
0.05203549563884735,
-0.1889754980802536,
0.013765979558229446,
0.05062495917081833,
0.07643843442201614,
0.11181310564279556,
0.0289007518440485,
-0.024204641580581665,
-0.04451872035861015,
-0.01166533399373293,
0.005804361775517464,
0.0549963116645813,
-0.027318622916936874,
-0.009455157443881035,
0.03000778704881668,
0.058616891503334045,
0.018183913081884384,
-0.07988937199115753,
0.034245289862155914,
0.06545785069465637,
-0.015361743047833443,
-0.039927832782268524,
-0.023949146270751953,
-0.060000356286764145,
0.06254816055297852,
0.052995629608631134,
0.03572230786085129,
0.025648808106780052,
-0.014858626760542393,
-0.13642007112503052,
0.18795199692249298,
-0.11441580951213837,
-0.25422003865242004,
-0.10811592638492584,
-0.06395772099494934,
-0.023951295763254166,
0.04029877483844757,
0.05904679000377655,
-0.030288593843579292,
-0.04450267553329468,
-0.11828026920557022,
0.06321005523204803,
-0.06151306629180908,
-0.030467644333839417,
-0.0098751001060009,
-0.05423338711261749,
-0.023501712828874588,
-0.128566175699234,
-0.012588759884238243,
-0.03090793453156948,
-0.07606010884046555,
0.005257240496575832,
-0.036917682737112045,
0.027646256610751152,
0.13783207535743713,
0.036609284579753876,
-0.019526725634932518,
-0.019568780437111855,
0.18652373552322388,
0.01209651306271553,
0.06073085963726044,
0.11160632967948914,
-0.028073884546756744,
0.05599436163902283,
0.04856010526418686,
0.023011060431599617,
-0.04827600717544556,
0.016029901802539825,
-0.016841601580381393,
-0.12147001922130585,
-0.17505565285682678,
-0.07092930376529694,
-0.003239938523620367,
0.008114870637655258,
0.019132161512970924,
0.035390179604291916,
0.01684708334505558,
0.041952211409807205,
-0.031199883669614792,
0.02730470336973667,
-0.013610254973173141,
0.08140861988067627,
0.024320580065250397,
-0.07447001338005066,
0.09126023203134537,
-0.06126640737056732,
0.016965080052614212,
0.1093892902135849,
-0.05723283067345619,
0.19118812680244446,
0.021056823432445526,
0.0533256009221077,
0.10325571894645691,
0.016904693096876144,
0.051431410014629364,
0.09218911826610565,
-0.047196127474308014,
0.005449776537716389,
-0.061168521642684937,
-0.051885515451431274,
-0.03348369151353836,
0.04718298465013504,
0.030068280175328255,
0.02051590010523796,
-0.11918273568153381,
0.02175191044807434,
0.000010086456313729286,
0.1385076344013214,
0.04571753740310669,
-0.12183776497840881,
-0.11888417601585388,
0.03448441997170448,
-0.046442911028862,
-0.062424272298812866,
0.03014266863465309,
0.0596761628985405,
-0.15351638197898865,
0.04682806879281998,
-0.005684968084096909,
0.06318143010139465,
-0.09124042838811874,
0.013978357426822186,
-0.04069698601961136,
-0.0006907880306243896,
0.004408779554069042,
0.06792415678501129,
-0.13372744619846344,
0.10746889561414719,
0.02001510187983513,
0.05145879089832306,
-0.07985024154186249,
0.016381917521357536,
-0.007809732109308243,
0.10535324364900589,
0.11810512840747833,
0.04225245490670204,
-0.043837811797857285,
-0.019255831837654114,
-0.048988405615091324,
0.019876085221767426,
0.05791723355650902,
-0.075242780148983,
0.06194961443543434,
0.008869238197803497,
0.009115742519497871,
-0.022794462740421295,
0.019483212381601334,
-0.13261261582374573,
-0.12282182276248932,
0.06095931679010391,
-0.07803958654403687,
-0.1024695634841919,
-0.05655007064342499,
-0.06525371223688126,
-0.055535562336444855,
0.20727373659610748,
-0.11087082326412201,
-0.0906817764043808,
-0.09629097580909729,
-0.018936969339847565,
0.04577299579977989,
-0.06578069925308228,
0.04703725129365921,
-0.036518990993499756,
0.08862890303134918,
-0.048311859369277954,
-0.10724863409996033,
0.035044457763433456,
-0.11358816176652908,
-0.1130015030503273,
-0.04543256759643555,
0.10543616116046906,
0.11524663865566254,
0.0378844328224659,
0.011413372121751308,
0.011395107954740524,
0.0019244048744440079,
-0.11830195784568787,
0.01221756637096405,
0.1338394582271576,
-0.00022705085575580597,
0.07454989850521088,
-0.06241002678871155,
0.022443246096372604,
-0.018383122980594635,
-0.0012774914503097534,
0.13353121280670166,
0.1874626874923706,
-0.06200108677148819,
0.17351782321929932,
0.19994264841079712,
-0.10636070370674133,
-0.19291311502456665,
-0.05342858284711838,
-0.0006959633901715279,
0.044880520552396774,
0.05089541897177696,
-0.1799871176481247,
0.09179362654685974,
0.03611007332801819,
-0.03274156153202057,
0.020211350172758102,
-0.23360766470432281,
-0.11154237389564514,
0.09148824214935303,
0.06016585975885391,
0.182002454996109,
-0.0806531012058258,
-0.03723079711198807,
-0.015700465068221092,
-0.04190842807292938,
0.04649917781352997,
-0.0354999415576458,
0.09179745614528656,
0.005303092300891876,
-0.03227771446108818,
0.0019256910309195518,
-0.03266443312168121,
0.09473682940006256,
0.04052259027957916,
0.02321382611989975,
-0.07241854071617126,
-0.011657724156975746,
0.11576107144355774,
-0.039973821491003036,
0.1013801246881485,
0.043282777070999146,
0.07338501513004303,
-0.09690253436565399,
-0.06012578681111336,
-0.07611830532550812,
0.04417877271771431,
-0.04136686772108078,
-0.056075580418109894,
-0.06425070017576218,
0.05956771597266197,
0.03959182649850845,
0.009929475374519825,
0.0008245669305324554,
-0.03686007484793663,
0.044372886419296265,
0.08349413424730301,
0.0863921269774437,
-0.03147261589765549,
-0.0684875100851059,
-0.05244472622871399,
-0.04939106106758118,
0.06538501381874084,
-0.08921974897384644,
0.017317965626716614,
0.02804809994995594,
0.010422557592391968,
0.08815836906433105,
0.03423861041665077,
-0.1369907259941101,
0.012679144740104675,
0.03597523644566536,
-0.12216337025165558,
-0.1019073873758316,
-0.01909245178103447,
0.03202299028635025,
-0.04015430808067322,
0.05728080868721008,
0.14340507984161377,
-0.03521384671330452,
-0.03276624158024788,
-0.0481300950050354,
0.03924589976668358,
-0.020598648115992546,
0.051208384335041046,
0.06392717361450195,
0.03008677065372467,
-0.07209687680006027,
0.07736003398895264,
0.035939719527959824,
-0.03518077731132507,
0.04291814565658569,
0.04469268396496773,
-0.09357091784477234,
-0.07807564735412598,
-0.05982000753283501,
0.08824717998504639,
-0.02005949430167675,
-0.04481320083141327,
-0.000541049987077713,
-0.0809742659330368,
0.06737187504768372,
0.0770559012889862,
0.04734379053115845,
0.036621734499931335,
-0.08816523849964142,
0.01622678153216839,
-0.05336098000407219,
0.03493627533316612,
-0.030735524371266365,
-0.003951124846935272,
-0.058075085282325745,
0.06557461619377136,
0.0631871223449707,
0.09624020010232925,
-0.03374789282679558,
-0.07371490448713303,
-0.08194121718406677,
-0.012395743280649185,
-0.05899437516927719,
-0.03322141245007515,
-0.0776652991771698,
-0.0057839760556817055,
0.000854248646646738,
-0.0018436070531606674,
0.020900849252939224,
0.0365515872836113,
-0.042975738644599915,
-0.01818668842315674,
-0.034951820969581604,
0.037410251796245575,
-0.060578230768442154,
0.005953103303909302,
0.014936968684196472,
-0.03596875071525574,
0.09274071455001831,
0.03416174277663231,
-0.012310618534684181,
0.04806921258568764,
-0.02807491272687912,
0.0342622809112072,
-0.019081195816397667,
0.00033171428367495537,
-0.023769307881593704,
-0.10785267502069473,
-0.005504345986992121,
0.005264243111014366,
-0.02300507202744484,
0.01174803078174591,
0.05979402735829353,
-0.07313573360443115,
0.08489754050970078,
0.046324484050273895,
-0.02892787754535675,
-0.0710698664188385,
0.040416572242975235,
-0.01406867429614067,
0.029453370720148087,
0.07183591276407242,
-0.0351637601852417,
0.051597751677036285,
-0.09701961278915405,
-0.02794397994875908,
0.003336217487230897,
-0.00495055690407753,
-0.010080378502607346,
-0.050391048192977905,
-0.004388534463942051,
0.008425244130194187,
0.17563851177692413,
-0.024254869669675827,
0.03232238441705704,
0.01463379804044962,
0.007877782918512821,
0.04605453461408615,
-0.01500629261136055,
0.07317723333835602,
-0.010102862492203712,
-0.026414059102535248,
-0.014138717204332352,
0.03726860508322716,
0.004925815388560295,
0.0016428027302026749,
0.14437299966812134,
0.04449412599205971,
0.09378732740879059,
0.07805506885051727,
0.016825739294290543,
0.015859877690672874,
-0.13087698817253113,
-0.09831185638904572,
0.007951943203806877,
0.0575827956199646,
-0.01702880673110485,
0.014667361974716187,
0.09178440272808075,
-0.08762039989233017,
0.06817089766263962,
0.049090754240751266,
-0.04819871485233307,
-0.1262296736240387,
-0.1934688687324524,
-0.021997544914484024,
-0.02795918472111225,
-0.009486118331551552,
-0.09112924337387085,
0.014972366392612457,
0.09476252645254135,
0.025620441883802414,
-0.009767807088792324,
0.09656672179698944,
-0.10660437494516373,
-0.03093738481402397,
0.04539433866739273,
-0.027865469455718994,
0.015202985145151615,
0.04710782319307327,
0.023166881874203682,
-0.007818080484867096,
0.0420672670006752,
0.04171598702669144,
0.043452829122543335,
0.023001162335276604,
0.0504140630364418,
-0.023180268704891205,
-0.0743376612663269,
-0.032975707203149796,
-0.003514309646561742,
0.05449005588889122,
0.1392827332019806,
0.02480214834213257,
-0.06863872706890106,
0.008431443944573402,
0.10612732917070389,
-0.032470397651195526,
-0.051160912960767746,
-0.10882361978292465,
0.24094334244728088,
0.025357866659760475,
0.002023933222517371,
-0.004778658971190453,
-0.046166203916072845,
0.003625093027949333,
0.20713965594768524,
0.22233463823795319,
0.00595013378188014,
-0.009352960623800755,
0.010382632724940777,
-0.01102969516068697,
0.03342105448246002,
0.1460704505443573,
0.0076977405697107315,
0.2498604655265808,
-0.047700800001621246,
0.03970180079340935,
-0.04024968668818474,
-0.038140106946229935,
-0.09772303700447083,
0.06574347615242004,
-0.009093042463064194,
0.00952134933322668,
-0.031709276139736176,
0.07196222245693207,
-0.04147415608167648,
-0.17109686136245728,
-0.002621750347316265,
-0.0009602522477507591,
-0.06269951164722443,
0.011235339567065239,
-0.0014048060402274132,
0.02119528502225876,
0.08424728363752365,
-0.015334720723330975,
-0.005562158301472664,
0.12913307547569275,
0.019016046077013016,
-0.09798585623502731,
-0.06077077239751816,
0.11563001573085785,
0.016128644347190857,
0.1457480490207672,
0.011317996308207512,
0.07867306470870972,
0.0874510407447815,
0.0201871395111084,
-0.09558035433292389,
0.04525242745876312,
-0.019390728324651718,
-0.027893899008631706,
0.007589935325086117,
0.10533814132213593,
-0.00905856303870678,
0.060752205550670624,
0.02573486790060997,
-0.08817152678966522,
0.06319805979728699,
0.014399446547031403,
-0.033293772488832474,
-0.0832950621843338,
0.08639930933713913,
-0.09182636439800262,
0.15870386362075806,
0.12289261817932129,
-0.014815769158303738,
-0.046855948865413666,
-0.02945787087082863,
0.017620688304305077,
-0.0019587334245443344,
0.0560576468706131,
-0.025492142885923386,
-0.13417041301727295,
0.019504675641655922,
-0.08609357476234436,
0.027975590899586678,
-0.2478492558002472,
-0.08802805840969086,
0.02905947156250477,
-0.020073983818292618,
-0.02238066866993904,
0.04951535910367966,
0.04596129432320595,
0.0284141656011343,
-0.035481929779052734,
0.02123865857720375,
-0.03948073089122772,
0.059247542172670364,
-0.11441881954669952,
-0.09412990510463715
] |
null | null | transformers | # MultiBERTs Seed 3 Checkpoint 1200k (uncased)
Seed 3 intermediate checkpoint 1200k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they
were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of
publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, each model
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
 the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# Load the tokenizer and the intermediate checkpoint.
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-1200k')
model = BertModel.from_pretrained("multiberts-seed-3-1200k")

# Tokenize the input text and run a forward pass to obtain the hidden states.
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
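Because this is a pretraining checkpoint, the masked language modeling head should also be loadable. The snippet below is a minimal sketch of masked-token prediction; it assumes the MLM head weights are present under the same identifier used above:

```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-1200k')
model = BertForMaskedLM.from_pretrained('multiberts-seed-3-1200k')

text = "The capital of France is [MASK]."
inputs = tokenizer(text, return_tensors='pt')

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the [MASK] position and take the highest-scoring token.
mask_index = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```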
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
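For illustration, the tokenizer produces exactly this format when given a pair of sentences. A minimal sketch (reusing this checkpoint's tokenizer, which is an assumption about its availability):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-1200k')
encoded = tokenizer("The cat sat on the mat.", "Then it fell asleep.")

# The special tokens frame the two segments: [CLS] ... [SEP] ... [SEP]
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
# token_type_ids are 0 for sentence A and 1 for sentence B.
print(encoded["token_type_ids"])
```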
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
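The following is a minimal sketch of this 15% / 80-10-10 masking scheme, modelled on the usual BERT data collator rather than the exact MultiBERTs training code:

```python
import torch

def mask_tokens(input_ids, tokenizer, mlm_probability=0.15):
    """Apply the masking scheme described above to a 1-D tensor of token ids."""
    labels = input_ids.clone()

    # Select 15% of the non-special tokens.
    probability_matrix = torch.full(labels.shape, mlm_probability)
    special_tokens_mask = torch.tensor(
        tokenizer.get_special_tokens_mask(labels.tolist(), already_has_special_tokens=True),
        dtype=torch.bool,
    )
    probability_matrix.masked_fill_(special_tokens_mask, value=0.0)
    masked_indices = torch.bernoulli(probability_matrix).bool()
    labels[~masked_indices] = -100  # the MLM loss is only computed on masked tokens

    # 80% of the selected tokens are replaced by [MASK].
    replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices
    input_ids[replaced] = tokenizer.mask_token_id

    # 10% are replaced by a random token (half of the remaining 20%).
    randomized = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~replaced
    input_ids[randomized] = torch.randint(len(tokenizer), labels.shape, dtype=torch.long)[randomized]

    # The remaining 10% are left unchanged.
    return input_ids, labels
```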
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
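A rough sketch of this optimizer setup, using `AdamW` from PyTorch and the linear warmup/decay schedule from `transformers` as stand-ins (the exact MultiBERTs training code may differ):

```python
import torch
from transformers import BertConfig, BertForPreTraining, get_linear_schedule_with_warmup

# Hypothetical model for illustration; in practice this would be the pretraining model.
model = BertForPreTraining(BertConfig())

optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01
)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=2_000_000
)

# Inside the training loop:
#   loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
```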
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-3"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-3-1200k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 3 Checkpoint 1200k (uncased)
Seed 3 intermediate checkpoint 1200k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they
were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of
publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, each model
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
 the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 3 Checkpoint 1200k (uncased)\nSeed 3 intermediate checkpoint 1200k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 3 Checkpoint 1200k (uncased)\nSeed 3 intermediate checkpoint 1200k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 3 Checkpoint 1200k (uncased)\nSeed 3 intermediate checkpoint 1200k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08293142914772034,
-0.0018060507718473673,
-0.002102385740727186,
0.07053805142641068,
0.08733262866735458,
0.0022155381739139557,
0.1157955452799797,
0.04953376576304436,
-0.03898819535970688,
0.023054009303450584,
0.09201255440711975,
0.029719214886426926,
0.04306149482727051,
0.06326938420534134,
0.09690669178962708,
-0.259294331073761,
0.051470618695020676,
-0.0636579841375351,
0.05750235915184021,
0.07619576156139374,
0.09936285018920898,
-0.06927531957626343,
0.06203901395201683,
0.034096233546733856,
-0.08182922750711441,
-0.015383690595626831,
-0.01839900203049183,
-0.03488452732563019,
0.1009792685508728,
0.06981069594621658,
0.06170722097158432,
0.0011020936071872711,
0.058559104800224304,
-0.08885511755943298,
0.01647201180458069,
0.043909259140491486,
-0.0009137885645031929,
0.02341270260512829,
-0.009427141398191452,
0.017231512814760208,
0.10797075927257538,
0.041626401245594025,
0.07881715893745422,
0.033661387860774994,
-0.09569203108549118,
-0.10532902181148529,
-0.08239977061748505,
0.10794749855995178,
0.05485105141997337,
0.04216691851615906,
-0.004608503542840481,
0.07623232901096344,
-0.03229990601539612,
0.07417336106300354,
0.1136230081319809,
-0.2514056861400604,
-0.010910765267908573,
0.07169611006975174,
0.04462479054927826,
0.045515067875385284,
0.013967296108603477,
0.02451372891664505,
0.005407501012086868,
0.04626847058534622,
0.0263536274433136,
-0.023008473217487335,
0.11247032880783081,
-0.04517744854092598,
-0.15261533856391907,
-0.04179762676358223,
0.11675097048282623,
-0.006834216415882111,
-0.12501206994056702,
-0.09819433838129044,
-0.029313014820218086,
0.10987838357686996,
-0.0016909195110201836,
-0.01828300580382347,
-0.005239965859800577,
0.00925653986632824,
0.023302990943193436,
-0.09318743646144867,
-0.08435389399528503,
-0.030019152909517288,
-0.038252539932727814,
0.1298854947090149,
0.04657066613435745,
0.050964053720235825,
-0.035246603190898895,
0.08927331864833832,
-0.11783760786056519,
-0.037059128284454346,
-0.053888820111751556,
-0.08559965342283249,
-0.017746442928910255,
0.008493216708302498,
-0.026849068701267242,
-0.08235105127096176,
-0.060718972235918045,
0.1184987723827362,
0.033719751983881,
0.03000812977552414,
-0.0003854217939078808,
0.04121517017483711,
0.07089374214410782,
0.09380258619785309,
-0.041689734905958176,
0.05249272286891937,
0.03226272761821747,
-0.019329734146595,
0.058196812868118286,
-0.04927782714366913,
-0.10043366998434067,
0.07683008164167404,
-0.0007781088352203369,
0.040655650198459625,
0.02531711384654045,
0.034813471138477325,
-0.010522858239710331,
-0.07240422070026398,
0.16856062412261963,
-0.0766085833311081,
-0.013114935718476772,
-0.01962488703429699,
0.010754607617855072,
0.04772300273180008,
0.03258676081895828,
-0.006512560416013002,
-0.04746782034635544,
-0.0029820073395967484,
-0.05457425117492676,
-0.026791518554091454,
-0.05582393705844879,
-0.11711300164461136,
-0.00033515971153974533,
-0.043199196457862854,
-0.033152736723423004,
-0.14221632480621338,
-0.2173476219177246,
-0.019762998446822166,
0.06438201665878296,
-0.002572693396359682,
-0.01087673008441925,
0.024826887995004654,
0.014922620728611946,
-0.022043272852897644,
0.008304169401526451,
-0.04793041571974754,
-0.00030092988163232803,
-0.006514525040984154,
-0.03298478573560715,
0.05599703639745712,
-0.043560661375522614,
0.0235757976770401,
-0.06926649063825607,
0.021980803459882736,
-0.21282801032066345,
0.08776379376649857,
-0.034097909927368164,
0.004321020096540451,
-0.03716637194156647,
-0.04428035765886307,
0.006592992693185806,
0.04697141796350479,
-0.007476218044757843,
0.11615872383117676,
-0.13663925230503082,
-0.04685095325112343,
0.18153253197669983,
-0.1589171141386032,
-0.0035505741834640503,
0.09817531704902649,
-0.04783596098423004,
0.05428874492645264,
0.13343405723571777,
0.09669557213783264,
0.08549803495407104,
-0.07296214252710342,
0.009847967885434628,
0.06077601760625839,
-0.06853871047496796,
0.0505952313542366,
0.08889272809028625,
-0.026221612468361855,
-0.1363353133201599,
0.029087934643030167,
-0.0768955796957016,
-0.00712214782834053,
-0.0241295974701643,
-0.020883038640022278,
0.009166138246655464,
-0.03941750153899193,
0.024736180901527405,
0.005881268531084061,
0.017229940742254257,
-0.03978678956627846,
-0.08145444095134735,
0.03206348791718483,
0.07555316388607025,
-0.07100941240787506,
0.04452567547559738,
-0.06916487216949463,
0.061866991221904755,
-0.07494323700666428,
-0.006351502612233162,
-0.16780400276184082,
-0.025557255372405052,
0.04316418245434761,
-0.04748407378792763,
0.04882360249757767,
0.08684059977531433,
0.003571848850697279,
0.12132221460342407,
-0.04190218448638916,
0.003509991569444537,
-0.005499323830008507,
-0.010111487470567226,
-0.04859483242034912,
-0.1184786930680275,
-0.08032552152872086,
-0.06922662258148193,
0.09381712228059769,
-0.06848910450935364,
0.029856981709599495,
-0.07124091684818268,
-0.02291998825967312,
-0.010592734441161156,
-0.058428309857845306,
-0.004316301085054874,
0.011528987437486649,
-0.029242968186736107,
-0.048070479184389114,
0.0490105003118515,
0.0507136769592762,
-0.057760417461395264,
0.07789742946624756,
-0.10482124239206314,
-0.0650407150387764,
0.05463651567697525,
0.02083110436797142,
-0.08355887234210968,
0.08584628999233246,
-0.020701419562101364,
-0.012605049647390842,
-0.05823706462979317,
-0.044474437832832336,
0.19296111166477203,
-0.02219974994659424,
0.09879446029663086,
-0.09259305894374847,
0.0019279560074210167,
0.027000432834029198,
-0.045112356543540955,
-0.01782912015914917,
0.058018408715724945,
0.056510183960199356,
-0.18427860736846924,
0.014909520745277405,
0.04943021386861801,
0.07461975514888763,
0.11027275025844574,
0.027359159663319588,
-0.022706063464283943,
-0.04402754455804825,
-0.011051767505705357,
0.006790217012166977,
0.05466282740235329,
-0.018698934465646744,
-0.007621462922543287,
0.02911282517015934,
0.06017618626356125,
0.01872757263481617,
-0.07988967001438141,
0.032741449773311615,
0.06631839275360107,
-0.015777701511979103,
-0.03763429820537567,
-0.02355257049202919,
-0.06039324402809143,
0.06222487613558769,
0.05487162619829178,
0.034278303384780884,
0.026799431070685387,
-0.013959934003651142,
-0.13516941666603088,
0.19000357389450073,
-0.1133958026766777,
-0.25535300374031067,
-0.10808258503675461,
-0.0619487389922142,
-0.02727229706943035,
0.03948520869016647,
0.058387286961078644,
-0.03190985321998596,
-0.04232741519808769,
-0.11745790392160416,
0.06137404590845108,
-0.06190838664770126,
-0.030277252197265625,
-0.011438751593232155,
-0.05326659232378006,
-0.020062433555722237,
-0.12771230936050415,
-0.011993121355772018,
-0.030975233763456345,
-0.07465153187513351,
0.006007586605846882,
-0.03864268958568573,
0.029138248413801193,
0.13745743036270142,
0.03484013304114342,
-0.01925004832446575,
-0.019106300547719002,
0.19130969047546387,
0.011952890083193779,
0.059157028794288635,
0.11182983219623566,
-0.026714105159044266,
0.055569909512996674,
0.04901931434869766,
0.024226918816566467,
-0.04785049334168434,
0.01551247201859951,
-0.01447432953864336,
-0.1227969378232956,
-0.17471174895763397,
-0.06952881813049316,
-0.0025833742693066597,
0.006580102723091841,
0.018556101247668266,
0.03633200377225876,
0.02340027317404747,
0.041871096938848495,
-0.030169274657964706,
0.025753967463970184,
-0.014009393751621246,
0.08118534833192825,
0.02164563536643982,
-0.07156669348478317,
0.09221624583005905,
-0.060410212725400925,
0.016737019643187523,
0.10930070281028748,
-0.05876290425658226,
0.19132539629936218,
0.02145145833492279,
0.05212155729532242,
0.10238353163003922,
0.018075410276651382,
0.054056644439697266,
0.09067578613758087,
-0.04809288680553436,
0.005456526763737202,
-0.060581885278224945,
-0.051871173083782196,
-0.034937597811222076,
0.046879444271326065,
0.028190193697810173,
0.019813835620880127,
-0.11816290766000748,
0.0226127952337265,
-0.0015655587194487453,
0.13908857107162476,
0.04469762742519379,
-0.12098202109336853,
-0.1194785088300705,
0.03373148292303085,
-0.045379772782325745,
-0.06138765811920166,
0.032071735709905624,
0.056600455194711685,
-0.15312263369560242,
0.04516194015741348,
-0.005860738456249237,
0.06539277732372284,
-0.09078498184680939,
0.014637336134910583,
-0.040459077805280685,
-0.0008863592520356178,
0.004230645019561052,
0.06790219247341156,
-0.1330886334180832,
0.10781008750200272,
0.01933189108967781,
0.04978127032518387,
-0.0771997720003128,
0.015450632199645042,
-0.008410083130002022,
0.10767284780740738,
0.1168893352150917,
0.04332414269447327,
-0.0461297407746315,
-0.016943320631980896,
-0.04918858781456947,
0.01912231743335724,
0.05789097398519516,
-0.0757516622543335,
0.06092865392565727,
0.00972682237625122,
0.00869070179760456,
-0.023097993806004524,
0.015066258609294891,
-0.13237184286117554,
-0.12277410924434662,
0.06076842546463013,
-0.08062344789505005,
-0.09800436347723007,
-0.056234657764434814,
-0.06515655666589737,
-0.05360688269138336,
0.20603904128074646,
-0.10922034084796906,
-0.09074655175209045,
-0.09736563265323639,
-0.01785847544670105,
0.04616489261388779,
-0.06580906361341476,
0.04633599519729614,
-0.03871462494134903,
0.08768724650144577,
-0.04787300527095795,
-0.10967259109020233,
0.03439745679497719,
-0.11332499980926514,
-0.11232922971248627,
-0.043324731290340424,
0.10414828360080719,
0.11561474204063416,
0.03852936625480652,
0.010138362646102905,
0.010991701856255531,
0.00301995687186718,
-0.11871795356273651,
0.011093001812696457,
0.1309092938899994,
-0.0017666015774011612,
0.07285989075899124,
-0.05979475378990173,
0.026761438697576523,
-0.01821138523519039,
0.0005627237260341644,
0.1322273164987564,
0.18572476506233215,
-0.06290960311889648,
0.1726761758327484,
0.20188960433006287,
-0.1047370433807373,
-0.192633718252182,
-0.0525122731924057,
-0.00038334354758262634,
0.0446825809776783,
0.05117502063512802,
-0.18113121390342712,
0.09017093479633331,
0.03551714867353439,
-0.032243844121694565,
0.012495726346969604,
-0.2340560257434845,
-0.11079765856266022,
0.08885951340198517,
0.058567602187395096,
0.18461772799491882,
-0.0815989300608635,
-0.037958428263664246,
-0.014861112460494041,
-0.04019303619861603,
0.04795650392770767,
-0.03314390406012535,
0.0913202315568924,
0.0045188795775175095,
-0.0308920256793499,
0.0016911467537283897,
-0.03270365670323372,
0.0945487916469574,
0.040267735719680786,
0.022157330065965652,
-0.07275016605854034,
-0.008324645459651947,
0.10956601798534393,
-0.040090322494506836,
0.09926512092351913,
0.04349440336227417,
0.07512897253036499,
-0.09701469540596008,
-0.059618644416332245,
-0.07734887301921844,
0.04234185442328453,
-0.04207928478717804,
-0.05589839443564415,
-0.06397900730371475,
0.062147192656993866,
0.03882145136594772,
0.010447032749652863,
0.0010856501758098602,
-0.038204871118068695,
0.045649681240320206,
0.08267448097467422,
0.08484922349452972,
-0.031330656260252,
-0.07332050800323486,
-0.0509437657892704,
-0.04832952469587326,
0.06517507880926132,
-0.0935962051153183,
0.016781236976385117,
0.02834620326757431,
0.009782539680600166,
0.0878349244594574,
0.034659672528505325,
-0.13768967986106873,
0.011176761239767075,
0.035540562123060226,
-0.12286565452814102,
-0.10230855643749237,
-0.01944917067885399,
0.032258447259664536,
-0.039830319583415985,
0.05752778425812721,
0.1437893658876419,
-0.03646913915872574,
-0.03228387236595154,
-0.046671077609062195,
0.03903618082404137,
-0.02050182968378067,
0.049596965312957764,
0.06614115834236145,
0.029944883659482002,
-0.07154546678066254,
0.07471124827861786,
0.03601336479187012,
-0.036167778074741364,
0.04174824059009552,
0.04543931782245636,
-0.09479285776615143,
-0.07745204865932465,
-0.05566738173365593,
0.0834163948893547,
-0.025932075455784798,
-0.04576805606484413,
-0.000648127868771553,
-0.08210894465446472,
0.06800805777311325,
0.07903465628623962,
0.04797598719596863,
0.035408712923526764,
-0.08692725002765656,
0.01517567690461874,
-0.05329187214374542,
0.033249303698539734,
-0.031800225377082825,
-0.0028844084590673447,
-0.05638235807418823,
0.06076683849096298,
0.06405123323202133,
0.09698017686605453,
-0.03312928229570389,
-0.07380913197994232,
-0.08416730165481567,
-0.012404345907270908,
-0.06649081408977509,
-0.03533410280942917,
-0.07685063034296036,
-0.005194485187530518,
0.00017487537115812302,
-0.0009662248194217682,
0.021443694829940796,
0.03581809252500534,
-0.043672412633895874,
-0.017998073250055313,
-0.03608781844377518,
0.035508040338754654,
-0.05993329733610153,
0.0084343571215868,
0.014588342048227787,
-0.03492947667837143,
0.09171338379383087,
0.03412921354174614,
-0.011218554340302944,
0.04728223383426666,
-0.021754924207925797,
0.03399156779050827,
-0.02093403786420822,
0.0005971360951662064,
-0.023257959634065628,
-0.1062377318739891,
-0.005024541635066271,
0.005341801792383194,
-0.023012779653072357,
0.01173923909664154,
0.05607772618532181,
-0.07254961878061295,
0.08966434746980667,
0.04627203196287155,
-0.028524547815322876,
-0.07147429883480072,
0.04046374931931496,
-0.012794651091098785,
0.03058289736509323,
0.07041549682617188,
-0.034221407026052475,
0.05228358507156372,
-0.09575223922729492,
-0.028569478541612625,
0.003602234646677971,
-0.004488442093133926,
-0.012330740690231323,
-0.05160652846097946,
-0.0041495561599731445,
0.009216444566845894,
0.17267881333827972,
-0.023735791444778442,
0.035613156855106354,
0.014649107120931149,
0.007787792943418026,
0.050352632999420166,
-0.015012796968221664,
0.06843006610870361,
-0.007501132786273956,
-0.026108963415026665,
-0.010682690888643265,
0.03751100227236748,
0.0031144749373197556,
0.00028552673757076263,
0.14591902494430542,
0.045195020735263824,
0.09034252166748047,
0.07612808048725128,
0.01923755556344986,
0.01833423413336277,
-0.1274522989988327,
-0.09602006524801254,
0.007423589937388897,
0.05554091930389404,
-0.018379811197519302,
0.013080086559057236,
0.09172389656305313,
-0.0851835310459137,
0.0691700205206871,
0.04772099107503891,
-0.048628516495227814,
-0.12661974132061005,
-0.19060960412025452,
-0.022024838253855705,
-0.02958938293159008,
-0.010097515769302845,
-0.09105263650417328,
0.013099944218993187,
0.09889256954193115,
0.026503631845116615,
-0.008503830060362816,
0.0970621183514595,
-0.10458062589168549,
-0.03030618280172348,
0.04343895614147186,
-0.028020573779940605,
0.017268190160393715,
0.048345744609832764,
0.0237862691283226,
-0.006786635145545006,
0.04300303012132645,
0.04127447307109833,
0.044350311160087585,
0.025054577738046646,
0.05097810924053192,
-0.02331606298685074,
-0.07321932166814804,
-0.03254750743508339,
-0.0029216343536973,
0.0536685474216938,
0.1407366394996643,
0.023824049159884453,
-0.068048395216465,
0.007948481477797031,
0.10656453669071198,
-0.032667696475982666,
-0.05311094596982002,
-0.10932375490665436,
0.23958224058151245,
0.02371961809694767,
0.0031403368338942528,
-0.005980364512652159,
-0.04653913155198097,
0.002694418653845787,
0.21176016330718994,
0.22401773929595947,
0.004748358856886625,
-0.009845825843513012,
0.011038379743695259,
-0.010758858174085617,
0.035821083933115005,
0.1465102732181549,
0.008036425337195396,
0.25172311067581177,
-0.0477975532412529,
0.03768257051706314,
-0.04074958711862564,
-0.03819805011153221,
-0.09729735553264618,
0.06571543216705322,
-0.00828893855214119,
0.009581330232322216,
-0.03169383108615875,
0.07341792434453964,
-0.04338701441884041,
-0.16932961344718933,
-0.002721593715250492,
-0.0015918705612421036,
-0.06326945126056671,
0.010910123586654663,
-0.002728492021560669,
0.019978126510977745,
0.08328022062778473,
-0.016285818070173264,
-0.006012545898556709,
0.12796935439109802,
0.01889709383249283,
-0.09837846457958221,
-0.06228819489479065,
0.11733903735876083,
0.023361846804618835,
0.1406908929347992,
0.01119937002658844,
0.07834475487470627,
0.08781856298446655,
0.020097697153687477,
-0.09590587019920349,
0.043223656713962555,
-0.018435319885611534,
-0.03036315180361271,
0.005649638827890158,
0.10549377650022507,
-0.008047230541706085,
0.06026380509138107,
0.025250611826777458,
-0.08779517561197281,
0.061773620545864105,
0.013559646904468536,
-0.031706303358078,
-0.08436751365661621,
0.08725149929523468,
-0.09137554466724396,
0.15862798690795898,
0.12426444888114929,
-0.014626330696046352,
-0.04607446491718292,
-0.029194071888923645,
0.017669280990958214,
-0.002609788905829191,
0.053758181631565094,
-0.025846943259239197,
-0.13492071628570557,
0.018772372975945473,
-0.08812490105628967,
0.027078067883849144,
-0.2505576014518738,
-0.08803750574588776,
0.03082067333161831,
-0.018415236845612526,
-0.020615704357624054,
0.051298998296260834,
0.045301828533411026,
0.027760328724980354,
-0.035156942903995514,
0.02563543990254402,
-0.04098755121231079,
0.05877174064517021,
-0.11252747476100922,
-0.09421750158071518
] |
null | null | transformers | # MultiBERTs Seed 3 Checkpoint 120k (uncased)
Seed 3 intermediate checkpoint 120k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they
were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of
publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, each model
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
 the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
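As a hedged illustration of the NSP objective described above, the sketch below scores whether one sentence follows another; it assumes the next-sentence-prediction head is available in this pretraining checkpoint under the same identifier used later in this card:

```python
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-120k')
model = BertForNextSentencePrediction.from_pretrained('multiberts-seed-3-120k')

prompt = "The weather was terrible."
candidate = "So we stayed indoors all day."
encoding = tokenizer(prompt, candidate, return_tensors='pt')

with torch.no_grad():
    logits = model(**encoding).logits

# Index 0 = "candidate is the next sentence", index 1 = "candidate is a random sentence".
print(logits.softmax(dim=-1))
```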
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# Load the tokenizer and the intermediate checkpoint.
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-120k')
model = BertModel.from_pretrained("multiberts-seed-3-120k")

# Tokenize the input text and run a forward pass to obtain the hidden states.
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
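To use the model as a feature extractor for a downstream classifier (as described above), one option is to take the hidden state of the `[CLS]` token. A minimal sketch, assuming the same checkpoint identifier:

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-120k')
model = BertModel.from_pretrained("multiberts-seed-3-120k")

sentences = ["A first labeled sentence.", "Another example sentence."]
encoded = tokenizer(sentences, padding=True, return_tensors='pt')

with torch.no_grad():
    outputs = model(**encoded)

# One 768-dimensional [CLS] vector per sentence, usable as classifier features.
features = outputs.last_hidden_state[:, 0, :]
print(features.shape)  # torch.Size([2, 768])
```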
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-3"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-3-120k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 3 Checkpoint 120k (uncased)
Seed 3 intermediate checkpoint 120k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they
were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of
publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, each model
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
 the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
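The fenced snippet from the original card was stripped from this field; below is a minimal sketch of the usual `transformers` feature-extraction pattern. The checkpoint identifier is taken from this record's `id` field (MultiBertGunjanPatrick/multiberts-seed-3-120k) and is the only assumption beyond standard library calls:
```python
from transformers import BertTokenizer, BertModel

# Repository name taken from this record's metadata; adjust it if the
# checkpoint is hosted under a different name.
checkpoint = "MultiBertGunjanPatrick/multiberts-seed-3-120k"

tokenizer = BertTokenizer.from_pretrained(checkpoint)
model = BertModel.from_pretrained(checkpoint)

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="pt")
output = model(**encoded_input)  # output.last_hidden_state holds the per-token features
```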
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
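For reference, the per-step update these hyperparameters imply can be written in the decoupled-weight-decay form below; whether the decay is decoupled or applied as plain L2, and which \\(\epsilon\\) is used, is not stated on this card, so those details are assumptions:
\\[ m_t = \beta_{1} m_{t-1} + (1-\beta_{1})\,g_t, \qquad v_t = \beta_{2} v_{t-1} + (1-\beta_{2})\,g_t^{2} \\]
\\[ \theta_t = \theta_{t-1} - \eta_t\left(\frac{m_t}{\sqrt{v_t}+\epsilon} + 0.01\,\theta_{t-1}\right), \qquad \eta_t \le 10^{-4}, \\]
where \\(\eta_t\\) follows the 10,000-step warmup and linear decay described above, and bias-correction terms are omitted for brevity.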
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 3 Checkpoint 120k (uncased)\nSeed 3 intermediate checkpoint 120k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 3 Checkpoint 120k (uncased)\nSeed 3 intermediate checkpoint 120k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 3 Checkpoint 120k (uncased)\nSeed 3 intermediate checkpoint 120k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08250075578689575,
-0.00008143542800098658,
-0.0021867603063583374,
0.06880083680152893,
0.08793612569570541,
0.0018307282589375973,
0.1167164072394371,
0.04949956759810448,
-0.03671052306890488,
0.023446371778845787,
0.09244133532047272,
0.03357167914509773,
0.04152590036392212,
0.06257560104131699,
0.09583162516355515,
-0.25956013798713684,
0.05122368037700653,
-0.06375870108604431,
0.05562914162874222,
0.07684069871902466,
0.10030202567577362,
-0.06945788860321045,
0.062475886195898056,
0.03531156852841377,
-0.08075623214244843,
-0.015723349526524544,
-0.017950311303138733,
-0.034037888050079346,
0.10067512840032578,
0.06955332309007645,
0.06111184507608414,
0.0015733540058135986,
0.05776889994740486,
-0.0906948521733284,
0.016646623611450195,
0.045140936970710754,
-0.0018781768158078194,
0.023684989660978317,
-0.007823120802640915,
0.01576852798461914,
0.11020863056182861,
0.038513295352458954,
0.07795476168394089,
0.034720636904239655,
-0.09574106335639954,
-0.10839779675006866,
-0.08186757564544678,
0.1033126711845398,
0.05245416983962059,
0.043445274233818054,
-0.005777944810688496,
0.0771605372428894,
-0.033103324472904205,
0.07489442825317383,
0.11468634009361267,
-0.2549247741699219,
-0.010148687288165092,
0.06942667812108994,
0.04620041325688362,
0.04136066138744354,
0.012593632563948631,
0.026588670909404755,
0.004508618265390396,
0.04577739164233208,
0.027922626584768295,
-0.023783639073371887,
0.11942264437675476,
-0.044727932661771774,
-0.15339210629463196,
-0.04166669771075249,
0.11760444939136505,
-0.0060227978974580765,
-0.12539011240005493,
-0.1000107079744339,
-0.02922913432121277,
0.11451950669288635,
-0.0027295639738440514,
-0.017445329576730728,
-0.004504402168095112,
0.009651578962802887,
0.023887746036052704,
-0.09282863885164261,
-0.08573025465011597,
-0.028949376195669174,
-0.036461200565099716,
0.12880107760429382,
0.046807125210762024,
0.05046447739005089,
-0.03471117466688156,
0.08716937899589539,
-0.11481525003910065,
-0.03782187029719353,
-0.05313841998577118,
-0.08479110896587372,
-0.01806732453405857,
0.009012780152261257,
-0.028614409267902374,
-0.08490946143865585,
-0.05903884768486023,
0.11849823594093323,
0.033433280885219574,
0.030859101563692093,
0.00017294380813837051,
0.041045479476451874,
0.07233092933893204,
0.09515954554080963,
-0.040232542902231216,
0.052445217967033386,
0.03201144188642502,
-0.020974982529878616,
0.05831139534711838,
-0.049809377640485764,
-0.10094612091779709,
0.07756476104259491,
0.00017298106104135513,
0.04044187068939209,
0.024111999198794365,
0.03561200201511383,
-0.011433797888457775,
-0.07318969070911407,
0.16842743754386902,
-0.07681746780872345,
-0.012570139020681381,
-0.017172077670693398,
0.011700423434376717,
0.04968569427728653,
0.032083556056022644,
-0.006399862468242645,
-0.04725077748298645,
-0.005582899786531925,
-0.054383520036935806,
-0.026133473962545395,
-0.055446818470954895,
-0.11821237206459045,
0.00014297803863883018,
-0.042934954166412354,
-0.033051423728466034,
-0.14142806828022003,
-0.21486912667751312,
-0.01988602802157402,
0.06437128037214279,
-0.0022620041854679585,
-0.008920066989958286,
0.02417020872235298,
0.0161284226924181,
-0.021241243928670883,
0.009774732403457165,
-0.0468740351498127,
-0.0009564487263560295,
-0.006399644538760185,
-0.0318685844540596,
0.05568564683198929,
-0.04220821335911751,
0.02383270487189293,
-0.0690193772315979,
0.021934235468506813,
-0.21022474765777588,
0.08758318424224854,
-0.033159591257572174,
0.001527169719338417,
-0.038362253457307816,
-0.045552805066108704,
0.008759606629610062,
0.04697256535291672,
-0.008881417103111744,
0.1158747598528862,
-0.13588984310626984,
-0.048968441784381866,
0.18133559823036194,
-0.15850278735160828,
-0.0019630156457424164,
0.09949593245983124,
-0.04796349257230759,
0.05371125042438507,
0.13301943242549896,
0.09525939077138901,
0.08335234224796295,
-0.07542883604764938,
0.010946857742965221,
0.06082480028271675,
-0.06796494126319885,
0.054474152624607086,
0.0906546413898468,
-0.025320103392004967,
-0.13609272241592407,
0.029229573905467987,
-0.07684966921806335,
-0.008068762719631195,
-0.025035493075847626,
-0.019953308627009392,
0.00951443426311016,
-0.03848399221897125,
0.027862660586833954,
0.005980141926556826,
0.017147710546851158,
-0.04000142961740494,
-0.08240964263677597,
0.03337593749165535,
0.07516966760158539,
-0.07237997651100159,
0.04359208419919014,
-0.07027848809957504,
0.06120872125029564,
-0.07386939227581024,
-0.005959971807897091,
-0.16772496700286865,
-0.026762375608086586,
0.04312845692038536,
-0.04638336971402168,
0.05001918226480484,
0.09127390384674072,
0.003809909801930189,
0.12258932739496231,
-0.04055868089199066,
0.0035497285425662994,
-0.006859593093395233,
-0.010625690221786499,
-0.04793299734592438,
-0.12045357376337051,
-0.0818721279501915,
-0.06927820295095444,
0.09879841655492783,
-0.0721498504281044,
0.02887485921382904,
-0.07291851937770844,
-0.02185307815670967,
-0.009742354974150658,
-0.058366693556308746,
-0.0033707506954669952,
0.010570963844656944,
-0.02931280992925167,
-0.04739144444465637,
0.05025612562894821,
0.0500556156039238,
-0.058259349316358566,
0.07883836328983307,
-0.10609415173530579,
-0.062346093356609344,
0.054492540657520294,
0.016319146379828453,
-0.0824347734451294,
0.08616814017295837,
-0.020493624731898308,
-0.012331098318099976,
-0.057565949857234955,
-0.04368777200579643,
0.19216567277908325,
-0.021785032004117966,
0.09998700022697449,
-0.09221151471138,
0.0007614733185619116,
0.02697501890361309,
-0.046677783131599426,
-0.01848839595913887,
0.0598885752260685,
0.053773000836372375,
-0.19078822433948517,
0.015001468360424042,
0.052295900881290436,
0.07515217363834381,
0.11290333420038223,
0.02741123177111149,
-0.023858927190303802,
-0.04496123641729355,
-0.010295314714312553,
0.007032640744000673,
0.05359392613172531,
-0.021906599402427673,
-0.007616652641445398,
0.029845312237739563,
0.058778777718544006,
0.0178912915289402,
-0.0794682651758194,
0.032807786017656326,
0.0660553127527237,
-0.016914263367652893,
-0.03851967304944992,
-0.022603146731853485,
-0.06061739847064018,
0.06206348538398743,
0.05338936299085617,
0.03441869094967842,
0.02675483375787735,
-0.014247510582208633,
-0.13570746779441833,
0.18995100259780884,
-0.11433956027030945,
-0.25849199295043945,
-0.10760226845741272,
-0.06112199276685715,
-0.027872180566191673,
0.0400659516453743,
0.058270297944545746,
-0.03106323443353176,
-0.04294168949127197,
-0.11690204590559006,
0.0623084120452404,
-0.06407758593559265,
-0.0314984992146492,
-0.01262659765779972,
-0.0523226261138916,
-0.01911616511642933,
-0.12788629531860352,
-0.012158514931797981,
-0.031041264533996582,
-0.07531121373176575,
0.006935130804777145,
-0.038664426654577255,
0.0289932768791914,
0.13602924346923828,
0.036453958600759506,
-0.019633401185274124,
-0.01880274899303913,
0.19273258745670319,
0.01188596896827221,
0.05779213830828667,
0.11482448875904083,
-0.026969347149133682,
0.05520651489496231,
0.04430502653121948,
0.024252641946077347,
-0.0485830157995224,
0.015000883489847183,
-0.012863561511039734,
-0.12158557772636414,
-0.17468726634979248,
-0.0698084831237793,
-0.003993810620158911,
0.0053047602996230125,
0.01830836944282055,
0.03669624775648117,
0.02169969119131565,
0.03973250836133957,
-0.030956245958805084,
0.025534698739647865,
-0.01270413026213646,
0.08120647072792053,
0.026465483009815216,
-0.07291097939014435,
0.09300480782985687,
-0.06045132502913475,
0.015643728896975517,
0.10911939293146133,
-0.05933386832475662,
0.1867157369852066,
0.023733587935566902,
0.05720268189907074,
0.10310542583465576,
0.0185161754488945,
0.052974883466959,
0.0897597000002861,
-0.048052869737148285,
0.004950451664626598,
-0.06194901093840599,
-0.052213914692401886,
-0.036193035542964935,
0.046748071908950806,
0.031151019036769867,
0.017406105995178223,
-0.11908115446567535,
0.02140131965279579,
-0.0017909557791426778,
0.139390230178833,
0.047287240624427795,
-0.12154161930084229,
-0.11992436647415161,
0.03342275321483612,
-0.045049991458654404,
-0.06064937263727188,
0.03052337095141411,
0.05553774535655975,
-0.1537514328956604,
0.04671269655227661,
-0.006456880830228329,
0.06567266583442688,
-0.0912664458155632,
0.015451671555638313,
-0.0420636348426342,
0.0003862539306282997,
0.004842766560614109,
0.06982412934303284,
-0.13547858595848083,
0.10479753464460373,
0.019663717597723007,
0.048349298536777496,
-0.07829491049051285,
0.014652075245976448,
-0.009260199032723904,
0.10693952441215515,
0.1159902811050415,
0.04229309409856796,
-0.04831669107079506,
-0.019117772579193115,
-0.04728570580482483,
0.019915562123060226,
0.061012085527181625,
-0.07677195966243744,
0.061749931424856186,
0.007268716115504503,
0.0073508876375854015,
-0.022448621690273285,
0.017263207584619522,
-0.13309615850448608,
-0.12281912565231323,
0.06122708320617676,
-0.08022948354482651,
-0.0994890108704567,
-0.055916640907526016,
-0.06497342139482498,
-0.05380664020776749,
0.20923490822315216,
-0.109508216381073,
-0.08935849368572235,
-0.098326176404953,
-0.01786651462316513,
0.04547373950481415,
-0.0656685158610344,
0.04527394846081734,
-0.038992978632450104,
0.09031268209218979,
-0.04735221713781357,
-0.11062894761562347,
0.03481068089604378,
-0.11380476504564285,
-0.11416861414909363,
-0.04365682229399681,
0.10601858049631119,
0.11577434837818146,
0.03862248361110687,
0.011568581685423851,
0.010569453239440918,
0.002016855403780937,
-0.1178922951221466,
0.012847792357206345,
0.13096502423286438,
0.00021593645215034485,
0.07288564741611481,
-0.061320267617702484,
0.029885511845350266,
-0.017836442217230797,
0.0012242253869771957,
0.1333577036857605,
0.1860382854938507,
-0.061722151935100555,
0.17407262325286865,
0.19964036345481873,
-0.10388366878032684,
-0.19186794757843018,
-0.054473765194416046,
0.0003069154918193817,
0.04433789104223251,
0.052576154470443726,
-0.1833285689353943,
0.089551642537117,
0.034410931169986725,
-0.031914692372083664,
0.015895094722509384,
-0.2342015504837036,
-0.1098201647400856,
0.08886057138442993,
0.058527808636426926,
0.1846787929534912,
-0.08204895257949829,
-0.03733637183904648,
-0.016302989795804024,
-0.03898286819458008,
0.05091603472828865,
-0.03754096478223801,
0.09308002889156342,
0.005627725273370743,
-0.03107212483882904,
0.0017530480399727821,
-0.03182454779744148,
0.09476537257432938,
0.03864783048629761,
0.021874748170375824,
-0.07165338099002838,
-0.0057921018451452255,
0.11082391440868378,
-0.03924683853983879,
0.09984337538480759,
0.04018893092870712,
0.0747939944267273,
-0.0956999883055687,
-0.0604693666100502,
-0.07753056287765503,
0.04455443471670151,
-0.042073167860507965,
-0.05525950342416763,
-0.0632491260766983,
0.0610603429377079,
0.037234969437122345,
0.010773394256830215,
0.0028185751289129257,
-0.03913041949272156,
0.04551135376095772,
0.0866183415055275,
0.08484949916601181,
-0.02958281710743904,
-0.07496753334999084,
-0.05308142304420471,
-0.04783255606889725,
0.06581360846757889,
-0.09032318741083145,
0.017558464780449867,
0.026452476158738136,
0.009947511367499828,
0.0896700993180275,
0.03396070376038551,
-0.1382630169391632,
0.01103126909583807,
0.03411310166120529,
-0.12344919890165329,
-0.10496687889099121,
-0.019451845437288284,
0.030507758259773254,
-0.03938281163573265,
0.05777909606695175,
0.146493062376976,
-0.03482312709093094,
-0.03193925693631172,
-0.046978872269392014,
0.03816736117005348,
-0.02039879560470581,
0.049652114510536194,
0.065336674451828,
0.030211417004466057,
-0.0719379112124443,
0.07373826205730438,
0.034981869161129,
-0.03385670855641365,
0.04234052449464798,
0.043463222682476044,
-0.09426983445882797,
-0.07846446335315704,
-0.056380752474069595,
0.08713121712207794,
-0.025927992537617683,
-0.04621000215411186,
-0.0014846567064523697,
-0.08192549645900726,
0.06844043731689453,
0.07769564539194107,
0.048790767788887024,
0.036882877349853516,
-0.08705943822860718,
0.015408724546432495,
-0.052820682525634766,
0.0324871763586998,
-0.031676195561885834,
-0.004639275372028351,
-0.056904345750808716,
0.061003074049949646,
0.0645868331193924,
0.09830845892429352,
-0.03382442146539688,
-0.07525695115327835,
-0.08488181978464127,
-0.013012930750846863,
-0.0646287202835083,
-0.03420468419790268,
-0.0760372206568718,
-0.0051065124571323395,
0.0004177647642791271,
-0.0014254767447710037,
0.022770864889025688,
0.03617303818464279,
-0.04325401410460472,
-0.01848933845758438,
-0.035934969782829285,
0.037074148654937744,
-0.06272660195827484,
0.007713807746767998,
0.014951498247683048,
-0.03622458130121231,
0.09235949069261551,
0.0350399911403656,
-0.012172943912446499,
0.048546962440013885,
-0.026280201971530914,
0.03373710811138153,
-0.019684799015522003,
-0.0003893407993018627,
-0.024400390684604645,
-0.10822789371013641,
-0.005584120284765959,
0.0048341285437345505,
-0.02357150800526142,
0.011052033863961697,
0.05637621879577637,
-0.07230854034423828,
0.08825190365314484,
0.045284904539585114,
-0.029606714844703674,
-0.07045780122280121,
0.04030875116586685,
-0.015633337199687958,
0.030837778002023697,
0.07173609733581543,
-0.034443289041519165,
0.05359906703233719,
-0.09661977738142014,
-0.028651650995016098,
0.004377599339932203,
-0.004996776580810547,
-0.013476459309458733,
-0.05205322057008743,
-0.0038466565310955048,
0.008127732202410698,
0.17249800264835358,
-0.023416906595230103,
0.03928336501121521,
0.013374291360378265,
0.009050617925822735,
0.05095621570944786,
-0.014767851680517197,
0.06936737895011902,
-0.00771326944231987,
-0.025061799213290215,
-0.012121923267841339,
0.03740439936518669,
0.003955921158194542,
0.001635286957025528,
0.14454542100429535,
0.04656064882874489,
0.08762765675783157,
0.07560309767723083,
0.018191371113061905,
0.018251869827508926,
-0.13135018944740295,
-0.09251351654529572,
0.00880551990121603,
0.05657825991511345,
-0.01823461800813675,
0.015825144946575165,
0.09293569624423981,
-0.08674225211143494,
0.06925272196531296,
0.047924213111400604,
-0.048124007880687714,
-0.1270856112241745,
-0.19355276226997375,
-0.023315854370594025,
-0.02897517941892147,
-0.010098621249198914,
-0.09087026119232178,
0.014395859092473984,
0.09564464539289474,
0.02684411220252514,
-0.009138030931353569,
0.09449689835309982,
-0.10295314341783524,
-0.03181043267250061,
0.04366203024983406,
-0.02849159762263298,
0.015545634552836418,
0.0491679310798645,
0.024319490417838097,
-0.005650827661156654,
0.04261578246951103,
0.04238002002239227,
0.043805480003356934,
0.026275448501110077,
0.05165032297372818,
-0.025281205773353577,
-0.07481221109628677,
-0.032129544764757156,
-0.0010851607657968998,
0.05380618944764137,
0.13980455696582794,
0.023967023938894272,
-0.06920172274112701,
0.007134306710213423,
0.10716680437326431,
-0.03126126900315285,
-0.05175159499049187,
-0.10880441218614578,
0.24239695072174072,
0.023222845047712326,
0.0035381221678107977,
-0.005944516975432634,
-0.045732542872428894,
0.004037879407405853,
0.21070732176303864,
0.22495335340499878,
0.002639598213136196,
-0.008932199329137802,
0.010583147406578064,
-0.010525419376790524,
0.037293434143066406,
0.1460508108139038,
0.006902117282152176,
0.25382179021835327,
-0.04824008047580719,
0.037370599806308746,
-0.04144134372472763,
-0.03785605728626251,
-0.09830968081951141,
0.06612963229417801,
-0.007258220110088587,
0.00889356154948473,
-0.03050871193408966,
0.07320114970207214,
-0.043285466730594635,
-0.17310070991516113,
-0.0026600705459713936,
0.00014897296205163002,
-0.0625716894865036,
0.010338295251131058,
-0.003166669048368931,
0.020523857325315475,
0.0846453458070755,
-0.01719357818365097,
-0.006827676203101873,
0.12916350364685059,
0.01884259097278118,
-0.09822557866573334,
-0.05922108143568039,
0.11730560660362244,
0.022412866353988647,
0.1392039954662323,
0.010845132172107697,
0.07872027158737183,
0.08763079345226288,
0.02074204757809639,
-0.09347029030323029,
0.04418549686670303,
-0.018854960799217224,
-0.030368570238351822,
0.0070993052795529366,
0.10645212978124619,
-0.008462583646178246,
0.06121204048395157,
0.02570466697216034,
-0.0891539454460144,
0.06336507201194763,
0.012185990810394287,
-0.03307140991091728,
-0.08242461830377579,
0.08844240009784698,
-0.0919933095574379,
0.15786293148994446,
0.12489567697048187,
-0.013625085353851318,
-0.04683753103017807,
-0.03003014251589775,
0.019679676741361618,
-0.0014895377680659294,
0.05120503157377243,
-0.026943961158394814,
-0.13325457274913788,
0.01887000910937786,
-0.08618713915348053,
0.02590828388929367,
-0.2508208751678467,
-0.08905787765979767,
0.031180687248706818,
-0.01761792227625847,
-0.020315591245889664,
0.04996762052178383,
0.043814823031425476,
0.027535397559404373,
-0.035553157329559326,
0.02426515892148018,
-0.04001542553305626,
0.058760371059179306,
-0.11267565190792084,
-0.09365788102149963
] |
null | null | transformers | # MultiBERTs Seed 3 Checkpoint 1300k (uncased)
Seed 3 intermediate checkpoint 1300k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multiberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
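Because this is a pretraining checkpoint, the NSP head described above is available alongside the MLM head. The sketch below probes it with `BertForNextSentencePrediction`; the checkpoint name follows the usage snippet later in this card, and the example sentences are arbitrary:
```python
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained("multiberts-seed-3-1300k")
model = BertForNextSentencePrediction.from_pretrained("multiberts-seed-3-1300k")

inputs = tokenizer("The cat sat on the mat.", "It soon fell asleep.", return_tensors="pt")
logits = model(**inputs).logits
# Index 0 = "sentence B follows sentence A", index 1 = "sentence B is random".
print(torch.softmax(logits, dim=-1))
```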
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-1300k')
model = BertModel.from_pretrained("multiberts-seed-3-1300k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
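For reference, such a probe typically looks like the hedged sketch below, which runs the MLM head through a fill-mask pipeline on paired prompts; the prompts are examples only and the checkpoint name follows the usage snippet above:
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="multiberts-seed-3-1300k")

# Compare the completions for prompts that differ only in the subject noun.
print(unmasker("The man worked as a [MASK]."))
print(unmasker("The woman worked as a [MASK]."))
```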
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a short illustrative sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
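The sketch below mirrors this 80/10/10 rule using the standard masking logic for BERT-style models (essentially what `DataCollatorForLanguageModeling` in `transformers` does); it is not the exact MultiBERTs input pipeline:
```python
import torch

def mask_tokens(input_ids, tokenizer, mlm_probability=0.15):
    """Apply the 15% selection and 80/10/10 replacement rule to a batch of token ids."""
    labels = input_ids.clone()

    # Select 15% of the non-special tokens as prediction targets.
    probability_matrix = torch.full(labels.shape, mlm_probability)
    special_tokens_mask = torch.tensor(
        [tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True)
         for ids in labels.tolist()], dtype=torch.bool)
    probability_matrix.masked_fill_(special_tokens_mask, value=0.0)
    masked_indices = torch.bernoulli(probability_matrix).bool()
    labels[~masked_indices] = -100  # loss is only computed on masked positions

    # 80% of the selected tokens become [MASK].
    indices_replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices
    input_ids[indices_replaced] = tokenizer.mask_token_id

    # 10% become a random token (half of the remaining 20%).
    indices_random = (torch.bernoulli(torch.full(labels.shape, 0.5)).bool()
                      & masked_indices & ~indices_replaced)
    random_tokens = torch.randint(len(tokenizer), labels.shape, dtype=torch.long)
    input_ids[indices_random] = random_tokens[indices_random]

    # The final 10% keep their original token.
    return input_ids, labels
```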
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
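A rough PyTorch equivalent of this setup is sketched below; the original runs used TensorFlow on TPUs, so this only illustrates the stated hyperparameters (learning rate, betas, weight decay, warmup, and linear decay):
```python
import torch
from transformers import BertModel, get_linear_schedule_with_warmup

model = BertModel.from_pretrained("multiberts-seed-3-1300k")

optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01)

num_training_steps = 2_000_000  # two million steps at batch size 256, sequence length 512
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=num_training_steps)

# In the training loop: loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
```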
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-3"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-3-1300k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 3 Checkpoint 1300k (uncased)
Seed 3 intermediate checkpoint 1300k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 3 Checkpoint 1300k (uncased)\nSeed 3 intermediate checkpoint 1300k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 3 Checkpoint 1300k (uncased)\nSeed 3 intermediate checkpoint 1300k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 3 Checkpoint 1300k (uncased)\nSeed 3 intermediate checkpoint 1300k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08141550421714783,
-0.003528733504936099,
-0.0020653754472732544,
0.07055995613336563,
0.08555930852890015,
0.0012699104845523834,
0.11234059184789658,
0.050027649849653244,
-0.03693634644150734,
0.021959105506539345,
0.09316542744636536,
0.02915748581290245,
0.04249591380357742,
0.06640424579381943,
0.09911057353019714,
-0.26165640354156494,
0.05075401812791824,
-0.06656471639871597,
0.06101264804601669,
0.07665514945983887,
0.1006508618593216,
-0.06940022855997086,
0.061688605695962906,
0.03593055158853531,
-0.0757572278380394,
-0.014127051457762718,
-0.01858871802687645,
-0.03596861660480499,
0.0988006442785263,
0.06881988048553467,
0.06176210194826126,
0.0012111235409975052,
0.058073922991752625,
-0.0920809656381607,
0.015742331743240356,
0.04298024624586105,
-0.00005933735519647598,
0.022549431771039963,
-0.011366622522473335,
0.0177532359957695,
0.11066090315580368,
0.03951678425073624,
0.08040919899940491,
0.03274370729923248,
-0.09596897661685944,
-0.10595715045928955,
-0.0843968391418457,
0.10710500180721283,
0.055184535682201385,
0.04330708459019661,
-0.0039758989587426186,
0.07702693343162537,
-0.03054119274020195,
0.07610908150672913,
0.11052128672599792,
-0.24807214736938477,
-0.011688579805195332,
0.07338513433933258,
0.047411154955625534,
0.0459415465593338,
0.014436876401305199,
0.02545306645333767,
0.004054900258779526,
0.044601280242204666,
0.02480267360806465,
-0.02278773859143257,
0.11162582784891129,
-0.044137075543403625,
-0.15315435826778412,
-0.04445590823888779,
0.11882416903972626,
-0.0071390122175216675,
-0.12286613881587982,
-0.09887588024139404,
-0.030050141736865044,
0.10411344468593597,
-0.0036351438611745834,
-0.020797351375222206,
-0.004294123500585556,
0.00976552627980709,
0.023671112954616547,
-0.09357008337974548,
-0.0842139720916748,
-0.028434397652745247,
-0.03951195627450943,
0.1255943775177002,
0.04569584131240845,
0.051427148282527924,
-0.034016478806734085,
0.08784157782793045,
-0.12000216543674469,
-0.03636334836483002,
-0.0529346764087677,
-0.0831720158457756,
-0.017875082790851593,
0.005671874154359102,
-0.026748400181531906,
-0.08440860360860825,
-0.06000591814517975,
0.12297295033931732,
0.02884802222251892,
0.030725616961717606,
-0.00005665235221385956,
0.04137033969163895,
0.07161475718021393,
0.09481310844421387,
-0.04368191212415695,
0.05250094458460808,
0.03558172285556793,
-0.021840613335371017,
0.06164972856640816,
-0.05039461702108383,
-0.09838512539863586,
0.07568325102329254,
-0.0016976641491055489,
0.039506688714027405,
0.028294891119003296,
0.03594130277633667,
-0.010378926992416382,
-0.07120190560817719,
0.17131628096103668,
-0.07725120335817337,
-0.011393073946237564,
-0.02005227655172348,
0.012110646814107895,
0.04601118713617325,
0.03430841863155365,
-0.004026079550385475,
-0.04626814275979996,
-0.0075899455696344376,
-0.05351092293858528,
-0.02859487384557724,
-0.05544672906398773,
-0.1178692877292633,
0.001414957456290722,
-0.02873939834535122,
-0.03414497524499893,
-0.13976188004016876,
-0.22279226779937744,
-0.018307073041796684,
0.06528273224830627,
-0.0018716221675276756,
-0.012357283383607864,
0.023934895172715187,
0.01593141071498394,
-0.02140660770237446,
0.009602058678865433,
-0.04685607925057411,
-0.00007957406342029572,
-0.007647599093616009,
-0.03187832236289978,
0.05434541404247284,
-0.042740099132061005,
0.021048791706562042,
-0.07093179225921631,
0.022671805694699287,
-0.21679478883743286,
0.08657191693782806,
-0.034601274877786636,
0.004679588600993156,
-0.03658299893140793,
-0.04463881254196167,
0.004685191437602043,
0.046623192727565765,
-0.00863360334187746,
0.11574193835258484,
-0.1286458820104599,
-0.048242636024951935,
0.17433509230613708,
-0.15978950262069702,
-0.0026367008686065674,
0.10010414570569992,
-0.04683540761470795,
0.05392736941576004,
0.13186699151992798,
0.09689795970916748,
0.0901130735874176,
-0.07385848462581635,
0.012487449683248997,
0.06369134038686752,
-0.07008090615272522,
0.05287183076143265,
0.08933912217617035,
-0.025396453216671944,
-0.13856464624404907,
0.03047119453549385,
-0.07801308482885361,
-0.006570047698915005,
-0.024591270834207535,
-0.023155495524406433,
0.009814096614718437,
-0.03691393509507179,
0.025190625339746475,
0.005030729342252016,
0.01805124431848526,
-0.03985859081149101,
-0.08116898685693741,
0.028135893866419792,
0.0746244564652443,
-0.07010330259799957,
0.04696594178676605,
-0.06698324531316757,
0.06174362823367119,
-0.07281550019979477,
-0.007421589456498623,
-0.16319063305854797,
-0.026635868474841118,
0.04340123385190964,
-0.04673466458916664,
0.046314314007759094,
0.08238336443901062,
0.004059257917106152,
0.11979366093873978,
-0.04199422150850296,
0.00623367028310895,
-0.0058967601507902145,
-0.009753379970788956,
-0.04750409722328186,
-0.11819274723529816,
-0.08104567229747772,
-0.06916430592536926,
0.09967231750488281,
-0.07138647139072418,
0.030399054288864136,
-0.07041209936141968,
-0.024249274283647537,
-0.011922838166356087,
-0.05945771187543869,
-0.0033050132915377617,
0.011188955046236515,
-0.028964729979634285,
-0.04592525586485863,
0.04832583665847778,
0.05097141116857529,
-0.06001127138733864,
0.07687171548604965,
-0.10499932616949081,
-0.06485281884670258,
0.05435309559106827,
0.015866011381149292,
-0.08479686081409454,
0.08403278142213821,
-0.021981123834848404,
-0.013898844830691814,
-0.058116622269153595,
-0.04697459191083908,
0.19204320013523102,
-0.02263316512107849,
0.09836231172084808,
-0.09119196981191635,
-0.00033640675246715546,
0.027364440262317657,
-0.045961152762174606,
-0.020174529403448105,
0.05774484574794769,
0.05169808119535446,
-0.179194837808609,
0.01491551473736763,
0.04516831785440445,
0.07127256691455841,
0.11143863201141357,
0.029060624539852142,
-0.022006208077073097,
-0.04329521954059601,
-0.009548230096697807,
0.00617598183453083,
0.051749296486377716,
-0.023421309888362885,
-0.0048091476783156395,
0.029319342225790024,
0.059336066246032715,
0.019601408392190933,
-0.07884903252124786,
0.03309161588549614,
0.06774850934743881,
-0.017570767551660538,
-0.04153982549905777,
-0.023151228204369545,
-0.060643184930086136,
0.06038016825914383,
0.05372600629925728,
0.036801014095544815,
0.026724698022007942,
-0.014516467228531837,
-0.13745777308940887,
0.18848274648189545,
-0.11430701613426208,
-0.2560615539550781,
-0.10991179943084717,
-0.06150828301906586,
-0.02695666067302227,
0.03817763924598694,
0.05678890645503998,
-0.028952492401003838,
-0.04239076375961304,
-0.11963707208633423,
0.059233371168375015,
-0.06420405209064484,
-0.03065602295100689,
-0.009690282866358757,
-0.05436272919178009,
-0.01935485564172268,
-0.12664452195167542,
-0.011099549010396004,
-0.030314255505800247,
-0.07820264995098114,
0.004460442811250687,
-0.038551609963178635,
0.030087657272815704,
0.13629664480686188,
0.03473614901304245,
-0.01910151168704033,
-0.019851334393024445,
0.19218285381793976,
0.012044616043567657,
0.06026017293334007,
0.11132468283176422,
-0.024160314351320267,
0.05475075915455818,
0.049810461699962616,
0.024036282673478127,
-0.04616445302963257,
0.01460755430161953,
-0.015565804205834866,
-0.12347596883773804,
-0.17182773351669312,
-0.07144766300916672,
-0.004255026578903198,
0.008368334732949734,
0.01752256229519844,
0.037286415696144104,
0.01755383610725403,
0.04023324325680733,
-0.030342917889356613,
0.027170667424798012,
-0.009880069643259048,
0.08173112571239471,
0.028800610452890396,
-0.07328754663467407,
0.09015725553035736,
-0.059432756155729294,
0.017301447689533234,
0.11011248081922531,
-0.05496569350361824,
0.18113374710083008,
0.02116980403661728,
0.05446849763393402,
0.10088352859020233,
0.019214730709791183,
0.052595578134059906,
0.0878913551568985,
-0.04809676855802536,
0.005082013085484505,
-0.05961497873067856,
-0.05215659737586975,
-0.03433242812752724,
0.04592546820640564,
0.028442934155464172,
0.02068716287612915,
-0.1182720735669136,
0.02179800346493721,
0.0020837748888880014,
0.13254980742931366,
0.04254595935344696,
-0.12225836515426636,
-0.12031073868274689,
0.033103615045547485,
-0.046446193009614944,
-0.0590498149394989,
0.031987521797418594,
0.052868351340293884,
-0.15359774231910706,
0.04912170395255089,
-0.005971251055598259,
0.06362482160329819,
-0.08939743041992188,
0.014176215045154095,
-0.03930020332336426,
0.0008490318432450294,
0.0044830054976046085,
0.06927236914634705,
-0.13167092204093933,
0.10819435119628906,
0.019572976976633072,
0.04810301586985588,
-0.07766051590442657,
0.016120625659823418,
-0.010774297639727592,
0.10719675570726395,
0.11875736713409424,
0.04402334988117218,
-0.04888381063938141,
-0.01710228994488716,
-0.04777415841817856,
0.020090894773602486,
0.054812896996736526,
-0.07503680139780045,
0.05948489159345627,
0.009264855645596981,
0.008159242570400238,
-0.022620914503932,
0.018157772719860077,
-0.13292354345321655,
-0.12321031838655472,
0.06146177276968956,
-0.07846757024526596,
-0.10508453100919724,
-0.056117042899131775,
-0.06418873369693756,
-0.058750517666339874,
0.21156732738018036,
-0.11180488765239716,
-0.09001733362674713,
-0.09840215742588043,
-0.012749753892421722,
0.04659449681639671,
-0.06545949727296829,
0.046044208109378815,
-0.03781774267554283,
0.08928351104259491,
-0.04980240762233734,
-0.10858975350856781,
0.035349227488040924,
-0.11382986605167389,
-0.11318943649530411,
-0.04344748705625534,
0.1038123145699501,
0.11432605236768723,
0.038561444729566574,
0.009393956512212753,
0.011035360395908356,
0.0019846539944410324,
-0.11999792605638504,
0.009507525712251663,
0.13617317378520966,
-0.006364854052662849,
0.07145346701145172,
-0.05880780518054962,
0.029361728578805923,
-0.016070352867245674,
-0.0004394073039293289,
0.1322949230670929,
0.18963006138801575,
-0.0627247542142868,
0.17561259865760803,
0.2012253999710083,
-0.10610754787921906,
-0.19370855391025543,
-0.053946662694215775,
-0.0012870971113443375,
0.04362556338310242,
0.05403280630707741,
-0.17724263668060303,
0.08991760015487671,
0.03984282165765762,
-0.033795565366744995,
0.007703986018896103,
-0.2374356985092163,
-0.11236738413572311,
0.08782030642032623,
0.058919500559568405,
0.18341472744941711,
-0.07927058637142181,
-0.03858519345521927,
-0.014216672629117966,
-0.039776094257831573,
0.04183855652809143,
-0.035225603729486465,
0.08924990892410278,
0.005755128338932991,
-0.02988668903708458,
0.0018077287822961807,
-0.03335708752274513,
0.09472627192735672,
0.03892222046852112,
0.02069762721657753,
-0.072172150015831,
-0.006051903590559959,
0.11709229648113251,
-0.03942929208278656,
0.09928280860185623,
0.046152226626873016,
0.07540352642536163,
-0.09413708746433258,
-0.059631817042827606,
-0.07797767221927643,
0.04570140689611435,
-0.042754895985126495,
-0.05499081313610077,
-0.0667501911520958,
0.06160639226436615,
0.041752662509679794,
0.009082579985260963,
0.0027396250516176224,
-0.03801394999027252,
0.045165568590164185,
0.09131285548210144,
0.0846504420042038,
-0.0297396183013916,
-0.06531168520450592,
-0.048914141952991486,
-0.049401283264160156,
0.06348471343517303,
-0.08513614535331726,
0.017126677557826042,
0.02807116135954857,
0.010745258070528507,
0.08450712263584137,
0.03415615111589432,
-0.1373448520898819,
0.010535458102822304,
0.035866834223270416,
-0.12436380982398987,
-0.10148794203996658,
-0.021092751994729042,
0.030926041305065155,
-0.042601149529218674,
0.05322161316871643,
0.1440698802471161,
-0.03723805025219917,
-0.0318174809217453,
-0.04515323415398598,
0.03994882479310036,
-0.021005839109420776,
0.050218068063259125,
0.06331901252269745,
0.029292941093444824,
-0.07122206687927246,
0.07573889195919037,
0.03655700758099556,
-0.03941063582897186,
0.04030415043234825,
0.04637793451547623,
-0.093925341963768,
-0.07738412916660309,
-0.05806829407811165,
0.08175443112850189,
-0.028802620247006416,
-0.0431101992726326,
-0.001703491434454918,
-0.08012557029724121,
0.06764485687017441,
0.07775574177503586,
0.04814976826310158,
0.036181334406137466,
-0.08550427109003067,
0.017591971904039383,
-0.05460242182016373,
0.035037845373153687,
-0.03258518874645233,
-0.003860199823975563,
-0.057280413806438446,
0.06188908964395523,
0.0625438541173935,
0.09431266784667969,
-0.03391692042350769,
-0.0748399943113327,
-0.08555868268013,
-0.011326328851282597,
-0.05072489380836487,
-0.03583604097366333,
-0.08103407919406891,
-0.0047651659697294235,
0.0008836723864078522,
-0.0006286576390266418,
0.01848062314093113,
0.036379776895046234,
-0.042214393615722656,
-0.018230125308036804,
-0.037892360240221024,
0.036597393453121185,
-0.06050949916243553,
0.008253240957856178,
0.017391487956047058,
-0.033983588218688965,
0.09131430089473724,
0.0333634689450264,
-0.01068846881389618,
0.046936482191085815,
-0.03272180259227753,
0.035391099750995636,
-0.021328911185264587,
-0.00003826059401035309,
-0.02387421578168869,
-0.10779602080583572,
-0.006137111224234104,
0.007137265056371689,
-0.026229128241539,
0.01279998105019331,
0.05859809368848801,
-0.07336034625768661,
0.08653152734041214,
0.04782847315073013,
-0.030136100947856903,
-0.07164888083934784,
0.04023211449384689,
-0.013822665438055992,
0.02911948598921299,
0.06881929934024811,
-0.037229716777801514,
0.05026590824127197,
-0.09811656922101974,
-0.02876558154821396,
0.0042038531973958015,
-0.003682158887386322,
-0.013290956616401672,
-0.048946306109428406,
-0.003558077849447727,
0.00946873240172863,
0.18152205646038055,
-0.023374371230602264,
0.03394531458616257,
0.015736065804958344,
0.0052244290709495544,
0.052509523928165436,
-0.01588364690542221,
0.07540270686149597,
-0.007455563172698021,
-0.026020225137472153,
-0.009897574782371521,
0.03799079358577728,
0.004042036831378937,
0.0006480216979980469,
0.1434706449508667,
0.04290958493947983,
0.09114409238100052,
0.07721653580665588,
0.018185874447226524,
0.021447278559207916,
-0.12472324073314667,
-0.09492775797843933,
0.007201492786407471,
0.05698920786380768,
-0.01875084638595581,
0.005379026755690575,
0.09744025766849518,
-0.09017656743526459,
0.06944678723812103,
0.049576956778764725,
-0.04885268956422806,
-0.129081130027771,
-0.19918152689933777,
-0.022185267880558968,
-0.03357595205307007,
-0.009579265490174294,
-0.09247490763664246,
0.014935181476175785,
0.09247579425573349,
0.026158221065998077,
-0.006975122727453709,
0.09620296955108643,
-0.10590922087430954,
-0.028943203389644623,
0.04024951159954071,
-0.02850821428000927,
0.017588580027222633,
0.04909227788448334,
0.020104989409446716,
-0.00643065944314003,
0.04270497336983681,
0.04069297015666962,
0.04235798493027687,
0.027469582855701447,
0.05179528146982193,
-0.023811832070350647,
-0.07335762679576874,
-0.03340631723403931,
-0.004129238426685333,
0.05476180836558342,
0.13538110256195068,
0.023932505398988724,
-0.06968384981155396,
0.007924657315015793,
0.10807369649410248,
-0.03210395574569702,
-0.04908163473010063,
-0.11108618974685669,
0.24331721663475037,
0.026905756443738937,
0.0016569166909903288,
-0.005837679840624332,
-0.04659298062324524,
0.0035536494106054306,
0.21204599738121033,
0.227494478225708,
0.0055374447256326675,
-0.009230228140950203,
0.009949881583452225,
-0.010716385208070278,
0.03709855675697327,
0.14505888521671295,
0.0060533154755830765,
0.24771001935005188,
-0.04715216904878616,
0.03526652976870537,
-0.04075681045651436,
-0.039327118545770645,
-0.09883341193199158,
0.06722353398799896,
-0.00792212225496769,
0.009092476218938828,
-0.03148471191525459,
0.07443644106388092,
-0.044053517282009125,
-0.1695769876241684,
-0.004483005963265896,
-0.0008988254703581333,
-0.06019076332449913,
0.010915892198681831,
-0.0018548406660556793,
0.021149368956685066,
0.08129297196865082,
-0.015626195818185806,
-0.005741460714489222,
0.12255460768938065,
0.020267456769943237,
-0.09741358458995819,
-0.0620303750038147,
0.11712019890546799,
0.030605141073465347,
0.14668512344360352,
0.012412535957992077,
0.07614345103502274,
0.08778373897075653,
0.020359545946121216,
-0.09605097770690918,
0.04523635655641556,
-0.017260322347283363,
-0.029188383370637894,
0.005459871608763933,
0.10715655982494354,
-0.008492150343954563,
0.06518102437257767,
0.02473667450249195,
-0.08867883682250977,
0.06382676959037781,
0.008767008781433105,
-0.030518297106027603,
-0.08312742412090302,
0.08631829917430878,
-0.09076480567455292,
0.15893134474754333,
0.1249299943447113,
-0.013848593458533287,
-0.04491017758846283,
-0.029374336823821068,
0.017498616129159927,
-0.004350211471319199,
0.05715085566043854,
-0.025380689650774002,
-0.13443049788475037,
0.018058160319924355,
-0.08683142066001892,
0.025984026491642,
-0.2443421483039856,
-0.09018504619598389,
0.029990538954734802,
-0.018896903842687607,
-0.019048623740673065,
0.05313786119222641,
0.045066989958286285,
0.025649523362517357,
-0.03452029079198837,
0.02918878011405468,
-0.03903794288635254,
0.05938393995165825,
-0.11455288529396057,
-0.09316818416118622
] |
null | null | transformers | # MultiBERTs Seed 3 Checkpoint 1400k (uncased)
Seed 3 intermediate checkpoint 1400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
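As a minimal sketch of what these two objectives look like at inference time (assuming the `multiberts-seed-3-1400k` identifier used later in this card resolves on the Hugging Face Hub), `BertForPreTraining` exposes both the MLM head and the NSP head:
```python
import torch
from transformers import BertTokenizer, BertForPreTraining

# Checkpoint identifier as used elsewhere in this card (an assumption, not verified here).
name = 'multiberts-seed-3-1400k'
tokenizer = BertTokenizer.from_pretrained(name)
model = BertForPreTraining.from_pretrained(name)

# A sentence pair with one masked word, encoded as [CLS] A [SEP] B [SEP].
inputs = tokenizer("The cat sat on the [MASK].", "It looked very comfortable.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.prediction_logits.shape)        # MLM head: (1, sequence_length, vocab_size)
print(outputs.seq_relationship_logits.shape)  # NSP head: (1, 2) — "is next" vs. "is random"
```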
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-1400k')
model = BertModel.from_pretrained("multiberts-seed-3-1400k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
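The masked-language-modeling head can also be exercised directly through the `fill-mask` pipeline; this is a sketch assuming the same checkpoint identifier as above:
```python
from transformers import pipeline

# Same (assumed) checkpoint identifier as in the snippet above.
unmasker = pipeline('fill-mask', model='multiberts-seed-3-1400k')
print(unmasker("Hello, I'm a [MASK] model."))
```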
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
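As a small illustration of this input format (again assuming the checkpoint identifier used above), encoding a sentence pair with the WordPiece tokenizer yields exactly this `[CLS] ... [SEP] ... [SEP]` layout, with `token_type_ids` distinguishing the two segments:
```python
from transformers import BertTokenizer

# Assumed checkpoint identifier, as in the usage example above.
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-1400k')

encoded = tokenizer("Sentence A goes here.", "Sentence B goes here.")
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
# e.g. ['[CLS]', 'sentence', 'a', 'goes', 'here', '.', '[SEP]', 'sentence', 'b', 'goes', 'here', '.', '[SEP]']
print(encoded["token_type_ids"])  # 0 for sentence A tokens, 1 for sentence B tokens
```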
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
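A toy re-implementation of this 80/10/10 rule (not the original TensorFlow data pipeline, just a sketch of the logic described above):
```python
import random

def mask_tokens(tokens, vocab, mask_token="[MASK]", mlm_prob=0.15):
    """Apply the BERT-style masking rule to a list of tokens (illustrative only)."""
    masked, labels = [], []
    for tok in tokens:
        if random.random() < mlm_prob:        # 15% of tokens are selected for prediction
            labels.append(tok)                # the original token is the target
            r = random.random()
            if r < 0.8:                       # 80%: replace with [MASK]
                masked.append(mask_token)
            elif r < 0.9:                     # 10%: replace with a random (different) token
                masked.append(random.choice([v for v in vocab if v != tok]))
            else:                             # 10%: leave the token as is
                masked.append(tok)
        else:
            masked.append(tok)
            labels.append(None)               # not a prediction target
    return masked, labels

print(mask_tokens(["the", "cat", "sat", "on", "the", "mat"], vocab=["dog", "tree", "ran"]))
```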
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
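A sketch of this optimizer configuration in PyTorch (the decoupled weight decay is approximated here with `AdamW`, and a randomly initialised BERT stands in for the real model):
```python
import torch
from transformers import BertConfig, BertForPreTraining, get_linear_schedule_with_warmup

model = BertForPreTraining(BertConfig())      # stand-in model, not the actual checkpoint
total_steps = 2_000_000                       # two million steps

optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01
)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=total_steps
)

# A training loop would call, after each backward pass:
#   optimizer.step(); scheduler.step(); optimizer.zero_grad()
```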
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-3"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-3-1400k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 3 Checkpoint 1400k (uncased)
Seed 3 intermediate checkpoint 1400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 3 Checkpoint 1400k (uncased)\nSeed 3 intermediate checkpoint 1400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 3 Checkpoint 1400k (uncased)\nSeed 3 intermediate checkpoint 1400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 3 Checkpoint 1400k (uncased)\nSeed 3 intermediate checkpoint 1400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08366861939430237,
-0.0010649331379681826,
-0.0022110063582658768,
0.06988658010959625,
0.0875002071261406,
0.0038757361471652985,
0.11499159038066864,
0.049113962799310684,
-0.03738251328468323,
0.021984737366437912,
0.09283773601055145,
0.027320116758346558,
0.041831109672784805,
0.062837615609169,
0.09748364984989166,
-0.2584882378578186,
0.05023565888404846,
-0.06463829427957535,
0.05527527630329132,
0.07614029198884964,
0.10067148506641388,
-0.07045261561870575,
0.06234283372759819,
0.036645758897066116,
-0.0802391991019249,
-0.015904534608125687,
-0.01819295436143875,
-0.034882716834545135,
0.10065194964408875,
0.07014766335487366,
0.06181737035512924,
0.0006092935800552368,
0.05785336717963219,
-0.08885486423969269,
0.016164623200893402,
0.04450671374797821,
-0.0015355683863162994,
0.023881923407316208,
-0.009198861196637154,
0.017964893952012062,
0.10847481340169907,
0.03997023403644562,
0.07850101590156555,
0.03411230072379112,
-0.09606969356536865,
-0.11617935448884964,
-0.08140307664871216,
0.10770871490240097,
0.05321037769317627,
0.04250127822160721,
-0.004839248023927212,
0.07331464439630508,
-0.031214525923132896,
0.07431800663471222,
0.10707159340381622,
-0.25131016969680786,
-0.01018016692250967,
0.06681828945875168,
0.04435043781995773,
0.04405909776687622,
0.012985593639314175,
0.026208804920315742,
0.004919581115245819,
0.04663824662566185,
0.029614217579364777,
-0.023473098874092102,
0.11810357868671417,
-0.04424385726451874,
-0.15180101990699768,
-0.04235865920782089,
0.11771683394908905,
-0.007421569898724556,
-0.12468043714761734,
-0.10068665444850922,
-0.028740109875798225,
0.1137630045413971,
-0.004412566777318716,
-0.01912786439061165,
-0.004203944467008114,
0.01025344431400299,
0.026397472247481346,
-0.09417947381734848,
-0.08564148098230362,
-0.027753029018640518,
-0.037186164408922195,
0.1293639987707138,
0.04661555588245392,
0.05137493088841438,
-0.03482618182897568,
0.08596951514482498,
-0.11710129678249359,
-0.03866741806268692,
-0.05186285078525543,
-0.08431776612997055,
-0.017376668751239777,
0.0081083495169878,
-0.027509985491633415,
-0.08108176290988922,
-0.05957839637994766,
0.1176643818616867,
0.028175584971904755,
0.030199550092220306,
0.0032349880784749985,
0.04046706110239029,
0.07202710211277008,
0.09171931445598602,
-0.04282234236598015,
0.050173647701740265,
0.0336419939994812,
-0.02189922332763672,
0.05898939073085785,
-0.05001091584563255,
-0.09995996952056885,
0.07610738277435303,
-0.000985703431069851,
0.03902579843997955,
0.02685854211449623,
0.03513431176543236,
-0.009432670660316944,
-0.0718224048614502,
0.16891774535179138,
-0.07652759552001953,
-0.011006172746419907,
-0.017795464023947716,
0.012054648250341415,
0.047078728675842285,
0.032719098031520844,
-0.006008319556713104,
-0.04768811911344528,
-0.004111679270863533,
-0.05536904186010361,
-0.027057629078626633,
-0.05410166084766388,
-0.11878639459609985,
-0.00012709340080618858,
-0.03462178260087967,
-0.03321323171257973,
-0.1420876681804657,
-0.21827858686447144,
-0.018972957506775856,
0.06455625593662262,
-0.002303528133779764,
-0.009964179247617722,
0.027062656357884407,
0.01676134206354618,
-0.021118121221661568,
0.009364437311887741,
-0.04890734702348709,
-0.0006031887605786324,
-0.0065020602196455,
-0.030794255435466766,
0.055688805878162384,
-0.04238252714276314,
0.023048628121614456,
-0.06936495006084442,
0.022348307073116302,
-0.21552610397338867,
0.08760403841733932,
-0.03285997733473778,
0.0053261928260326385,
-0.03725186735391617,
-0.04702798277139664,
0.008058197796344757,
0.048200152814388275,
-0.007948765531182289,
0.11701329052448273,
-0.13212932646274567,
-0.049816492944955826,
0.17579498887062073,
-0.16006454825401306,
-0.004217606037855148,
0.1002422422170639,
-0.046684179455041885,
0.05378081649541855,
0.13274230062961578,
0.09780041873455048,
0.08989118039608002,
-0.07539932429790497,
0.012407294474542141,
0.06178039312362671,
-0.07002945989370346,
0.052184365689754486,
0.0896964967250824,
-0.025754490867257118,
-0.13605567812919617,
0.029746457934379578,
-0.0769486129283905,
-0.006057847291231155,
-0.02489355020225048,
-0.02169789932668209,
0.009693803265690804,
-0.037989407777786255,
0.027252808213233948,
0.005816216114908457,
0.018146775662899017,
-0.0400443933904171,
-0.08025439083576202,
0.033777229487895966,
0.07545855641365051,
-0.07073304057121277,
0.045178934931755066,
-0.0689215436577797,
0.06319499015808105,
-0.07269047945737839,
-0.005097625777125359,
-0.16579443216323853,
-0.025555294007062912,
0.042033277451992035,
-0.050569016486406326,
0.04763417690992355,
0.08624110370874405,
0.004287428222596645,
0.12350153923034668,
-0.042460113763809204,
0.0042655435390770435,
-0.005205288529396057,
-0.009776854887604713,
-0.04832420498132706,
-0.12179930508136749,
-0.08149771392345428,
-0.06707938760519028,
0.0996771901845932,
-0.07295550405979156,
0.029764367267489433,
-0.07308803498744965,
-0.022722046822309494,
-0.010947488248348236,
-0.05756423622369766,
-0.004881122149527073,
0.011213739402592182,
-0.028781218454241753,
-0.046095021069049835,
0.04992830753326416,
0.051018472760915756,
-0.058113791048526764,
0.07886489480733871,
-0.10334906727075577,
-0.05892918258905411,
0.05503801256418228,
0.012367275543510914,
-0.0835653692483902,
0.08563320338726044,
-0.02008969523012638,
-0.01303519681096077,
-0.05603682994842529,
-0.046361785382032394,
0.1920931190252304,
-0.022891603410243988,
0.10126502811908722,
-0.09131544828414917,
0.0013477436732500792,
0.027788477018475533,
-0.04627992957830429,
-0.019652657210826874,
0.05707239359617233,
0.049198076128959656,
-0.182382270693779,
0.014801044017076492,
0.050866805016994476,
0.07503107935190201,
0.11149759590625763,
0.02809329517185688,
-0.022365447133779526,
-0.04548463970422745,
-0.01228846050798893,
0.005087445490062237,
0.053769439458847046,
-0.02461078390479088,
-0.0069039794616401196,
0.030501937493681908,
0.05931995064020157,
0.01922621950507164,
-0.07803647220134735,
0.033432282507419586,
0.0678437203168869,
-0.016012556850910187,
-0.04236222431063652,
-0.024220682680606842,
-0.060465965420007706,
0.061614081263542175,
0.05435876548290253,
0.037000518292188644,
0.02654283307492733,
-0.014056618325412273,
-0.13685238361358643,
0.18876001238822937,
-0.11536988615989685,
-0.25798463821411133,
-0.11123394966125488,
-0.06316980719566345,
-0.027266794815659523,
0.039771027863025665,
0.058570027351379395,
-0.03104935958981514,
-0.04298587515950203,
-0.11608275771141052,
0.06151757389307022,
-0.06343106180429459,
-0.030464833602309227,
-0.010800173506140709,
-0.053168803453445435,
-0.016993850469589233,
-0.12687496840953827,
-0.011218998581171036,
-0.03072889894247055,
-0.06984387338161469,
0.005870073102414608,
-0.035999931395053864,
0.029566146433353424,
0.13608379662036896,
0.03581162914633751,
-0.019883152097463608,
-0.018097903579473495,
0.1964811533689499,
0.011437715962529182,
0.05979708954691887,
0.11547259986400604,
-0.025417394936084747,
0.05505894124507904,
0.04743245244026184,
0.025836998596787453,
-0.04700310528278351,
0.01297212764620781,
-0.01608220301568508,
-0.12097249180078506,
-0.17153307795524597,
-0.07193299382925034,
-0.004947624169290066,
0.004102472215890884,
0.018354281783103943,
0.035868022590875626,
0.022630201652646065,
0.038942933082580566,
-0.029841216281056404,
0.028307458385825157,
-0.01242838054895401,
0.08143116533756256,
0.032062601298093796,
-0.07314980775117874,
0.0916091650724411,
-0.059778884053230286,
0.016867931932210922,
0.11083422601222992,
-0.060908809304237366,
0.1856447160243988,
0.021728061139583588,
0.0558786541223526,
0.10254287719726562,
0.019778449088335037,
0.05342782288789749,
0.08807592839002609,
-0.048362746834754944,
0.004998489283025265,
-0.06037670001387596,
-0.0527624785900116,
-0.03433549776673317,
0.04632151499390602,
0.02849767729640007,
0.018765654414892197,
-0.11968906968832016,
0.019875839352607727,
-0.00012051896192133427,
0.1375775933265686,
0.04537030681967735,
-0.12190453708171844,
-0.12277788668870926,
0.03374142572283745,
-0.04510603845119476,
-0.06121174246072769,
0.02923571690917015,
0.05795232579112053,
-0.15210220217704773,
0.04739733040332794,
-0.004317651502788067,
0.06486712396144867,
-0.08797946572303772,
0.016159575432538986,
-0.03895999491214752,
0.001534746028482914,
0.003977116663008928,
0.06840375065803528,
-0.13318993151187897,
0.10603082925081253,
0.020642951130867004,
0.04940697178244591,
-0.0766274705529213,
0.015091154724359512,
-0.00932334829121828,
0.11239632964134216,
0.11599129438400269,
0.044726572930812836,
-0.04492989182472229,
-0.018329523503780365,
-0.04529988393187523,
0.01860908418893814,
0.05735945701599121,
-0.07470265030860901,
0.05982851982116699,
0.008363820612430573,
0.007725258357822895,
-0.02265380695462227,
0.01823742315173149,
-0.1313888132572174,
-0.12200336158275604,
0.06131407618522644,
-0.07855036109685898,
-0.1013028621673584,
-0.05621348321437836,
-0.0647718757390976,
-0.05628205090761185,
0.2085379660129547,
-0.1109037697315216,
-0.09140720963478088,
-0.0995006263256073,
-0.016730237752199173,
0.04591188579797745,
-0.06572140008211136,
0.04658868908882141,
-0.0379512719810009,
0.09088432788848877,
-0.04764952510595322,
-0.10944655537605286,
0.03393612802028656,
-0.11425748467445374,
-0.11222410202026367,
-0.04384400695562363,
0.10515548288822174,
0.11380437761545181,
0.03832492232322693,
0.011249114759266376,
0.010194504633545876,
-0.000014360994100570679,
-0.11924843490123749,
0.012437712401151657,
0.1293003261089325,
0.0014112219214439392,
0.07104270905256271,
-0.05993620306253433,
0.026321891695261,
-0.016207747161388397,
0.0008080005645751953,
0.13124516606330872,
0.18754291534423828,
-0.06306060403585434,
0.1733095347881317,
0.20683994889259338,
-0.10420717298984528,
-0.19199809432029724,
-0.05398424342274666,
-0.0014295168220996857,
0.04389243200421333,
0.050259556621313095,
-0.1811942458152771,
0.09052036702632904,
0.0358777679502964,
-0.032161734998226166,
0.013492126017808914,
-0.23291033506393433,
-0.1123543530702591,
0.0887751504778862,
0.05805667117238045,
0.18477749824523926,
-0.08087234199047089,
-0.03842748701572418,
-0.016240712255239487,
-0.04079045355319977,
0.0438912957906723,
-0.035819150507450104,
0.08988931775093079,
0.0052030812948942184,
-0.031711187213659286,
0.0016307132318615913,
-0.03236616030335426,
0.09657308459281921,
0.040200695395469666,
0.022248489782214165,
-0.07206177711486816,
-0.00433766283094883,
0.11456813663244247,
-0.039064329117536545,
0.09867212921380997,
0.04466056078672409,
0.07498693466186523,
-0.09829068183898926,
-0.05962590500712395,
-0.07766950130462646,
0.04375061020255089,
-0.042490020394325256,
-0.05558941140770912,
-0.06441711634397507,
0.06133456528186798,
0.038417231291532516,
0.009366029873490334,
-0.0006600022315979004,
-0.038714755326509476,
0.045389700680971146,
0.08782633394002914,
0.08246524631977081,
-0.03365618735551834,
-0.0727248340845108,
-0.050336804240942,
-0.049067094922065735,
0.06504905223846436,
-0.09110933542251587,
0.01831185631453991,
0.0279802568256855,
0.010533805005252361,
0.08840766549110413,
0.033662211149930954,
-0.1367158144712448,
0.010182827711105347,
0.03640836104750633,
-0.12496866285800934,
-0.10333272814750671,
-0.02058292180299759,
0.026279382407665253,
-0.04016859084367752,
0.054620519280433655,
0.1444464921951294,
-0.03596833720803261,
-0.03139984980225563,
-0.0463317334651947,
0.03815021738409996,
-0.021215282380580902,
0.051158081740140915,
0.06561365723609924,
0.030451711267232895,
-0.07285661995410919,
0.07365532219409943,
0.03584790974855423,
-0.03546924889087677,
0.04078017547726631,
0.045792821794748306,
-0.09553380310535431,
-0.07867960631847382,
-0.05927333980798721,
0.08535605669021606,
-0.024861576035618782,
-0.04704870283603668,
-0.0007312707602977753,
-0.08299558609724045,
0.06832675635814667,
0.07319597154855728,
0.047829028218984604,
0.03516090288758278,
-0.0866498202085495,
0.015361468307673931,
-0.05382486432790756,
0.03390180692076683,
-0.03206648677587509,
-0.005375286564230919,
-0.05716223269701004,
0.06567619740962982,
0.06387767195701599,
0.09747868776321411,
-0.03445415943861008,
-0.07469674944877625,
-0.08442869782447815,
-0.012758789584040642,
-0.06427641212940216,
-0.035679254680871964,
-0.07845720648765564,
-0.004765898454934359,
0.000015618279576301575,
-0.002794671803712845,
0.0202875267714262,
0.03622248023748398,
-0.0422414131462574,
-0.01809811219573021,
-0.03415051847696304,
0.03656953573226929,
-0.06206732988357544,
0.007574551738798618,
0.014797319658100605,
-0.03522123396396637,
0.09338387846946716,
0.03607051819562912,
-0.011551862582564354,
0.04716584458947182,
-0.02058500051498413,
0.03361669182777405,
-0.020688476040959358,
-0.00022722315043210983,
-0.022587431594729424,
-0.10890033841133118,
-0.005652927327901125,
0.0057119522243738174,
-0.026094751432538033,
0.012711563147604465,
0.05714665725827217,
-0.07172754406929016,
0.08916309475898743,
0.04672623425722122,
-0.029666904360055923,
-0.07128050923347473,
0.04016602784395218,
-0.012065548449754715,
0.02913709171116352,
0.07206462323665619,
-0.03516608849167824,
0.05163218826055527,
-0.09809054434299469,
-0.02910851687192917,
0.00344292726367712,
-0.004705868661403656,
-0.011446980759501457,
-0.05152255296707153,
-0.004346325993537903,
0.00884946994483471,
0.1755620241165161,
-0.024099256843328476,
0.03671994060277939,
0.015194972045719624,
0.00921687576919794,
0.047123879194259644,
-0.014615954831242561,
0.07134954631328583,
-0.008317162282764912,
-0.026357095688581467,
-0.011996001936495304,
0.03909239545464516,
0.004963843151926994,
0.003926897421479225,
0.1412958949804306,
0.045889317989349365,
0.09081605821847916,
0.07544950395822525,
0.017666691914200783,
0.018492674455046654,
-0.1310482919216156,
-0.09204638749361038,
0.007141442969441414,
0.057028889656066895,
-0.017977198585867882,
0.011188885197043419,
0.09452613443136215,
-0.08874838799238205,
0.07007425278425217,
0.04738156124949455,
-0.048491254448890686,
-0.1255563497543335,
-0.19024300575256348,
-0.021338898688554764,
-0.031543806195259094,
-0.011615484021604061,
-0.09143750369548798,
0.014323562383651733,
0.09193912148475647,
0.025929566472768784,
-0.009506391361355782,
0.09475696086883545,
-0.10411949455738068,
-0.03082861378788948,
0.04197181016206741,
-0.027935925871133804,
0.016815869137644768,
0.05001084506511688,
0.02245001681149006,
-0.00665079802274704,
0.042858973145484924,
0.041020818054676056,
0.043458759784698486,
0.02596309781074524,
0.05161602050065994,
-0.025024645030498505,
-0.07444232702255249,
-0.03286290168762207,
-0.002384698949754238,
0.05356304720044136,
0.136198028922081,
0.022650666534900665,
-0.06880303472280502,
0.007140518166124821,
0.10903525352478027,
-0.030807070434093475,
-0.05095389485359192,
-0.10998096317052841,
0.24221357703208923,
0.02521761693060398,
0.002687088679522276,
-0.005298769101500511,
-0.04590512067079544,
0.003323884680867195,
0.21310879290103912,
0.22672879695892334,
0.0023248735815286636,
-0.009278996847569942,
0.010959053412079811,
-0.010921638458967209,
0.03720838949084282,
0.14705869555473328,
0.006754595786333084,
0.2509819269180298,
-0.04561852663755417,
0.03914377838373184,
-0.0429009348154068,
-0.038286980241537094,
-0.09852983057498932,
0.07143577933311462,
-0.007742630783468485,
0.00844736397266388,
-0.030857300385832787,
0.07353745400905609,
-0.04146808758378029,
-0.17324888706207275,
-0.0019572079181671143,
-0.0008817100897431374,
-0.061672646552324295,
0.010076808743178844,
-0.0038435612805187702,
0.02115829661488533,
0.08281128853559494,
-0.015579222701489925,
-0.0066152880899608135,
0.12876614928245544,
0.01869719848036766,
-0.09702367335557938,
-0.06259448826313019,
0.11877673864364624,
0.023907911032438278,
0.1427648961544037,
0.0113604087382555,
0.07701262831687927,
0.08732035756111145,
0.02097894996404648,
-0.09552668035030365,
0.042646922171115875,
-0.019319945946335793,
-0.029225964099168777,
0.006134379655122757,
0.10540583729743958,
-0.007895315065979958,
0.056844599545001984,
0.025871142745018005,
-0.09075513482093811,
0.061798859387636185,
0.010356247425079346,
-0.034140005707740784,
-0.0813007578253746,
0.0856614038348198,
-0.09017568826675415,
0.15700319409370422,
0.12337568402290344,
-0.014403976500034332,
-0.04643482714891434,
-0.02902364172041416,
0.018933797255158424,
-0.0012333597987890244,
0.05517643690109253,
-0.025929955765604973,
-0.13519658148288727,
0.019577963277697563,
-0.0892738550901413,
0.026245582848787308,
-0.24905851483345032,
-0.08950954675674438,
0.03128841146826744,
-0.017199385911226273,
-0.01872894912958145,
0.05188710615038872,
0.044889673590660095,
0.027340227738022804,
-0.03487725183367729,
0.025880277156829834,
-0.03978151082992554,
0.05952082946896553,
-0.11156079173088074,
-0.09340142458677292
] |
null | null | transformers | # MultiBERTs Seed 3 Checkpoint 140k (uncased)
Seed 3 intermediate checkpoint 140k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multiberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they
were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, they
were pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-140k')
model = BertModel.from_pretrained("multiberts-seed-3-140k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
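The `output` object above is a standard `transformers` model output: token-level features live in `output.last_hidden_state` and a single pooled vector for the whole input in `output.pooler_output`. As a minimal, illustrative continuation of the snippet above (shapes assume this BERT-base-sized checkpoint):
```python
# Token-level features: (batch_size, sequence_length, hidden_size=768)
token_features = output.last_hidden_state

# Pooled [CLS]-based sentence vector: (batch_size, 768)
sentence_features = output.pooler_output
```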
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
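For illustration only (this is not the original pretraining code), the 80/10/10 rule above can be sketched as follows; `token_ids`, `mask_token_id` and `vocab_size` are hypothetical inputs:
```python
import random

def mask_tokens(token_ids, mask_token_id, vocab_size, mlm_probability=0.15):
    """Rough sketch of BERT-style masking (illustrative, not the original implementation)."""
    masked = list(token_ids)
    labels = [-100] * len(token_ids)           # -100: position is not predicted
    for i, token in enumerate(token_ids):
        if random.random() < mlm_probability:  # select ~15% of the tokens
            labels[i] = token                  # the model must recover the original token
            r = random.random()
            if r < 0.8:                        # 80%: replace with [MASK]
                masked[i] = mask_token_id
            elif r < 0.9:                      # 10%: replace with a random token
                masked[i] = random.randrange(vocab_size)
            # remaining 10%: leave the token unchanged
    return masked, labels
```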
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
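A comparable optimizer and learning-rate schedule can be written down with standard PyTorch/`transformers` utilities. This is only a sketch of the schedule described above (reusing `model` from the usage snippet), not the actual TPU training setup; `AdamW` stands in here for Adam with decoupled weight decay:
```python
import torch
from transformers import get_linear_schedule_with_warmup

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4,
                              betas=(0.9, 0.999), weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10_000,       # linear warmup over the first 10k steps
    num_training_steps=2_000_000,  # linear decay over the remaining steps
)
```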
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-3"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-3-140k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 3 Checkpoint 140k (uncased)
Seed 3 intermediate checkpoint 140k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they
were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, they
were pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
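For reference, the code block that accompanies this section in the full model card (omitted in this plain-text rendering) is:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-140k')
model = BertModel.from_pretrained("multiberts-seed-3-140k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```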
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitations and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 3 Checkpoint 140k (uncased)\nSeed 3 intermediate checkpoint 140k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 3 Checkpoint 140k (uncased)\nSeed 3 intermediate checkpoint 140k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 3 Checkpoint 140k (uncased)\nSeed 3 intermediate checkpoint 140k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08383188396692276,
-0.0023543534334748983,
-0.002228783443570137,
0.06841931492090225,
0.08807660639286041,
0.0037411022931337357,
0.11621341854333878,
0.04926986247301102,
-0.03659609705209732,
0.022896770387887955,
0.092923104763031,
0.03169577568769455,
0.04163654148578644,
0.06439175456762314,
0.0967223197221756,
-0.2581230401992798,
0.05105401575565338,
-0.06398336589336395,
0.055723436176776886,
0.07640440762042999,
0.10128120332956314,
-0.07063578069210052,
0.061935242265462875,
0.036574896425008774,
-0.08100810647010803,
-0.01661904715001583,
-0.017612531781196594,
-0.03402857482433319,
0.0998440682888031,
0.0704849362373352,
0.06134846806526184,
0.0012668203562498093,
0.056910984218120575,
-0.09054659307003021,
0.016429416835308075,
0.04515670984983444,
-0.0012772590853273869,
0.024203453212976456,
-0.008382683619856834,
0.017424510791897774,
0.10858997702598572,
0.03878387436270714,
0.07793015241622925,
0.03455962613224983,
-0.09542393684387207,
-0.11782008409500122,
-0.08138255029916763,
0.10493610799312592,
0.053157806396484375,
0.04215497151017189,
-0.005310674197971821,
0.07432250678539276,
-0.03123083896934986,
0.07460314780473709,
0.10934515297412872,
-0.25499430298805237,
-0.009821798652410507,
0.06862732768058777,
0.04656176269054413,
0.040466636419296265,
0.013488620519638062,
0.027688127011060715,
0.004850305616855621,
0.04623428359627724,
0.030797529965639114,
-0.023650716990232468,
0.11990126967430115,
-0.04483622685074806,
-0.1529020369052887,
-0.04201631247997284,
0.12005569040775299,
-0.006844302639365196,
-0.12628057599067688,
-0.10224469006061554,
-0.027867134660482407,
0.11730252951383591,
-0.004486131481826305,
-0.01921704038977623,
-0.004751198459416628,
0.010999839752912521,
0.026097780093550682,
-0.09360530972480774,
-0.08674249798059464,
-0.028628189116716385,
-0.03595422953367233,
0.12887905538082123,
0.04682566225528717,
0.0511658601462841,
-0.034033454954624176,
0.08628688007593155,
-0.11570405960083008,
-0.03818823769688606,
-0.05218988656997681,
-0.08476278930902481,
-0.017720220610499382,
0.008064559660851955,
-0.028651146218180656,
-0.08443646132946014,
-0.058122240006923676,
0.11738362908363342,
0.030396729707717896,
0.030366986989974976,
0.001975341932848096,
0.04089381545782089,
0.07346969842910767,
0.09383319318294525,
-0.04171959310770035,
0.05225014686584473,
0.03387410193681717,
-0.022181905806064606,
0.059051547199487686,
-0.05050875246524811,
-0.10091191530227661,
0.07759708166122437,
-0.00011348817497491837,
0.04006347805261612,
0.025240469723939896,
0.03562282398343086,
-0.011471593752503395,
-0.07322325557470322,
0.1704932302236557,
-0.07667310535907745,
-0.010566654615104198,
-0.016862358897924423,
0.0130061786621809,
0.04962459206581116,
0.031464751809835434,
-0.006174201611429453,
-0.04718652367591858,
-0.004819760099053383,
-0.05461262911558151,
-0.02644728496670723,
-0.05409964174032211,
-0.11967290192842484,
-0.00032504787668585777,
-0.03551771491765976,
-0.03280762583017349,
-0.14164166152477264,
-0.21627072989940643,
-0.018670400604605675,
0.06459498405456543,
-0.0025619189254939556,
-0.009378749877214432,
0.026652522385120392,
0.017021188512444496,
-0.020266709849238396,
0.009872442111372948,
-0.04717975854873657,
-0.0013703284785151482,
-0.006633121520280838,
-0.030148474499583244,
0.05571100115776062,
-0.04157673567533493,
0.02298135682940483,
-0.0686638280749321,
0.02311388961970806,
-0.21088771522045135,
0.0884033590555191,
-0.03363099694252014,
0.002329861745238304,
-0.038081005215644836,
-0.046800367534160614,
0.009853055700659752,
0.048151254653930664,
-0.008389933034777641,
0.11599385738372803,
-0.13399949669837952,
-0.04993325471878052,
0.18046389520168304,
-0.15996035933494568,
-0.0034155920147895813,
0.10045231878757477,
-0.047266073524951935,
0.05402885749936104,
0.13277728855609894,
0.0953083485364914,
0.08804810792207718,
-0.07725930213928223,
0.01268643420189619,
0.06211351603269577,
-0.06924065202474594,
0.055202171206474304,
0.09037339687347412,
-0.024592123925685883,
-0.13705886900424957,
0.029913075268268585,
-0.07688753306865692,
-0.00689435750246048,
-0.025369735434651375,
-0.020803621038794518,
0.008772684261202812,
-0.03767949715256691,
0.028703004121780396,
0.005293542519211769,
0.017550434917211533,
-0.03978823125362396,
-0.08110839128494263,
0.03383093327283859,
0.0753941759467125,
-0.07254375517368317,
0.044047750532627106,
-0.06974179297685623,
0.06335379928350449,
-0.07361549139022827,
-0.004708077758550644,
-0.1661132574081421,
-0.025817912071943283,
0.04396747425198555,
-0.04851150885224342,
0.04758460819721222,
0.08927136659622192,
0.0032994949724525213,
0.12384052574634552,
-0.041107937693595886,
0.003908650949597359,
-0.0060720425099134445,
-0.0103335902094841,
-0.04795129597187042,
-0.12233802676200867,
-0.08311186730861664,
-0.06805965304374695,
0.09918743371963501,
-0.07409873604774475,
0.02947814017534256,
-0.07286018133163452,
-0.023043900728225708,
-0.010964196175336838,
-0.057587116956710815,
-0.0036650588735938072,
0.011092782020568848,
-0.029392756521701813,
-0.04656264930963516,
0.050009168684482574,
0.05058514326810837,
-0.05913788452744484,
0.07915691286325455,
-0.10523047298192978,
-0.05984918400645256,
0.05452078580856323,
0.011999620124697685,
-0.08286599814891815,
0.08436787128448486,
-0.02003449574112892,
-0.013029083609580994,
-0.05595076084136963,
-0.04583314433693886,
0.19138649106025696,
-0.022929903119802475,
0.10095137357711792,
-0.09139308333396912,
0.0006510473904199898,
0.02741122990846634,
-0.0468195378780365,
-0.020031392574310303,
0.05679948627948761,
0.050411973148584366,
-0.186782106757164,
0.014933805912733078,
0.053166814148426056,
0.07553960382938385,
0.11286261677742004,
0.027590904384851456,
-0.023428231477737427,
-0.04571264237165451,
-0.010429305024445057,
0.006004675291478634,
0.05418981984257698,
-0.02553391084074974,
-0.0070680174976587296,
0.031025027856230736,
0.05871731787919998,
0.018724339082837105,
-0.07863231003284454,
0.03336593136191368,
0.06759446114301682,
-0.017221245914697647,
-0.038405641913414,
-0.02370532415807247,
-0.06074538826942444,
0.061742208898067474,
0.053633540868759155,
0.03721046820282936,
0.02705451101064682,
-0.013936636038124561,
-0.13686777651309967,
0.18982616066932678,
-0.11563105881214142,
-0.26016199588775635,
-0.10927404463291168,
-0.061652034521102905,
-0.027008017525076866,
0.04005436971783638,
0.05838359147310257,
-0.030270326882600784,
-0.04329569637775421,
-0.11684666574001312,
0.06274419277906418,
-0.06523194164037704,
-0.03088366985321045,
-0.011180141940712929,
-0.05360810458660126,
-0.01747962273657322,
-0.12750846147537231,
-0.011530373245477676,
-0.02942715957760811,
-0.07224920392036438,
0.0067857662215828896,
-0.03588208928704262,
0.0293135903775692,
0.1358410269021988,
0.03658668324351311,
-0.019961120560765266,
-0.018578436225652695,
0.1960591822862625,
0.012563308700919151,
0.05884250998497009,
0.1165926456451416,
-0.026068855077028275,
0.05493069440126419,
0.04358142614364624,
0.0250117015093565,
-0.048275820910930634,
0.013272080570459366,
-0.015173411928117275,
-0.12123778462409973,
-0.17187446355819702,
-0.07134241610765457,
-0.004716134630143642,
0.00518037611618638,
0.017145119607448578,
0.035890258848667145,
0.023081034421920776,
0.039498601108789444,
-0.031003912910819054,
0.027581864967942238,
-0.012064680457115173,
0.0818919986486435,
0.03191881626844406,
-0.07389374822378159,
0.09286699444055557,
-0.059787821024656296,
0.017135709524154663,
0.11079209297895432,
-0.05997775122523308,
0.183652862906456,
0.023171287029981613,
0.05614230036735535,
0.10376536846160889,
0.019723284989595413,
0.05324464291334152,
0.08885842561721802,
-0.04892478138208389,
0.004599764943122864,
-0.061086490750312805,
-0.05263448506593704,
-0.0348350815474987,
0.0461038276553154,
0.02955460362136364,
0.017271436750888824,
-0.11998023837804794,
0.019501063972711563,
-0.0012118697632104158,
0.13775843381881714,
0.04774163290858269,
-0.1209716796875,
-0.12294900417327881,
0.03360944986343384,
-0.04413428157567978,
-0.06172353774309158,
0.0283998791128397,
0.0577046163380146,
-0.15331628918647766,
0.04799281805753708,
-0.00555438082665205,
0.06503123790025711,
-0.08913365006446838,
0.015590468421578407,
-0.03987602889537811,
0.002104848623275757,
0.004762421362102032,
0.06966397166252136,
-0.13638046383857727,
0.10269153863191605,
0.02057758904993534,
0.04865550994873047,
-0.07808122038841248,
0.01600014418363571,
-0.010718165896832943,
0.1100943312048912,
0.1139756292104721,
0.04327676445245743,
-0.047685783356428146,
-0.020661484450101852,
-0.0457674078643322,
0.01908857747912407,
0.0593889057636261,
-0.07565908879041672,
0.06046927720308304,
0.007567767519503832,
0.007816333323717117,
-0.022245116531848907,
0.020563840866088867,
-0.13389532268047333,
-0.12305998802185059,
0.06161489710211754,
-0.07741671055555344,
-0.10045488178730011,
-0.05625198781490326,
-0.06456442922353745,
-0.05400500446557999,
0.21247562766075134,
-0.11272644996643066,
-0.09069925546646118,
-0.09917443990707397,
-0.016029108315706253,
0.045498333871364594,
-0.06474348902702332,
0.04634253680706024,
-0.03953677788376808,
0.09139453619718552,
-0.04712296277284622,
-0.1107141375541687,
0.03435235470533371,
-0.11491109430789948,
-0.11464297026395798,
-0.04409702122211456,
0.10659573972225189,
0.11441158503293991,
0.03778771311044693,
0.01295745000243187,
0.009378818795084953,
0.0014371220022439957,
-0.11920621991157532,
0.0134599469602108,
0.13060060143470764,
0.0015259217470884323,
0.0713963434100151,
-0.061371274292469025,
0.027762584388256073,
-0.016064366325736046,
0.00046800635755062103,
0.1329345852136612,
0.18753501772880554,
-0.0628543496131897,
0.1745619773864746,
0.20206782221794128,
-0.10412086546421051,
-0.1903441846370697,
-0.05567866563796997,
-0.00006629247218370438,
0.04436280578374863,
0.051665470004081726,
-0.18376009166240692,
0.08981133997440338,
0.03528127446770668,
-0.03189609944820404,
0.017315536737442017,
-0.23420462012290955,
-0.11098191142082214,
0.08784841001033783,
0.059163592755794525,
0.18482667207717896,
-0.0828687846660614,
-0.03789152950048447,
-0.016704296693205833,
-0.03846848011016846,
0.04960795119404793,
-0.0356910303235054,
0.09142056107521057,
0.005636507645249367,
-0.03293917328119278,
0.0017555039376020432,
-0.03167048469185829,
0.09740802645683289,
0.038008108735084534,
0.022404339164495468,
-0.07171173393726349,
-0.0018132813274860382,
0.11615423858165741,
-0.038283269852399826,
0.0997333899140358,
0.040469519793987274,
0.07487408816814423,
-0.09781306982040405,
-0.060906630009412766,
-0.07670516520738602,
0.04385710135102272,
-0.04269576445221901,
-0.05539785325527191,
-0.06396885216236115,
0.059680212289094925,
0.03731442987918854,
0.01015961542725563,
-0.0006961226463317871,
-0.03867650777101517,
0.04544030874967575,
0.08726857602596283,
0.08362307399511337,
-0.030399229377508163,
-0.07303774356842041,
-0.052647314965724945,
-0.04870683699846268,
0.06617957353591919,
-0.0893900915980339,
0.0186960119754076,
0.026375090703368187,
0.009259958751499653,
0.08909881114959717,
0.0332280732691288,
-0.13789880275726318,
0.01125027984380722,
0.03512430563569069,
-0.1251266598701477,
-0.10391377657651901,
-0.01939883828163147,
0.028234507888555527,
-0.03942311182618141,
0.05633503198623657,
0.14572183787822723,
-0.034157730638980865,
-0.032001737505197525,
-0.04712270200252533,
0.0381089486181736,
-0.02083457075059414,
0.051684506237506866,
0.06452368944883347,
0.030669737607240677,
-0.07209451496601105,
0.07225153595209122,
0.03539751470088959,
-0.03348881006240845,
0.04246098920702934,
0.04231688752770424,
-0.09430648386478424,
-0.07901475578546524,
-0.05893627554178238,
0.08499657362699509,
-0.023178866133093834,
-0.04783276468515396,
0.00021434389054775238,
-0.08283787965774536,
0.06818373501300812,
0.07364088296890259,
0.04824453964829445,
0.03749499097466469,
-0.08656693249940872,
0.015306947752833366,
-0.05359657108783722,
0.03308631107211113,
-0.031625933945178986,
-0.005356099456548691,
-0.05819356441497803,
0.06478153169155121,
0.06444794684648514,
0.0980507954955101,
-0.03470544517040253,
-0.07556606829166412,
-0.08473063260316849,
-0.01384820882230997,
-0.06564675271511078,
-0.03410055488348007,
-0.0777655839920044,
-0.005409106612205505,
0.0006792913191020489,
-0.002083258703351021,
0.022490017116069794,
0.036688096821308136,
-0.041736796498298645,
-0.018526142463088036,
-0.03547772392630577,
0.03694926202297211,
-0.06333298236131668,
0.006923346780240536,
0.014260653406381607,
-0.03632976487278938,
0.09356750547885895,
0.03584253787994385,
-0.011707911267876625,
0.047262270003557205,
-0.026884138584136963,
0.035292018204927444,
-0.019119910895824432,
-0.0005168719217181206,
-0.02469441294670105,
-0.10915202647447586,
-0.005249311216175556,
0.005968991667032242,
-0.026102324947714806,
0.012040490284562111,
0.05834928900003433,
-0.07204720377922058,
0.08699092268943787,
0.045971132814884186,
-0.02962011843919754,
-0.06971779465675354,
0.04032639414072037,
-0.013572754338383675,
0.030185243114829063,
0.07150113582611084,
-0.03566998615860939,
0.05268004536628723,
-0.09809109568595886,
-0.029174882918596268,
0.003277712734416127,
-0.004239268600940704,
-0.012372639030218124,
-0.051632314920425415,
-0.004276816733181477,
0.008420753292739391,
0.17642702162265778,
-0.02345290780067444,
0.03880961239337921,
0.013460916467010975,
0.007955757901072502,
0.053073957562446594,
-0.013741225004196167,
0.07182499766349792,
-0.00846081878989935,
-0.024939196184277534,
-0.012598413974046707,
0.03893931210041046,
0.004905017092823982,
0.002958841621875763,
0.14047881960868835,
0.04562756046652794,
0.08485458046197891,
0.07522396743297577,
0.016483407467603683,
0.017488818615674973,
-0.13075467944145203,
-0.09289362281560898,
0.007345008663833141,
0.05735614150762558,
-0.018545527011156082,
0.014991912990808487,
0.09249421954154968,
-0.08878899365663528,
0.06936849653720856,
0.048353828489780426,
-0.0489053875207901,
-0.1261022388935089,
-0.19529841840267181,
-0.02260274812579155,
-0.030571790412068367,
-0.011604994535446167,
-0.09156377613544464,
0.014044139534235,
0.08992104232311249,
0.025753116235136986,
-0.010307285003364086,
0.093299999833107,
-0.10081478953361511,
-0.03109685704112053,
0.04331947863101959,
-0.02844376303255558,
0.015294300392270088,
0.050611164420843124,
0.02324780263006687,
-0.005493020638823509,
0.04340029135346413,
0.041963737457990646,
0.043573688715696335,
0.026648297905921936,
0.0520196259021759,
-0.025730863213539124,
-0.07528923451900482,
-0.03262362256646156,
-0.0023266603238880634,
0.0541774220764637,
0.1362057477235794,
0.02338697388768196,
-0.06932903081178665,
0.006851485930383205,
0.10771174728870392,
-0.0292695090174675,
-0.04969559982419014,
-0.10967060923576355,
0.2419445514678955,
0.023248525336384773,
0.0030847489833831787,
-0.0049309092573821545,
-0.045300714671611786,
0.003648044541478157,
0.21099834144115448,
0.2253672480583191,
0.0019051263807341456,
-0.008815567940473557,
0.01010616309940815,
-0.011061710305511951,
0.0375368706882,
0.1479218304157257,
0.006269784644246101,
0.2520432770252228,
-0.04652472585439682,
0.03774324804544449,
-0.04316755384206772,
-0.03810953348875046,
-0.09952174127101898,
0.0683656632900238,
-0.007247487548738718,
0.008333273231983185,
-0.030110834166407585,
0.07326196134090424,
-0.04094669967889786,
-0.17167635262012482,
-0.0021533388644456863,
0.001090307137928903,
-0.06164469197392464,
0.01000126451253891,
-0.00518738804385066,
0.020749172195792198,
0.08317733556032181,
-0.016675729304552078,
-0.006368674337863922,
0.13016557693481445,
0.018400046974420547,
-0.09725897014141083,
-0.059476837515830994,
0.11754509806632996,
0.02059527114033699,
0.1410730481147766,
0.010908794589340687,
0.07927808165550232,
0.087201789021492,
0.0212225541472435,
-0.09382276237010956,
0.04346327856183052,
-0.018989745527505875,
-0.029328059405088425,
0.007682742550969124,
0.10607074201107025,
-0.007787385489791632,
0.05894112586975098,
0.0264262817800045,
-0.09020251035690308,
0.0628834143280983,
0.009221438318490982,
-0.03451570123434067,
-0.08108492195606232,
0.08688772469758987,
-0.09161078929901123,
0.15664972364902496,
0.12380421161651611,
-0.015214688144624233,
-0.04734478145837784,
-0.02963215298950672,
0.020555060356855392,
-0.0015177843160927296,
0.05440541356801987,
-0.026732753962278366,
-0.13508185744285583,
0.019793802872300148,
-0.08791008591651917,
0.02639203518629074,
-0.24819549918174744,
-0.08953733742237091,
0.03139115124940872,
-0.016696317121386528,
-0.01921561360359192,
0.05124799907207489,
0.043069351464509964,
0.027332868427038193,
-0.0357188954949379,
0.024134956300258636,
-0.039991892874240875,
0.059783414006233215,
-0.11203160881996155,
-0.09276404976844788
] |
null | null | transformers | # MultiBERTs Seed 3 Checkpoint 1500k (uncased)
Seed 3 intermediate checkpoint 1500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multiberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they
were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, they
were pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-1500k')
model = BertModel.from_pretrained("multiberts-seed-3-1500k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
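Beyond extracting features, the raw model can also be probed through its MLM objective with the fill-mask pipeline. This is an illustrative sketch only; whether this intermediate checkpoint ships usable pretraining-head weights is not guaranteed:
```python
from transformers import pipeline

unmasker = pipeline('fill-mask', model='multiberts-seed-3-1500k')
unmasker("Hello, I'm a [MASK] model.")
```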
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-3"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-3-1500k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 3 Checkpoint 1500k (uncased)
Seed 3 intermediate checkpoint 1500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they
were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, they
were pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
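The snippet shown in the full model card above is reproduced here, since this plain-text rendering drops code blocks:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-1500k')
model = BertModel.from_pretrained("multiberts-seed-3-1500k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```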
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitations and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 3 Checkpoint 1500k (uncased)\nSeed 3 intermediate checkpoint 1500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 3 Checkpoint 1500k (uncased)\nSeed 3 intermediate checkpoint 1500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 3 Checkpoint 1500k (uncased)\nSeed 3 intermediate checkpoint 1500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.0846174880862236,
-0.0024095631670206785,
-0.002283118898048997,
0.07050732523202896,
0.08901354670524597,
0.002814446110278368,
0.11464057862758636,
0.049581293016672134,
-0.03612197935581207,
0.021717317402362823,
0.09336735308170319,
0.02951933816075325,
0.04347660765051842,
0.06369222700595856,
0.095078244805336,
-0.25354671478271484,
0.05070720613002777,
-0.06551338732242584,
0.04857159033417702,
0.07534445822238922,
0.10017655789852142,
-0.07190375030040741,
0.06267841160297394,
0.035139553248882294,
-0.08233741670846939,
-0.015643075108528137,
-0.018078377470374107,
-0.03397110104560852,
0.09916643798351288,
0.07250949740409851,
0.06055732071399689,
0.0023412201553583145,
0.057800643146038055,
-0.09173929691314697,
0.016895856708288193,
0.043548956513404846,
-0.0011264635249972343,
0.024308159947395325,
-0.007242551073431969,
0.01733810268342495,
0.10264654457569122,
0.04127950593829155,
0.07730095088481903,
0.03552772477269173,
-0.09554152190685272,
-0.10413677990436554,
-0.08118029683828354,
0.10980267077684402,
0.053962670266628265,
0.03961370140314102,
-0.003655882552266121,
0.06908031553030014,
-0.03262883052229881,
0.07528998702764511,
0.1100279837846756,
-0.2466220259666443,
-0.01188002247363329,
0.07391893118619919,
0.043846264481544495,
0.045693278312683105,
0.015360048040747643,
0.027847956866025925,
0.005788084119558334,
0.045382238924503326,
0.02932770550251007,
-0.02390259876847267,
0.11913891136646271,
-0.04339905455708504,
-0.15190353989601135,
-0.0435318797826767,
0.1181904524564743,
-0.007779952138662338,
-0.1249634400010109,
-0.09995972365140915,
-0.027485597878694534,
0.11363382637500763,
-0.0034971116110682487,
-0.020663592964410782,
-0.004902794491499662,
0.0099596306681633,
0.027891525998711586,
-0.09595605731010437,
-0.08393074572086334,
-0.030464988201856613,
-0.03392212092876434,
0.13105325400829315,
0.04629943519830704,
0.052526358515024185,
-0.035268668085336685,
0.08502362668514252,
-0.12029923498630524,
-0.03697605058550835,
-0.05350995808839798,
-0.08445186167955399,
-0.018202753737568855,
0.0065634953789412975,
-0.027320202440023422,
-0.08968296647071838,
-0.05856776610016823,
0.12053598463535309,
0.02970508486032486,
0.029612112790346146,
0.0006225388497114182,
0.040509361773729324,
0.07277052104473114,
0.09182137250900269,
-0.04400048032402992,
0.05208243429660797,
0.03401349484920502,
-0.017946895211935043,
0.05549042299389839,
-0.050344932824373245,
-0.10057193040847778,
0.07688511162996292,
-0.0022029289975762367,
0.04203108698129654,
0.025897806510329247,
0.03367239236831665,
-0.0104500912129879,
-0.0735834538936615,
0.1696765422821045,
-0.07496307790279388,
-0.012642373330891132,
-0.019220100715756416,
0.011445287615060806,
0.04158269613981247,
0.03184270113706589,
-0.004589251708239317,
-0.0480828694999218,
-0.0007349215447902679,
-0.05424550920724869,
-0.025842122733592987,
-0.055014509707689285,
-0.12018487602472305,
-0.0008284910582005978,
-0.04082751274108887,
-0.03204180300235748,
-0.14248071610927582,
-0.22059646248817444,
-0.018579406663775444,
0.0646897554397583,
-0.004056272096931934,
-0.011706682853400707,
0.025781873613595963,
0.014940360561013222,
-0.021709296852350235,
0.010145684704184532,
-0.045951470732688904,
0.00008565466850996017,
-0.00795632041990757,
-0.030151214450597763,
0.05598293989896774,
-0.03999637812376022,
0.0240484606474638,
-0.06679488718509674,
0.02152712270617485,
-0.21216371655464172,
0.0909571498632431,
-0.03500670939683914,
0.0057985130697488785,
-0.03716084361076355,
-0.04397297650575638,
0.007064038887619972,
0.04780692234635353,
-0.0057166218757629395,
0.11895264685153961,
-0.1369764506816864,
-0.048917703330516815,
0.17621028423309326,
-0.15968018770217896,
-0.005799137055873871,
0.09960993379354477,
-0.04705417528748512,
0.052967824041843414,
0.1310097575187683,
0.09550991654396057,
0.08967448770999908,
-0.0745491310954094,
0.014289106242358685,
0.061264943331480026,
-0.07040542364120483,
0.05126158893108368,
0.09020106494426727,
-0.024768352508544922,
-0.14056362211704254,
0.030044883489608765,
-0.0769498348236084,
-0.005568069405853748,
-0.02677716501057148,
-0.02228735014796257,
0.00869036465883255,
-0.041422054171562195,
0.02591380849480629,
0.0043102893978357315,
0.01794094406068325,
-0.039797745645046234,
-0.07839437574148178,
0.032284244894981384,
0.07386212050914764,
-0.06848874688148499,
0.04312437027692795,
-0.06890471279621124,
0.06317976117134094,
-0.07668658345937729,
-0.0046955253928899765,
-0.16710782051086426,
-0.01825736276805401,
0.04317463934421539,
-0.048128899186849594,
0.04807419329881668,
0.08677609264850616,
0.004190795589238405,
0.1218867227435112,
-0.041931748390197754,
0.004238007124513388,
-0.005686268210411072,
-0.010940296575427055,
-0.04884935915470123,
-0.11928202211856842,
-0.07951930165290833,
-0.06734436005353928,
0.08778577297925949,
-0.0692640170454979,
0.030781134963035583,
-0.0739520937204361,
-0.02460218407213688,
-0.009819965809583664,
-0.058428436517715454,
-0.0033468659967184067,
0.012488429434597492,
-0.028153883293271065,
-0.04566334933042526,
0.0492868646979332,
0.05002214387059212,
-0.0552535355091095,
0.07611330598592758,
-0.09909682720899582,
-0.057915154844522476,
0.05311204493045807,
0.01684114895761013,
-0.0829208716750145,
0.08696579933166504,
-0.019516978412866592,
-0.011479047127068043,
-0.059120699763298035,
-0.04465848580002785,
0.19016386568546295,
-0.02099062129855156,
0.1009010523557663,
-0.09321512281894684,
0.002530388068407774,
0.02921183966100216,
-0.044712938368320465,
-0.020232345908880234,
0.05778602883219719,
0.055724624544382095,
-0.18022216856479645,
0.014116425067186356,
0.047052450478076935,
0.07674416899681091,
0.11018718779087067,
0.029723208397626877,
-0.02121698297560215,
-0.046244971454143524,
-0.010747111402451992,
0.006819674279540777,
0.05463112145662308,
-0.02715688943862915,
-0.006760363932698965,
0.029455848038196564,
0.059031859040260315,
0.019502922892570496,
-0.07834096252918243,
0.03237781301140785,
0.06943638622760773,
-0.016984492540359497,
-0.04137859120965004,
-0.02364167757332325,
-0.06008056178689003,
0.06199412792921066,
0.053389787673950195,
0.03767639398574829,
0.025388168171048164,
-0.015005894936621189,
-0.13614949584007263,
0.18781223893165588,
-0.1149689257144928,
-0.26508545875549316,
-0.11036446690559387,
-0.05602392554283142,
-0.02850056253373623,
0.03945603221654892,
0.058273546397686005,
-0.03183847665786743,
-0.043971531093120575,
-0.11636069416999817,
0.06553883850574493,
-0.0643431544303894,
-0.030459968373179436,
-0.00748501718044281,
-0.05229441821575165,
-0.0181865431368351,
-0.12770622968673706,
-0.011062974110245705,
-0.030513156205415726,
-0.07174938917160034,
0.005322683602571487,
-0.03332149237394333,
0.03137572482228279,
0.13400328159332275,
0.03423010930418968,
-0.019027380272746086,
-0.01847725361585617,
0.19993457198143005,
0.013047982007265091,
0.05774863809347153,
0.11466865986585617,
-0.026030004024505615,
0.054142843931913376,
0.05003862828016281,
0.02629835531115532,
-0.04864829033613205,
0.014310997910797596,
-0.013761377893388271,
-0.12124412506818771,
-0.17191840708255768,
-0.06987535953521729,
-0.005024563521146774,
0.00639109592884779,
0.017662610858678818,
0.03503255546092987,
0.024435341358184814,
0.03989749774336815,
-0.029705340042710304,
0.027161456644535065,
-0.0123494453728199,
0.07952426373958588,
0.02606908231973648,
-0.07388615608215332,
0.09296391904354095,
-0.05824774503707886,
0.01833229884505272,
0.11029190570116043,
-0.060233235359191895,
0.18530283868312836,
0.02246427908539772,
0.055766962468624115,
0.10357049107551575,
0.01830434799194336,
0.0542912557721138,
0.09133221209049225,
-0.050324663519859314,
0.004238433204591274,
-0.0593445748090744,
-0.05082973092794418,
-0.037003230303525925,
0.04497527331113815,
0.030445817857980728,
0.02089650183916092,
-0.12087715417146683,
0.02605476789176464,
0.0003009561914950609,
0.1402168869972229,
0.04187529906630516,
-0.1150604709982872,
-0.12137565016746521,
0.03327924758195877,
-0.04472032189369202,
-0.06132439523935318,
0.030278213322162628,
0.06242883577942848,
-0.15148955583572388,
0.0439690537750721,
-0.006535970605909824,
0.06685957312583923,
-0.08611033856868744,
0.015866512432694435,
-0.04078162461519241,
0.002286669798195362,
0.003357912879437208,
0.06741716712713242,
-0.14256542921066284,
0.10427431017160416,
0.020970070734620094,
0.04971646890044212,
-0.07711531221866608,
0.016291063278913498,
-0.010464940220117569,
0.10930448025465012,
0.11727380752563477,
0.04465789720416069,
-0.04521496221423149,
-0.022925682365894318,
-0.04648208990693092,
0.018172219395637512,
0.05746488273143768,
-0.07626495510339737,
0.058344922959804535,
0.00872622150927782,
0.007792545948177576,
-0.022777846083045006,
0.022508613765239716,
-0.1369430124759674,
-0.12313637137413025,
0.062155988067388535,
-0.08223509043455124,
-0.1030663475394249,
-0.056942954659461975,
-0.06373806297779083,
-0.055177442729473114,
0.20560598373413086,
-0.11471950262784958,
-0.09089554846286774,
-0.10003583878278732,
-0.015065427869558334,
0.04512668773531914,
-0.06586155295372009,
0.0469452328979969,
-0.0395236611366272,
0.08992740511894226,
-0.050590045750141144,
-0.10936014354228973,
0.03204278275370598,
-0.11664106696844101,
-0.11226728558540344,
-0.04319728910923004,
0.10356352478265762,
0.11451680213212967,
0.038221731781959534,
0.00985964760184288,
0.009115220978856087,
0.004168838262557983,
-0.11887340992689133,
0.013323282822966576,
0.12944546341896057,
-0.004852553829550743,
0.07318324595689774,
-0.057812318205833435,
0.02593240514397621,
-0.017001071944832802,
-0.0010624527931213379,
0.13192804157733917,
0.18641763925552368,
-0.06219318509101868,
0.17462027072906494,
0.20138047635555267,
-0.10509196668863297,
-0.19436025619506836,
-0.05170387774705887,
0.0005062157288193703,
0.04505741596221924,
0.04960131645202637,
-0.18227650225162506,
0.08927014470100403,
0.03443671017885208,
-0.03099934570491314,
0.011322442442178726,
-0.23681548237800598,
-0.11330468952655792,
0.08719557523727417,
0.05963096767663956,
0.18608856201171875,
-0.08003749698400497,
-0.038558438420295715,
-0.013871999457478523,
-0.040648214519023895,
0.048782024532556534,
-0.028055807575583458,
0.08874829113483429,
0.0045023467391729355,
-0.0349472314119339,
0.0030260607600212097,
-0.03274490311741829,
0.09499116241931915,
0.038751788437366486,
0.024953946471214294,
-0.07080289721488953,
-0.008615698665380478,
0.1067291796207428,
-0.03990506753325462,
0.09872682392597198,
0.04134587198495865,
0.07636097073554993,
-0.09374956041574478,
-0.05978649854660034,
-0.07659821957349777,
0.04262273758649826,
-0.04328909516334534,
-0.055235542356967926,
-0.06337065249681473,
0.062308840453624725,
0.03801735118031502,
0.010339363478124142,
0.002962745726108551,
-0.03824346885085106,
0.04477506875991821,
0.0906262993812561,
0.08483131229877472,
-0.04097719490528107,
-0.0706552267074585,
-0.05012677609920502,
-0.047761015594005585,
0.06627686321735382,
-0.09117195010185242,
0.020313043147325516,
0.02870858646929264,
0.010250517167150974,
0.09052465856075287,
0.03365708515048027,
-0.13545991480350494,
0.00943788792937994,
0.03589402139186859,
-0.12263175845146179,
-0.10843164473772049,
-0.02269763872027397,
0.02794497087597847,
-0.04016923904418945,
0.052216943353414536,
0.1445651650428772,
-0.03700083866715431,
-0.031778790056705475,
-0.048593588173389435,
0.03945140540599823,
-0.019421551376581192,
0.05403976887464523,
0.06404690444469452,
0.03196849673986435,
-0.07300133258104324,
0.07578183710575104,
0.03495208919048309,
-0.032951392233371735,
0.040445998311042786,
0.042962491512298584,
-0.09362348914146423,
-0.0780409425497055,
-0.05638197064399719,
0.08718915283679962,
-0.025043318048119545,
-0.04500138759613037,
-0.0010380242019891739,
-0.08325707912445068,
0.06947318464517593,
0.07952351868152618,
0.04720371216535568,
0.033529024571180344,
-0.08563249558210373,
0.015969224274158478,
-0.0525476336479187,
0.03727055341005325,
-0.031292401254177094,
-0.006124045699834824,
-0.05676286667585373,
0.06544140726327896,
0.06297452002763748,
0.09634878486394882,
-0.03365304321050644,
-0.07337914407253265,
-0.08592062443494797,
-0.01256779208779335,
-0.06871312856674194,
-0.03389560058712959,
-0.07833920419216156,
-0.004066402092576027,
0.0006995592266321182,
-0.002847520634531975,
0.021077198907732964,
0.03682779520750046,
-0.04287538677453995,
-0.017961731180548668,
-0.035311009734869,
0.03638513386249542,
-0.06126420199871063,
0.007088528946042061,
0.014848473481833935,
-0.03422476723790169,
0.091794952750206,
0.03247866779565811,
-0.012670407071709633,
0.04649040102958679,
-0.02668435126543045,
0.03397528454661369,
-0.021426092833280563,
0.0010574001353234053,
-0.02316666580736637,
-0.10921302437782288,
-0.002653696108609438,
0.005931917577981949,
-0.027556486427783966,
0.012405824847519398,
0.055184684693813324,
-0.07205232977867126,
0.08887262642383575,
0.0466843880712986,
-0.026595141738653183,
-0.07034028321504593,
0.041789229959249496,
-0.010956412181258202,
0.030225276947021484,
0.07066665589809418,
-0.03441191092133522,
0.05291741341352463,
-0.09783235937356949,
-0.028175700455904007,
0.0037977672182023525,
-0.004354208707809448,
-0.019495481625199318,
-0.05096381902694702,
-0.004106086678802967,
0.008099667727947235,
0.17404380440711975,
-0.019935928285121918,
0.03685417026281357,
0.014619747176766396,
0.011198887601494789,
0.04939639940857887,
-0.014014115557074547,
0.06724964082241058,
-0.007441006600856781,
-0.026615304872393608,
-0.0125232869759202,
0.0402623750269413,
0.005727196112275124,
-0.000621279701590538,
0.14488324522972107,
0.04576374217867851,
0.08801661431789398,
0.07406296581029892,
0.01832200586795807,
0.016391659155488014,
-0.12466453015804291,
-0.09163407236337662,
0.005321074277162552,
0.056078482419252396,
-0.01949220336973667,
0.007728129625320435,
0.09187567979097366,
-0.0893462747335434,
0.07236715406179428,
0.04851335659623146,
-0.048291947692632675,
-0.1268765926361084,
-0.19314005970954895,
-0.021653251722455025,
-0.030907990410923958,
-0.01042176317423582,
-0.09372670948505402,
0.01319174189120531,
0.09127674251794815,
0.02564680017530918,
-0.00960628129541874,
0.09081165492534637,
-0.1044268012046814,
-0.027466263622045517,
0.04362129792571068,
-0.02850440703332424,
0.01703808084130287,
0.048177704215049744,
0.02355116978287697,
-0.00543847493827343,
0.04432328790426254,
0.041376419365406036,
0.0433524064719677,
0.02326859161257744,
0.05049005523324013,
-0.02402752824127674,
-0.07450481504201889,
-0.031896334141492844,
-0.005874698515981436,
0.052044861018657684,
0.14161436259746552,
0.023608660325407982,
-0.0669010728597641,
0.008027363568544388,
0.11110540479421616,
-0.029194563627243042,
-0.05012855306267738,
-0.11199464648962021,
0.23235589265823364,
0.02557370811700821,
0.007124757394194603,
-0.004036473575979471,
-0.04605156183242798,
0.0008369497954845428,
0.21599310636520386,
0.22560952603816986,
0.002873282879590988,
-0.009448126889765263,
0.010906105861067772,
-0.010952688753604889,
0.037543680518865585,
0.14509841799736023,
0.006537940353155136,
0.24755553901195526,
-0.046542808413505554,
0.041238635778427124,
-0.04071321338415146,
-0.040391720831394196,
-0.09300804138183594,
0.06671150028705597,
-0.00662339897826314,
0.009660931304097176,
-0.03061686083674431,
0.07191866636276245,
-0.04475610703229904,
-0.1696026623249054,
-0.002048739232122898,
-0.0019081817008554935,
-0.060655102133750916,
0.008054094389081001,
-0.005605078302323818,
0.02012036368250847,
0.08156345039606094,
-0.015006483532488346,
-0.006167496554553509,
0.12869635224342346,
0.017664458602666855,
-0.095693439245224,
-0.06650973856449127,
0.11731049418449402,
0.025778574869036674,
0.14084850251674652,
0.012099271640181541,
0.07390932738780975,
0.08733208477497101,
0.021401282399892807,
-0.09651423990726471,
0.03975875303149223,
-0.019627153873443604,
-0.027690228074789047,
0.004893075209110975,
0.1086297482252121,
-0.0073716905899345875,
0.06435004621744156,
0.025541409850120544,
-0.09061920642852783,
0.060791417956352234,
0.008151408284902573,
-0.033585164695978165,
-0.08152224868535995,
0.08820690214633942,
-0.09029895812273026,
0.15536974370479584,
0.1249908059835434,
-0.013942043296992779,
-0.045439645648002625,
-0.02996264584362507,
0.02073788084089756,
-0.0030860137194395065,
0.062324654310941696,
-0.026602763682603836,
-0.13680985569953918,
0.020922785624861717,
-0.0904199481010437,
0.025734558701515198,
-0.24867117404937744,
-0.08975578844547272,
0.030706971883773804,
-0.017074603587388992,
-0.01771073415875435,
0.05267654359340668,
0.04927056282758713,
0.028814727440476418,
-0.034674398601055145,
0.022782286629080772,
-0.04015881195664406,
0.06012488901615143,
-0.1072574108839035,
-0.0911354348063469
] |
null | null | transformers | # MultiBERTs Seed 3 Checkpoint 1600k (uncased)
Seed 3 intermediate checkpoint 1600k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
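The next-sentence-prediction use mentioned above can also be exercised directly. The snippet below is only a sketch: it assumes the NSP head stored in this pretraining checkpoint loads into `BertForNextSentencePrediction` (loading may warn about the unused MLM head), and the two sentences are placeholders.
```python
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-1600k')
model = BertForNextSentencePrediction.from_pretrained('multiberts-seed-3-1600k')

prompt = "The man went to the store."
next_sentence = "He bought a gallon of milk."
encoding = tokenizer(prompt, next_sentence, return_tensors='pt')

with torch.no_grad():
    logits = model(**encoding).logits
# Index 0 of the logits corresponds to "sentence B follows sentence A".
print(torch.softmax(logits, dim=-1)[0, 0].item())
```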
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# Load the tokenizer and the encoder weights for this intermediate checkpoint
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-1600k')
model = BertModel.from_pretrained("multiberts-seed-3-1600k")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
# output.last_hidden_state holds the token-level features (batch, seq_len, hidden_size)
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
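For convenience, a minimal sketch of such a probe is shown below. It assumes the MLM head in this pretraining checkpoint can be loaded by the `fill-mask` pipeline (loading may warn about unused NSP weights); the prompts are illustrative.
```python
from transformers import pipeline

unmasker = pipeline('fill-mask', model='multiberts-seed-3-1600k')
# Compare the top predictions for paired prompts to get a feel for biased completions.
print(unmasker("The man worked as a [MASK]."))
print(unmasker("The woman worked as a [MASK]."))
```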
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
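As a concrete illustration, the tokenizer from the snippet above can encode a sentence pair into exactly this layout. This is a sketch rather than the original data pipeline, and the sentences are placeholders:
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-1600k')

# Pairs are truncated to the 512-token budget used during pretraining.
encoded = tokenizer("Sentence A goes here.", "Sentence B goes here.",
                    truncation=True, max_length=512)
print(tokenizer.decode(encoded["input_ids"]))
# [CLS] sentence a goes here. [SEP] sentence b goes here. [SEP]
```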
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a minimal code sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
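A minimal sketch of that 80/10/10 rule is given below. It is not the original pretraining data pipeline: the helper name is made up for illustration, it works in place on a plain Python list of token ids, and it does not enforce that the random replacement differs from the original token.
```python
import random
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-1600k')

def mask_tokens(input_ids, mlm_probability=0.15):
    """Apply the 15% / 80-10-10 masking rule to a list of token ids."""
    labels = list(input_ids)                  # original ids, used as MLM targets
    special_ids = set(tokenizer.all_special_ids)
    for i, token_id in enumerate(input_ids):
        if token_id in special_ids or random.random() >= mlm_probability:
            labels[i] = -100                  # position is ignored by the MLM loss
            continue
        roll = random.random()
        if roll < 0.8:                        # 80%: replace with [MASK]
            input_ids[i] = tokenizer.mask_token_id
        elif roll < 0.9:                      # 10%: replace with a random vocabulary token
            input_ids[i] = random.randrange(tokenizer.vocab_size)
        # remaining 10%: keep the original token
    return input_ids, labels

ids = tokenizer("The quick brown fox jumps over the lazy dog.")["input_ids"]
masked_ids, labels = mask_tokens(ids)
```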
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
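The schedule above can be approximated with standard PyTorch/Transformers utilities. The sketch below is not the original TPU training code; it stands in `torch.optim.AdamW` for "Adam with weight decay" and uses the library's linear warmup/decay helper.
```python
import torch
from transformers import BertForPreTraining, get_linear_schedule_with_warmup

model = BertForPreTraining.from_pretrained('multiberts-seed-3-1600k')

# Hyperparameters quoted from the paragraph above.
optimizer = torch.optim.AdamW(model.parameters(),
                              lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(optimizer,
                                            num_warmup_steps=10_000,
                                            num_training_steps=2_000_000)

# In a training loop: loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
```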
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-3"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-3-1600k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 3 Checkpoint 1600k (uncased)
Seed 3 intermediate checkpoint 1600k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 3 Checkpoint 1600k (uncased)\nSeed 3 intermediate checkpoint 1600k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 3 Checkpoint 1600k (uncased)\nSeed 3 intermediate checkpoint 1600k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 3 Checkpoint 1600k (uncased)\nSeed 3 intermediate checkpoint 1600k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08387549966573715,
-0.008683320134878159,
-0.0022045422811061144,
0.07211405783891678,
0.08787048608064651,
0.00272859213873744,
0.11074896156787872,
0.049066971987485886,
-0.03763047233223915,
0.021333375945687294,
0.09259456396102905,
0.0297698974609375,
0.0438605397939682,
0.06404896825551987,
0.09685231745243073,
-0.2514588534832001,
0.04848325997591019,
-0.06644721329212189,
0.05301320552825928,
0.07581570744514465,
0.10010048747062683,
-0.07216879725456238,
0.0638204962015152,
0.03531162440776825,
-0.08645160496234894,
-0.012926232069730759,
-0.01701273024082184,
-0.03573194146156311,
0.1009356826543808,
0.06920713186264038,
0.06306266784667969,
0.003679923713207245,
0.06002219393849373,
-0.08523723483085632,
0.015979696065187454,
0.044709041714668274,
0.00005792919546365738,
0.022959599271416664,
-0.006724987179040909,
0.018846124410629272,
0.11043281108140945,
0.03903906047344208,
0.07806002348661423,
0.03497297689318657,
-0.09605168551206589,
-0.11163956671953201,
-0.08123224973678589,
0.10155688226222992,
0.05207984894514084,
0.041379403322935104,
-0.004826012067496777,
0.07028039544820786,
-0.030119221657514572,
0.07572005689144135,
0.107304647564888,
-0.24773159623146057,
-0.011469279415905476,
0.07164862006902695,
0.04384293034672737,
0.04997294396162033,
0.015914782881736755,
0.026399292051792145,
0.005731742829084396,
0.045821819454431534,
0.030489817261695862,
-0.024532020092010498,
0.11450846493244171,
-0.0429820753633976,
-0.15102031826972961,
-0.04325336217880249,
0.12241849303245544,
-0.009528197348117828,
-0.12549495697021484,
-0.09686198830604553,
-0.028943300247192383,
0.11267361044883728,
-0.002802877686917782,
-0.020459400489926338,
-0.004363176878541708,
0.010826967656612396,
0.023628558963537216,
-0.0945238471031189,
-0.08640147745609283,
-0.03072289749979973,
-0.037845149636268616,
0.12431897222995758,
0.046702928841114044,
0.05439206585288048,
-0.034389153122901917,
0.08505350351333618,
-0.11508940905332565,
-0.03778744116425514,
-0.05049458146095276,
-0.08537554740905762,
-0.019514143466949463,
0.008685117587447166,
-0.02557333931326866,
-0.08461418747901917,
-0.05948147550225258,
0.1144559308886528,
0.025231007486581802,
0.031923551112413406,
-0.00539808813482523,
0.04005087912082672,
0.07059678435325623,
0.09186522662639618,
-0.04063974693417549,
0.05199505016207695,
0.03414080664515495,
-0.022258076816797256,
0.0575062558054924,
-0.050133801996707916,
-0.09875138103961945,
0.07840269804000854,
-0.005622350610792637,
0.04061516374349594,
0.025716079398989677,
0.03518984094262123,
-0.011131062172353268,
-0.07157130539417267,
0.17022472620010376,
-0.07731613516807556,
-0.011923152953386307,
-0.018686681985855103,
0.01205604150891304,
0.04317948967218399,
0.03263099119067192,
-0.007759569212794304,
-0.04662644490599632,
-0.005588649772107601,
-0.05388585478067398,
-0.02880944311618805,
-0.05581095069646835,
-0.11995379626750946,
-0.0002523348666727543,
-0.037885650992393494,
-0.03325069323182106,
-0.14197781682014465,
-0.2196473330259323,
-0.016633369028568268,
0.06262828409671783,
-0.0036357506178319454,
-0.00893485639244318,
0.026119457557797432,
0.01448376290500164,
-0.022747285664081573,
0.01013646088540554,
-0.03941363841295242,
-0.0007278863340616226,
-0.007350698113441467,
-0.02745968848466873,
0.05732973664999008,
-0.03821477293968201,
0.022722439840435982,
-0.06835546344518661,
0.02123180218040943,
-0.21065041422843933,
0.08784043043851852,
-0.03621889278292656,
0.004283782094717026,
-0.036619432270526886,
-0.04529689997434616,
0.007427336648106575,
0.04883100092411041,
-0.007395320106297731,
0.11639945209026337,
-0.13506919145584106,
-0.050107426941394806,
0.17719653248786926,
-0.16104456782341003,
-0.0038609579205513,
0.10107724368572235,
-0.04665285348892212,
0.05408275127410889,
0.1308390349149704,
0.09802091121673584,
0.08119447529315948,
-0.07342860102653503,
0.015091522596776485,
0.060550298541784286,
-0.07204917073249817,
0.052130937576293945,
0.08868670463562012,
-0.02413308434188366,
-0.14007848501205444,
0.03134149685502052,
-0.0734480619430542,
-0.008544700220227242,
-0.02557653747498989,
-0.021722065284848213,
0.008040618151426315,
-0.03861291706562042,
0.0261024571955204,
0.0062000202015042305,
0.016262223944067955,
-0.043814122676849365,
-0.08127067983150482,
0.023737268522381783,
0.07304152846336365,
-0.06898721307516098,
0.04448842629790306,
-0.07016480714082718,
0.059327445924282074,
-0.07208341360092163,
-0.004825049079954624,
-0.165754497051239,
-0.02147318236529827,
0.044255174696445465,
-0.042340345680713654,
0.047609321773052216,
0.08902910351753235,
0.0030330948065966368,
0.1223529726266861,
-0.04125143587589264,
0.004835259169340134,
-0.005344970151782036,
-0.010256143286824226,
-0.046179041266441345,
-0.12130186706781387,
-0.07945738732814789,
-0.0681225061416626,
0.09182365983724594,
-0.06759922951459885,
0.029883278533816338,
-0.06925282627344131,
-0.026036210358142853,
-0.01058303564786911,
-0.058829233050346375,
-0.005306487902998924,
0.0114937424659729,
-0.027697984129190445,
-0.0465858057141304,
0.04877608269453049,
0.05071602016687393,
-0.05497661232948303,
0.07701826840639114,
-0.10212661325931549,
-0.06343168765306473,
0.056348107755184174,
0.012295177206397057,
-0.08395521342754364,
0.09315573424100876,
-0.020955070853233337,
-0.01245651114732027,
-0.05748732015490532,
-0.04607534408569336,
0.19015982747077942,
-0.02427617460489273,
0.09970755875110626,
-0.09156660735607147,
0.002468857914209366,
0.02960146963596344,
-0.04407956078648567,
-0.019747357815504074,
0.05915199965238571,
0.05163443461060524,
-0.1800565868616104,
0.013653092086315155,
0.05064141005277634,
0.07658331096172333,
0.10813891887664795,
0.02864321507513523,
-0.02169712260365486,
-0.04484143108129501,
-0.013033502735197544,
0.007473496720194817,
0.05429305508732796,
-0.0304509736597538,
-0.007885271683335304,
0.03051799163222313,
0.05857978016138077,
0.01942511461675167,
-0.07919357717037201,
0.03413980454206467,
0.06839488446712494,
-0.016923483461141586,
-0.037513475865125656,
-0.023910287767648697,
-0.060115665197372437,
0.0628720298409462,
0.0516265444457531,
0.03562122583389282,
0.02596684731543064,
-0.01513415202498436,
-0.1363247185945511,
0.18731462955474854,
-0.11636263132095337,
-0.26166975498199463,
-0.10958577692508698,
-0.06048522889614105,
-0.024978576228022575,
0.04131700098514557,
0.05866670608520508,
-0.03389383852481842,
-0.04495844617486,
-0.11754393577575684,
0.06369780749082565,
-0.06817400455474854,
-0.030427467077970505,
-0.0065761469304561615,
-0.05252436548471451,
-0.018849628046154976,
-0.1262575387954712,
-0.01239902712404728,
-0.03059041127562523,
-0.07835802435874939,
0.007771261036396027,
-0.034999653697013855,
0.030863311141729355,
0.13382814824581146,
0.03391040489077568,
-0.019665926694869995,
-0.018383946269750595,
0.18947350978851318,
0.01383480615913868,
0.06031171232461929,
0.11615713685750961,
-0.024487975984811783,
0.053870003670454025,
0.04761567711830139,
0.024702997878193855,
-0.048032961785793304,
0.01520899310708046,
-0.016399962827563286,
-0.12449364364147186,
-0.17582562565803528,
-0.0695275366306305,
-0.004830452613532543,
0.010629333555698395,
0.015382207930088043,
0.034547798335552216,
0.017611954361200333,
0.040093015879392624,
-0.027812659740447998,
0.026883069425821304,
-0.013373523950576782,
0.08015687763690948,
0.032094478607177734,
-0.07588275521993637,
0.09387117624282837,
-0.05933487415313721,
0.01822753995656967,
0.11217128485441208,
-0.06295251101255417,
0.18502375483512878,
0.022160356864333153,
0.05851561203598976,
0.10354581475257874,
0.0209793783724308,
0.05223526060581207,
0.08895561844110489,
-0.0454685278236866,
0.005944948643445969,
-0.06082445755600929,
-0.051193952560424805,
-0.038373999297618866,
0.04541676118969917,
0.024234578013420105,
0.018570125102996826,
-0.12000949680805206,
0.026900026947259903,
0.0005568554624915123,
0.13293519616127014,
0.042291708290576935,
-0.12080994248390198,
-0.12178843468427658,
0.033479996025562286,
-0.04233384132385254,
-0.06045078486204147,
0.030703913420438766,
0.06212151050567627,
-0.15122036635875702,
0.04449240863323212,
-0.004796808585524559,
0.06318828463554382,
-0.08580610156059265,
0.016509581357240677,
-0.0405837781727314,
0.0024965256452560425,
0.0043306658044457436,
0.06728190183639526,
-0.13668356835842133,
0.1061997041106224,
0.020882347598671913,
0.04870595782995224,
-0.0785282552242279,
0.01753837801516056,
-0.01022997684776783,
0.10623196512460709,
0.11851340532302856,
0.043348558247089386,
-0.058059874922037125,
-0.020522985607385635,
-0.04555040970444679,
0.020005200058221817,
0.057687658816576004,
-0.07297203689813614,
0.05812644958496094,
0.009466400370001793,
0.008600188419222832,
-0.022128242999315262,
0.024632621556520462,
-0.13330626487731934,
-0.12290014326572418,
0.06052393093705177,
-0.08040788024663925,
-0.1108318567276001,
-0.0569208599627018,
-0.06255706399679184,
-0.0536343976855278,
0.21643656492233276,
-0.11272990703582764,
-0.09167617559432983,
-0.0991491749882698,
-0.013491049408912659,
0.044892147183418274,
-0.06473209708929062,
0.044850341975688934,
-0.03852253779768944,
0.09026205539703369,
-0.04885898530483246,
-0.11020062863826752,
0.033609118312597275,
-0.11567874997854233,
-0.11299622058868408,
-0.04455479606986046,
0.10587947070598602,
0.1145261749625206,
0.038467202335596085,
0.010547216050326824,
0.01096515916287899,
0.0016421843320131302,
-0.11800532788038254,
0.017373958602547646,
0.12882235646247864,
-0.0007020626217126846,
0.07082003355026245,
-0.05823130160570145,
0.023731552064418793,
-0.014709601178765297,
-0.0020983032882213593,
0.12947909533977509,
0.1856064796447754,
-0.06215287744998932,
0.17486335337162018,
0.201081782579422,
-0.10583773255348206,
-0.19411462545394897,
-0.05076911672949791,
-0.0026872120797634125,
0.04393098130822182,
0.05249400809407234,
-0.18048851191997528,
0.09121007472276688,
0.03552260622382164,
-0.031605347990989685,
0.009532734751701355,
-0.23290883004665375,
-0.11196336150169373,
0.08652184903621674,
0.06095118448138237,
0.18396729230880737,
-0.08071795105934143,
-0.03743938356637955,
-0.01458577997982502,
-0.03934284299612045,
0.044530607759952545,
-0.03407292068004608,
0.08845575898885727,
0.004462582990527153,
-0.02998972497880459,
0.002360672689974308,
-0.03318334370851517,
0.09342747926712036,
0.04000692814588547,
0.022820914164185524,
-0.07146190106868744,
-0.0044362302869558334,
0.11439681053161621,
-0.037246719002723694,
0.09689474105834961,
0.043966397643089294,
0.0762903168797493,
-0.09358671307563782,
-0.05949625372886658,
-0.07619953155517578,
0.04407082498073578,
-0.04335479438304901,
-0.055682238191366196,
-0.06500260531902313,
0.06097044795751572,
0.03705058619379997,
0.008587319403886795,
-0.002975285053253174,
-0.037345919758081436,
0.04485062137246132,
0.09487815201282501,
0.08301705121994019,
-0.036953847855329514,
-0.06446480005979538,
-0.048579514026641846,
-0.049072884023189545,
0.06276784092187881,
-0.09218622744083405,
0.018322234973311424,
0.028010211884975433,
0.008593705482780933,
0.08621154725551605,
0.03455861285328865,
-0.13794942200183868,
0.009886099025607109,
0.036777473986148834,
-0.12549492716789246,
-0.10215403884649277,
-0.02012333832681179,
0.029189016669988632,
-0.03911667317152023,
0.05190473794937134,
0.14359918236732483,
-0.034792132675647736,
-0.031234797090291977,
-0.04872681945562363,
0.04103759303689003,
-0.0184909887611866,
0.053609125316143036,
0.06213449314236641,
0.030064500868320465,
-0.07318387925624847,
0.07531113922595978,
0.03631700202822685,
-0.03542798012495041,
0.04056340456008911,
0.04522057995200157,
-0.09488421678543091,
-0.07794177532196045,
-0.05566206946969032,
0.08407934755086899,
-0.023568997159600258,
-0.044992607086896896,
-0.0001335740089416504,
-0.08447366952896118,
0.06797623634338379,
0.07022540271282196,
0.047475315630435944,
0.03504018113017082,
-0.08471283316612244,
0.016809305176138878,
-0.05326201766729355,
0.03654743731021881,
-0.03264473378658295,
-0.00521654449403286,
-0.052803561091423035,
0.06845924258232117,
0.06253056228160858,
0.09695356339216232,
-0.033919915556907654,
-0.07103773206472397,
-0.08305702358484268,
-0.011730867438018322,
-0.06273991614580154,
-0.03294846788048744,
-0.0769486054778099,
-0.007244622800499201,
0.0007865359075367451,
-0.002552611753344536,
0.019684288650751114,
0.03801010549068451,
-0.042027004063129425,
-0.018911637365818024,
-0.035599298775196075,
0.036946527659893036,
-0.06248827278614044,
0.006976217031478882,
0.015841586515307426,
-0.03553342446684837,
0.09087106585502625,
0.03463330864906311,
-0.011435406282544136,
0.048608724027872086,
-0.026829766109585762,
0.0344165563583374,
-0.021647322922945023,
0.0022279019467532635,
-0.022458374500274658,
-0.1069389283657074,
-0.0031650823075324297,
0.007170265540480614,
-0.02949460595846176,
0.011413857340812683,
0.056057773530483246,
-0.07190261036157608,
0.09171120822429657,
0.045688360929489136,
-0.028600409626960754,
-0.07026825100183487,
0.040794532746076584,
-0.009526340290904045,
0.030266016721725464,
0.06825072318315506,
-0.03636116161942482,
0.050452038645744324,
-0.09917737543582916,
-0.029262147843837738,
0.0038882046937942505,
-0.005524694919586182,
-0.012339778244495392,
-0.05171775072813034,
-0.002629515714943409,
0.007396727800369263,
0.1759432554244995,
-0.018577318638563156,
0.037216585129499435,
0.015629692003130913,
0.0066524650901556015,
0.052791062742471695,
-0.014780854806303978,
0.06618058681488037,
-0.01063414104282856,
-0.026391036808490753,
-0.009800087660551071,
0.03675556182861328,
0.006416358053684235,
0.003967909142374992,
0.14702114462852478,
0.0445278100669384,
0.08731000870466232,
0.07287441194057465,
0.01788291335105896,
0.016498476266860962,
-0.12625205516815186,
-0.0928068459033966,
0.004538587294518948,
0.055777501314878464,
-0.018505137413740158,
0.007566243410110474,
0.09140463173389435,
-0.08824288845062256,
0.07056094706058502,
0.045764509588479996,
-0.04866156727075577,
-0.12707895040512085,
-0.18905220925807953,
-0.02294050343334675,
-0.028788862749934196,
-0.010268518701195717,
-0.0930122435092926,
0.014877036213874817,
0.09228719025850296,
0.026167143136262894,
-0.010622045956552029,
0.0932730883359909,
-0.1018536239862442,
-0.02718024142086506,
0.04173833131790161,
-0.027774883434176445,
0.014069953002035618,
0.05223099887371063,
0.02169959433376789,
-0.005760012194514275,
0.0465083085000515,
0.04130951687693596,
0.042931556701660156,
0.02380378544330597,
0.04935309290885925,
-0.024560952559113503,
-0.07255673408508301,
-0.03429147228598595,
-0.005622280761599541,
0.05374278128147125,
0.13680647313594818,
0.02411050908267498,
-0.06818059831857681,
0.008889100514352322,
0.10788360983133316,
-0.029754821211099625,
-0.046933915466070175,
-0.10787539184093475,
0.23534676432609558,
0.024372126907110214,
0.002796224318444729,
-0.00255783274769783,
-0.045329075306653976,
0.0022035054862499237,
0.2150907665491104,
0.22682911157608032,
0.0046135433949530125,
-0.010261492803692818,
0.008988509885966778,
-0.01129576563835144,
0.036706600338220596,
0.14659392833709717,
0.006845543161034584,
0.24519973993301392,
-0.04647962003946304,
0.037555333226919174,
-0.0421849749982357,
-0.03919753059744835,
-0.09765782952308655,
0.06743065267801285,
-0.006615811493247747,
0.008853225037455559,
-0.030077168717980385,
0.07046534866094589,
-0.041118066757917404,
-0.17745891213417053,
0.0026551922783255577,
-0.0012944727204740047,
-0.05880311504006386,
0.011563168838620186,
-0.0020291004329919815,
0.019728315994143486,
0.08270898461341858,
-0.017341159284114838,
-0.004431832581758499,
0.1239585429430008,
0.018747452646493912,
-0.09768274426460266,
-0.060259006917476654,
0.11634630709886551,
0.02046920359134674,
0.14400255680084229,
0.011676982045173645,
0.07774528861045837,
0.08685245364904404,
0.021666668355464935,
-0.09650003165006638,
0.04494835436344147,
-0.01952427811920643,
-0.02875383384525776,
0.006355792749673128,
0.10827361047267914,
-0.009324058890342712,
0.05911743640899658,
0.02645612135529518,
-0.0869685560464859,
0.06198130548000336,
0.008804894983768463,
-0.03229404240846634,
-0.07984555512666702,
0.08443664759397507,
-0.09235946089029312,
0.1558772623538971,
0.1250096559524536,
-0.013133479282259941,
-0.04454975575208664,
-0.031095271930098534,
0.01850126124918461,
-0.0026429067365825176,
0.06180662289261818,
-0.02507905289530754,
-0.1361132264137268,
0.01894412562251091,
-0.08485209196805954,
0.025782611221075058,
-0.24865731596946716,
-0.08943898230791092,
0.029652930796146393,
-0.01708076521754265,
-0.0202227383852005,
0.052806973457336426,
0.04569213092327118,
0.027552971616387367,
-0.03439516946673393,
0.030698193237185478,
-0.03910812363028526,
0.06060883775353432,
-0.10974471271038055,
-0.09120151400566101
] |
null | null | transformers | # MultiBERTs Seed 3 Checkpoint 160k (uncased)
Seed 3 intermediate checkpoint 160k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multiberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
# The checkpoint's full hub id is used so that the snippet resolves on the Hugging Face Hub.
tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-3-160k')
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-3-160k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
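For the masked language modeling objective specifically, the checkpoint can also be queried through a `fill-mask` pipeline. The snippet below is a sketch: it uses the checkpoint's full hub id (`MultiBertGunjanPatrick/multiberts-seed-3-160k`) and assumes the MLM head stored with the pretraining weights is usable for inference.

```python
from transformers import pipeline

# Sketch: query the MLM head directly (assumes it is available in this checkpoint).
unmasker = pipeline('fill-mask', model='MultiBertGunjanPatrick/multiberts-seed-3-160k')
unmasker("Paris is the [MASK] of France.")
```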
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (see the sketch after this list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
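Expressed as code, the 80%/10%/10% rule above looks roughly like the sketch below. This is illustrative only and is not the original pretraining implementation (which operates on WordPiece ids rather than strings):

```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15):
    """Illustrative sketch of the masking rule described above (not the original code)."""
    masked, labels = [], []
    for token in tokens:
        if random.random() < mask_prob:        # 15% of tokens are selected for masking
            labels.append(token)               # the model has to predict the original token
            roll = random.random()
            if roll < 0.8:                     # 80% of selected tokens -> [MASK]
                masked.append('[MASK]')
            elif roll < 0.9:                   # 10% -> a random token from the vocabulary
                masked.append(random.choice(vocab))
            else:                              # 10% -> left unchanged
                masked.append(token)
        else:
            masked.append(token)
            labels.append(None)                # not a prediction target
    return masked, labels
```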
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
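A comparable optimizer and schedule can be written with standard PyTorch and `transformers` utilities. The sketch below mirrors the hyperparameters listed above; it uses `AdamW` as the PyTorch stand-in for Adam with weight decay and is not the original TPU training code:

```python
import torch
from transformers import BertForPreTraining, get_linear_schedule_with_warmup

model = BertForPreTraining.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-3-160k')

# lr=1e-4, beta1=0.9, beta2=0.999 and a weight decay of 0.01, as described above.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01)

# 10,000 warmup steps, then linear decay over the two million training steps.
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=2_000_000
)
```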
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-3"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-3-160k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 3 Checkpoint 160k (uncased)
Seed 3 intermediate checkpoint 160k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 3 Checkpoint 160k (uncased)\nSeed 3 intermediate checkpoint 160k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 3 Checkpoint 160k (uncased)\nSeed 3 intermediate checkpoint 160k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 3 Checkpoint 160k (uncased)\nSeed 3 intermediate checkpoint 160k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.0828985720872879,
-0.00453871488571167,
-0.0022279368713498116,
0.06944122910499573,
0.08854271471500397,
0.0017330297268927097,
0.11499623209238052,
0.04922621697187424,
-0.03463110700249672,
0.022974492982029915,
0.09268687665462494,
0.03193061053752899,
0.04286027327179909,
0.06708667427301407,
0.09657499194145203,
-0.2563989758491516,
0.04924413561820984,
-0.06449586898088455,
0.05638957768678665,
0.07565237581729889,
0.10133585333824158,
-0.07103565335273743,
0.06342798471450806,
0.035703253000974655,
-0.08347070962190628,
-0.013645598664879799,
-0.018500173464417458,
-0.03356073051691055,
0.09952735900878906,
0.07026784121990204,
0.0616428479552269,
0.0027758218348026276,
0.05728701874613762,
-0.08922877907752991,
0.01589738205075264,
0.0449005663394928,
-0.001040852628648281,
0.023441102355718613,
-0.005258772522211075,
0.016616709530353546,
0.11203273385763168,
0.038137998431921005,
0.07731857150793076,
0.034735288470983505,
-0.095054030418396,
-0.11024871468544006,
-0.08109575510025024,
0.10000777989625931,
0.05204775929450989,
0.04149599000811577,
-0.0056555308401584625,
0.07416155189275742,
-0.03115670569241047,
0.07576166093349457,
0.11271880567073822,
-0.2536294758319855,
-0.010802336037158966,
0.07017700374126434,
0.045809656381607056,
0.0429488942027092,
0.01545244175940752,
0.027089158073067665,
0.00559866800904274,
0.04595408961176872,
0.028769616037607193,
-0.02372019737958908,
0.11821277439594269,
-0.044907085597515106,
-0.15239694714546204,
-0.04224444553256035,
0.121334508061409,
-0.009256122633814812,
-0.12549304962158203,
-0.10051688551902771,
-0.0285821370780468,
0.11680155992507935,
-0.0021086856722831726,
-0.01961413398385048,
-0.004222471732646227,
0.010772624984383583,
0.025944741442799568,
-0.09361077845096588,
-0.08648619800806046,
-0.02912304922938347,
-0.03683546558022499,
0.12455970048904419,
0.04699943959712982,
0.05183514580130577,
-0.03398532792925835,
0.0866832286119461,
-0.11511288583278656,
-0.038527052849531174,
-0.05176733434200287,
-0.0841715931892395,
-0.01698308438062668,
0.008686727844178677,
-0.026949288323521614,
-0.08802114427089691,
-0.058633286505937576,
0.11320530623197556,
0.030738525092601776,
0.02992955967783928,
-0.00411208625882864,
0.04108038917183876,
0.0733843520283699,
0.0949057936668396,
-0.039019957184791565,
0.05271231383085251,
0.03319397568702698,
-0.021654929965734482,
0.05799164995551109,
-0.05035018548369408,
-0.10081788152456284,
0.0785839855670929,
-0.0015171421691775322,
0.04151587933301926,
0.026030249893665314,
0.03468380495905876,
-0.01293278019875288,
-0.07288260757923126,
0.16962268948554993,
-0.07731457054615021,
-0.01140215527266264,
-0.017933066934347153,
0.012581618502736092,
0.047885872423648834,
0.03266838937997818,
-0.006920190062373877,
-0.047467444092035294,
-0.006674258969724178,
-0.05393330007791519,
-0.026261258870363235,
-0.05598542094230652,
-0.11942069977521896,
-0.0006484580226242542,
-0.04132799804210663,
-0.033414945006370544,
-0.14277227222919464,
-0.2168811410665512,
-0.01814517378807068,
0.06265757232904434,
-0.0031250459142029285,
-0.008528510108590126,
0.023978380486369133,
0.016412803903222084,
-0.021883979439735413,
0.009733510203659534,
-0.04232349991798401,
-0.0012616729363799095,
-0.007070360705256462,
-0.029386943206191063,
0.05588926374912262,
-0.0391911119222641,
0.02361079305410385,
-0.0694364681839943,
0.021822471171617508,
-0.20847666263580322,
0.08896004408597946,
-0.03395882248878479,
0.0021552518010139465,
-0.0377829372882843,
-0.04474395513534546,
0.010415568947792053,
0.04848725348711014,
-0.008719446137547493,
0.11668770760297775,
-0.13745954632759094,
-0.05002880096435547,
0.1794118881225586,
-0.16015632450580597,
-0.0012189671397209167,
0.10073013603687286,
-0.04692862182855606,
0.053995709866285324,
0.1330648511648178,
0.09647412598133087,
0.08345533162355423,
-0.07485319674015045,
0.012196975760161877,
0.06166548654437065,
-0.06783869862556458,
0.05513043329119682,
0.09013105928897858,
-0.024827994406223297,
-0.13781297206878662,
0.030740901827812195,
-0.07642072439193726,
-0.0079554608091712,
-0.02578473463654518,
-0.020138071849942207,
0.008201152086257935,
-0.03833601996302605,
0.027411162853240967,
0.006151643581688404,
0.01690526120364666,
-0.041870392858982086,
-0.08174604177474976,
0.030100185424089432,
0.07408195734024048,
-0.07097747921943665,
0.04373166710138321,
-0.07051892578601837,
0.06060880050063133,
-0.07408642023801804,
-0.0049141775816679,
-0.1665515899658203,
-0.023910898715257645,
0.044978320598602295,
-0.04233689606189728,
0.04812995716929436,
0.09355037659406662,
0.0028205611743032932,
0.12316206842660904,
-0.039893701672554016,
0.003382056253030896,
-0.005621049553155899,
-0.011095314286649227,
-0.047668203711509705,
-0.12181754410266876,
-0.0820707157254219,
-0.0697224959731102,
0.09326359629631042,
-0.0709940567612648,
0.028739165514707565,
-0.07021583616733551,
-0.022083941847085953,
-0.009197982028126717,
-0.058622248470783234,
-0.0033274125307798386,
0.011549419723451138,
-0.02958839014172554,
-0.04730179160833359,
0.048966169357299805,
0.04989493265748024,
-0.05745644122362137,
0.07887901365756989,
-0.10456333309412003,
-0.06003866344690323,
0.0546262264251709,
0.014450454153120518,
-0.08132995665073395,
0.09023579955101013,
-0.020425064489245415,
-0.012465023435652256,
-0.057311296463012695,
-0.04446638002991676,
0.1920647919178009,
-0.02368970215320587,
0.10033982992172241,
-0.09182408452033997,
0.00068295793607831,
0.026588676497340202,
-0.046640828251838684,
-0.02092011645436287,
0.06025383248925209,
0.04865346848964691,
-0.18684136867523193,
0.015148244798183441,
0.05305098742246628,
0.07680361717939377,
0.11407153308391571,
0.028456825762987137,
-0.023353463038802147,
-0.045396141707897186,
-0.009816091507673264,
0.007976159453392029,
0.05250398814678192,
-0.02615324780344963,
-0.008658399805426598,
0.030399473384022713,
0.05791465938091278,
0.018838824704289436,
-0.07905276864767075,
0.03413800895214081,
0.06740475445985794,
-0.017833635210990906,
-0.03481011465191841,
-0.023197108879685402,
-0.06070141866803169,
0.062405817210674286,
0.05161313712596893,
0.03646984323859215,
0.026279699057340622,
-0.014451241120696068,
-0.13632896542549133,
0.18880029022693634,
-0.11593759059906006,
-0.26195859909057617,
-0.10704411566257477,
-0.05992068350315094,
-0.02656562812626362,
0.04144563525915146,
0.05799654871225357,
-0.032112665474414825,
-0.04417970031499863,
-0.11800424754619598,
0.06438232958316803,
-0.06678847223520279,
-0.03083214722573757,
-0.010579759255051613,
-0.051888711750507355,
-0.01830025389790535,
-0.12769916653633118,
-0.012307317927479744,
-0.02973971515893936,
-0.07868219912052155,
0.006725391373038292,
-0.03615128993988037,
0.029175765812397003,
0.13314248621463776,
0.03618159517645836,
-0.019361037760972977,
-0.018511809408664703,
0.19027945399284363,
0.014585522934794426,
0.057469166815280914,
0.11720411479473114,
-0.025685619562864304,
0.053835708647966385,
0.04421007260680199,
0.02328735962510109,
-0.04870648682117462,
0.014502854086458683,
-0.015597463585436344,
-0.12300650030374527,
-0.17489665746688843,
-0.06994830071926117,
-0.004610345233231783,
0.008683010935783386,
0.017432039603590965,
0.03636641800403595,
0.01739688031375408,
0.039499811828136444,
-0.03071299009025097,
0.02860300801694393,
-0.013684634119272232,
0.0801059678196907,
0.027862902730703354,
-0.07524866610765457,
0.09392049908638,
-0.06014984846115112,
0.01705217733979225,
0.11125599592924118,
-0.06065380200743675,
0.18464994430541992,
0.022954028099775314,
0.05661230534315109,
0.1036461740732193,
0.019449293613433838,
0.05265382304787636,
0.08895973861217499,
-0.047532305121421814,
0.005261962302029133,
-0.06164781376719475,
-0.052397720515728,
-0.03805692493915558,
0.046800702810287476,
0.03130669891834259,
0.019236069172620773,
-0.11938738077878952,
0.023505471646785736,
-0.0005593481473624706,
0.13459466397762299,
0.04540321230888367,
-0.12062935531139374,
-0.1204613596200943,
0.035088956356048584,
-0.04383506998419762,
-0.06116192042827606,
0.029582079499959946,
0.05976802483201027,
-0.15323218703269958,
0.046274032443761826,
-0.0058847470209002495,
0.06416679918766022,
-0.0912785530090332,
0.015955805778503418,
-0.042530979961156845,
0.0026962272822856903,
0.005652524530887604,
0.06949561834335327,
-0.14182022213935852,
0.10371112823486328,
0.02043313905596733,
0.04868973791599274,
-0.07935171574354172,
0.01731564663350582,
-0.010974742472171783,
0.10678284615278244,
0.11596070230007172,
0.0424887016415596,
-0.05277516320347786,
-0.02015531435608864,
-0.04642505943775177,
0.02009357139468193,
0.058937665075063705,
-0.07703186571598053,
0.05979359894990921,
0.008027439005672932,
0.008619211614131927,
-0.023234952241182327,
0.02105557546019554,
-0.13515064120292664,
-0.12382867932319641,
0.06151295453310013,
-0.07989487051963806,
-0.10517079383134842,
-0.057321205735206604,
-0.0641414001584053,
-0.051379673182964325,
0.2155369520187378,
-0.11418495327234268,
-0.09142880886793137,
-0.09880372136831284,
-0.013756807893514633,
0.043901216238737106,
-0.06488751620054245,
0.04448281228542328,
-0.03888153284788132,
0.09118933230638504,
-0.04899892956018448,
-0.1115485429763794,
0.03414542227983475,
-0.11630351096391678,
-0.11459162831306458,
-0.04423065483570099,
0.10657037049531937,
0.11551116406917572,
0.03887302428483963,
0.011963051743805408,
0.009833013638854027,
0.0015843790024518967,
-0.1169554591178894,
0.015138696879148483,
0.13300570845603943,
-0.0014672353863716125,
0.07296310365200043,
-0.0592932403087616,
0.02402949333190918,
-0.01542874425649643,
-0.0010241605341434479,
0.13153016567230225,
0.18637248873710632,
-0.061514124274253845,
0.17664393782615662,
0.19831737875938416,
-0.1056165099143982,
-0.19172239303588867,
-0.05374831333756447,
0.0006197160109877586,
0.04491378739476204,
0.05427255854010582,
-0.18352334201335907,
0.08933105319738388,
0.03346505016088486,
-0.03189936280250549,
0.01610802859067917,
-0.23492062091827393,
-0.11056700348854065,
0.08851106464862823,
0.05926300212740898,
0.18479293584823608,
-0.08221614360809326,
-0.0384053997695446,
-0.016575803980231285,
-0.0378136932849884,
0.048086218535900116,
-0.03427935391664505,
0.09059686213731766,
0.005237627774477005,
-0.0309258084744215,
0.0024502919986844063,
-0.031050939112901688,
0.09531083703041077,
0.03770533204078674,
0.022656947374343872,
-0.0716441422700882,
-0.00407036580145359,
0.11170844733715057,
-0.03795407712459564,
0.09843460470438004,
0.03929849714040756,
0.07464638352394104,
-0.09636630862951279,
-0.06072702258825302,
-0.07581914961338043,
0.0457962229847908,
-0.04207310080528259,
-0.05575472116470337,
-0.06335274130105972,
0.059496037662029266,
0.03636138141155243,
0.01045480277389288,
-0.0013509932905435562,
-0.03899697959423065,
0.045097436755895615,
0.0918823629617691,
0.08480218797922134,
-0.03184451907873154,
-0.06869922578334808,
-0.052193790674209595,
-0.04784342274069786,
0.06550239771604538,
-0.08935226500034332,
0.017792826518416405,
0.026192843914031982,
0.008224420249462128,
0.08752076327800751,
0.03355824947357178,
-0.13982531428337097,
0.011358199641108513,
0.03495268523693085,
-0.1253119856119156,
-0.10330739617347717,
-0.020258259028196335,
0.032460011541843414,
-0.03817445784807205,
0.0550915002822876,
0.1468687355518341,
-0.03486260771751404,
-0.03215625509619713,
-0.04824834689497948,
0.03948678821325302,
-0.01988287828862667,
0.05190800130367279,
0.06324119865894318,
0.029959898442029953,
-0.0722329393029213,
0.07405208051204681,
0.03560329228639603,
-0.0322059690952301,
0.04287973418831825,
0.04247533529996872,
-0.09316499531269073,
-0.07827755808830261,
-0.05716410651803017,
0.08787323534488678,
-0.022806519642472267,
-0.04641703888773918,
-0.000555235892534256,
-0.08346019685268402,
0.06831415742635727,
0.07255043834447861,
0.0485941581428051,
0.0373217910528183,
-0.08662298321723938,
0.01638861373066902,
-0.0523579865694046,
0.03414050489664078,
-0.03125898540019989,
-0.0047533102333545685,
-0.05607939511537552,
0.06556329131126404,
0.06324701011180878,
0.09789466112852097,
-0.0342525839805603,
-0.07296177744865417,
-0.08427547663450241,
-0.012533219531178474,
-0.060070306062698364,
-0.0325203537940979,
-0.07640479505062103,
-0.006595195271074772,
0.0012887758202850819,
-0.0026607532054185867,
0.021942470222711563,
0.03792044520378113,
-0.04197444021701813,
-0.01875307969748974,
-0.036889489740133286,
0.03831636905670166,
-0.06317847222089767,
0.00636649876832962,
0.01496795006096363,
-0.03639334812760353,
0.0918484777212143,
0.0343976654112339,
-0.012159015983343124,
0.04830814525485039,
-0.0321216955780983,
0.03420235216617584,
-0.01946498267352581,
0.0003711502067744732,
-0.02397824451327324,
-0.10761342197656631,
-0.004552291240543127,
0.007096497341990471,
-0.026675136759877205,
0.011009320616722107,
0.05826140195131302,
-0.07255178689956665,
0.08852989971637726,
0.045795194804668427,
-0.028709184378385544,
-0.06949204206466675,
0.04220075160264969,
-0.01300225593149662,
0.030769651755690575,
0.07004943490028381,
-0.036630868911743164,
0.05230797082185745,
-0.09796833992004395,
-0.029044989496469498,
0.003813681658357382,
-0.005507908761501312,
-0.014327602460980415,
-0.05181596428155899,
-0.003205474466085434,
0.007828624919056892,
0.17588765919208527,
-0.01944364234805107,
0.03954797238111496,
0.013621186837553978,
0.008042119443416595,
0.054742783308029175,
-0.014448558911681175,
0.06829410791397095,
-0.010283073410391808,
-0.02381829358637333,
-0.011840401217341423,
0.03724393621087074,
0.005569782108068466,
0.0025790799409151077,
0.14379096031188965,
0.0447123721241951,
0.08494368195533752,
0.07542300969362259,
0.0164736807346344,
0.01726897619664669,
-0.1301342248916626,
-0.09311309456825256,
0.0071800509467720985,
0.05731207877397537,
-0.018270358443260193,
0.012831216678023338,
0.09187722206115723,
-0.08872172236442566,
0.06907469034194946,
0.04792638123035431,
-0.047735121101140976,
-0.1271006166934967,
-0.19527983665466309,
-0.023609424009919167,
-0.02913467399775982,
-0.010442553088068962,
-0.09178264439105988,
0.01625652238726616,
0.0931113138794899,
0.025739753618836403,
-0.01027760747820139,
0.09327532351016998,
-0.10165953636169434,
-0.02983609214425087,
0.04410335421562195,
-0.02907523885369301,
0.014740108512341976,
0.049526844173669815,
0.02334880270063877,
-0.005999835208058357,
0.045208897441625595,
0.04167378693819046,
0.043439075350761414,
0.02559671550989151,
0.050834089517593384,
-0.024371393024921417,
-0.07443735003471375,
-0.03267650306224823,
-0.0044441428035497665,
0.053199511021375656,
0.13826221227645874,
0.02447490394115448,
-0.0699709877371788,
0.007902519777417183,
0.10630694776773453,
-0.029576368629932404,
-0.04795664921402931,
-0.10749892145395279,
0.23804783821105957,
0.023275021463632584,
0.003924275282770395,
-0.004391986411064863,
-0.044919226318597794,
0.003182264044880867,
0.20906026661396027,
0.22528547048568726,
0.0033758264034986496,
-0.009101508185267448,
0.008417513221502304,
-0.010932914912700653,
0.03727853670716286,
0.1464492231607437,
0.006290728226304054,
0.25044748187065125,
-0.04776386916637421,
0.03761995583772659,
-0.04189547151327133,
-0.03854681923985481,
-0.09921480715274811,
0.06567039340734482,
-0.007036289665848017,
0.0094614177942276,
-0.029579179361462593,
0.07098744064569473,
-0.04129201918840408,
-0.1746210753917694,
0.0007209870964288712,
-0.00033860746771097183,
-0.06063187122344971,
0.010204123333096504,
-0.0006073052063584328,
0.02005627192556858,
0.0836125910282135,
-0.018302377313375473,
-0.005536950659006834,
0.12708088755607605,
0.018361933529376984,
-0.09813312441110611,
-0.05819549411535263,
0.11537129431962967,
0.020271072164177895,
0.14090009033679962,
0.01063064020127058,
0.0780465230345726,
0.08637196570634842,
0.021163644269108772,
-0.09446890652179718,
0.044031206518411636,
-0.019402647390961647,
-0.029120707884430885,
0.007537504658102989,
0.10769718885421753,
-0.008648844435811043,
0.06195487454533577,
0.026943564414978027,
-0.0879872515797615,
0.06348004192113876,
0.01092904806137085,
-0.033156730234622955,
-0.08103206753730774,
0.08753557503223419,
-0.09298408031463623,
0.15583014488220215,
0.12405189871788025,
-0.013680140487849712,
-0.047477804124355316,
-0.030816007405519485,
0.020984793081879616,
-0.0027118371799588203,
0.05958588793873787,
-0.025781841948628426,
-0.133721724152565,
0.01883290335536003,
-0.08267717063426971,
0.026920251548290253,
-0.24861901998519897,
-0.08944029361009598,
0.029298195615410805,
-0.01779218018054962,
-0.02155790477991104,
0.051487285643815994,
0.043864112347364426,
0.027722204104065895,
-0.03525859862565994,
0.02373351901769638,
-0.03937293961644173,
0.05945010483264923,
-0.11115121841430664,
-0.09234045445919037
] |
null | null | transformers | # MultiBERTs Seed 3 Checkpoint 1700k (uncased)
Seed 3 intermediate checkpoint 1700k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multiberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
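As a concrete starting point, the sketch below wraps this checkpoint with a freshly initialised sequence-classification head ready for fine-tuning; the Hub repo id and `num_labels=2` are illustrative choices, not part of the original release.
```python
from transformers import BertForSequenceClassification

# The encoder weights come from this checkpoint; the classification head is
# randomly initialised and has to be trained on your labelled data.
classifier = BertForSequenceClassification.from_pretrained(
    "MultiBertGunjanPatrick/multiberts-seed-3-1700k", num_labels=2
)
```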
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# Load the tokenizer and this intermediate checkpoint. The full Hub repo id
# listed in this repository's metadata is used here; adjust the path if you
# keep a local copy of the checkpoint instead.
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-3-1700k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-3-1700k")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')  # PyTorch tensors
output = model(**encoded_input)  # output.last_hidden_state: (batch, seq_len, 768)
```
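If you need one vector per sentence, for instance to train the standard classifier mentioned above, a common (but not prescribed) recipe is to mean-pool the token embeddings while ignoring padding. This sketch reuses `model` and `encoded_input` from the snippet above:
```python
import torch

# Mean-pool the token embeddings into a single sentence vector, masking out padding.
with torch.no_grad():
    output = model(**encoded_input)

mask = encoded_input["attention_mask"].unsqueeze(-1).float()   # (batch, seq_len, 1)
summed = (output.last_hidden_state * mask).sum(dim=1)          # sum over real tokens only
sentence_embedding = summed / mask.sum(dim=1)                  # (batch, hidden_size=768)
print(sentence_embedding.shape)
```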
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a short sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
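To make the 80/10/10 rule concrete, here is a small illustrative sketch of one masking step. It is not the actual MultiBERTs data pipeline; the `mask_tokens` helper and its `vocab` argument are hypothetical.
```python
import random

def mask_tokens(tokens, vocab, mask_token="[MASK]", mask_prob=0.15):
    """Simplified 80/10/10 masking over a list of WordPiece tokens."""
    masked, labels = [], []
    for tok in tokens:
        if random.random() < mask_prob:      # 15% of tokens are selected
            labels.append(tok)               # the model has to recover the original
            r = random.random()
            if r < 0.8:                      # 80%: replace with [MASK]
                masked.append(mask_token)
            elif r < 0.9:                    # 10%: a random token (the real procedure avoids the original)
                masked.append(random.choice(vocab))
            else:                            # 10%: keep the token unchanged
                masked.append(tok)
        else:
            labels.append(None)              # not a prediction target
            masked.append(tok)
    return masked, labels
```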
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
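For orientation, these hyperparameters correspond roughly to the PyTorch setup below. The original pretraining ran in TensorFlow on TPUs, so treat this as an approximate sketch rather than the exact recipe; it assumes the `model` loaded earlier.
```python
import torch
from transformers import get_linear_schedule_with_warmup

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1e-4,                       # peak learning rate
    betas=(0.9, 0.999),
    weight_decay=0.01,
)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10_000,       # linear warmup
    num_training_steps=2_000_000,  # two million steps, then linear decay to zero
)
```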
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-3"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-3-1700k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 3 Checkpoint 1700k (uncased)
Seed 3 intermediate checkpoint 1700k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitations and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 3 Checkpoint 1700k (uncased)\nSeed 3 intermediate checkpoint 1700k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 3 Checkpoint 1700k (uncased)\nSeed 3 intermediate checkpoint 1700k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 3 Checkpoint 1700k (uncased)\nSeed 3 intermediate checkpoint 1700k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08279568701982498,
-0.005572676658630371,
-0.002114684786647558,
0.06910314410924911,
0.0850619450211525,
0.002080498728901148,
0.11096978187561035,
0.048724789172410965,
-0.031734637916088104,
0.021894048899412155,
0.09474478662014008,
0.03147835657000542,
0.04376458004117012,
0.065775565803051,
0.09669850766658783,
-0.25613105297088623,
0.05022379755973816,
-0.06345034390687943,
0.05987752974033356,
0.07671502232551575,
0.10068957507610321,
-0.07213543355464935,
0.06263472139835358,
0.03598777577280998,
-0.08315851539373398,
-0.014981759712100029,
-0.018245872110128403,
-0.03656568005681038,
0.09970725327730179,
0.0701647400856018,
0.060478873550891876,
0.0031763818114995956,
0.057501330971717834,
-0.08824417740106583,
0.016025828197598457,
0.04461098462343216,
-0.0008313143625855446,
0.02293018437922001,
-0.00800558552145958,
0.0170749444514513,
0.11435369402170181,
0.04112264886498451,
0.07923620194196701,
0.035213928669691086,
-0.09520545601844788,
-0.11395435780286789,
-0.08109060674905777,
0.10748016089200974,
0.053813062608242035,
0.04016122594475746,
-0.005245523527264595,
0.07482053339481354,
-0.03094521351158619,
0.07587134838104248,
0.10682517290115356,
-0.2526708245277405,
-0.011697902344167233,
0.07221897691488266,
0.046663351356983185,
0.04829113930463791,
0.0147725073620677,
0.02651866339147091,
0.0056455098092556,
0.04515787214040756,
0.03126605972647667,
-0.02301274985074997,
0.12008728086948395,
-0.04423783719539642,
-0.15265120565891266,
-0.0435645766556263,
0.11874429136514664,
-0.008192351087927818,
-0.12394346296787262,
-0.1016034260392189,
-0.028348958119750023,
0.11183825135231018,
-0.0024096164852380753,
-0.02168622799217701,
-0.006178122013807297,
0.011201668530702591,
0.02577277459204197,
-0.09469181299209595,
-0.086296945810318,
-0.030183251947164536,
-0.035130780190229416,
0.12696681916713715,
0.04726158827543259,
0.052876170724630356,
-0.035441234707832336,
0.08697034418582916,
-0.11178655177354813,
-0.03696656972169876,
-0.05227745324373245,
-0.08590072393417358,
-0.018492309376597404,
0.0075969635508954525,
-0.02457282692193985,
-0.08453265577554703,
-0.06135875731706619,
0.11602993309497833,
0.03338400274515152,
0.031725477427244186,
-0.004769038408994675,
0.04160449653863907,
0.07334413379430771,
0.09420652687549591,
-0.03989502787590027,
0.050481513142585754,
0.03247210383415222,
-0.020114433020353317,
0.05750127509236336,
-0.050234414637088776,
-0.09814893454313278,
0.07781190425157547,
-0.001401771791279316,
0.042192764580249786,
0.027416490018367767,
0.03356584906578064,
-0.009828981012105942,
-0.07354440540075302,
0.17317889630794525,
-0.07461364567279816,
-0.011674434877932072,
-0.018432525917887688,
0.012957371771335602,
0.0461372435092926,
0.03284567594528198,
-0.006511197425425053,
-0.04708411544561386,
-0.006007068790495396,
-0.05559782683849335,
-0.02849564328789711,
-0.054799869656562805,
-0.1173768937587738,
-0.0014942092821002007,
-0.03894829750061035,
-0.034313805401325226,
-0.14229580760002136,
-0.2187809944152832,
-0.01808304898440838,
0.06281952559947968,
-0.0031890869140625,
-0.00933778751641512,
0.02556207776069641,
0.018233591690659523,
-0.022314343601465225,
0.008973753079771996,
-0.04341898486018181,
-0.0005405684933066368,
-0.006621526554226875,
-0.033120520412921906,
0.05684005469083786,
-0.03733333200216293,
0.02280608005821705,
-0.0694248229265213,
0.021710403263568878,
-0.2113582193851471,
0.08770401775836945,
-0.03541393205523491,
0.003903651610016823,
-0.03816850110888481,
-0.04506740719079971,
0.008914774283766747,
0.04809269681572914,
-0.0105695566162467,
0.11514199525117874,
-0.1332288533449173,
-0.04966416209936142,
0.18231424689292908,
-0.1610938459634781,
-0.004065670073032379,
0.10187704861164093,
-0.046108849346637726,
0.05310424789786339,
0.133224219083786,
0.0950625017285347,
0.08510352671146393,
-0.07259055972099304,
0.013321210630238056,
0.06025039404630661,
-0.07045448571443558,
0.05092344433069229,
0.088645339012146,
-0.025210576131939888,
-0.1437249779701233,
0.030066680163145065,
-0.07615476846694946,
-0.007281376980245113,
-0.02540978230535984,
-0.021799791604280472,
0.008624086156487465,
-0.03817432373762131,
0.02377179265022278,
0.005940627306699753,
0.016771715134382248,
-0.043227117508649826,
-0.08155866712331772,
0.022363834083080292,
0.07516740262508392,
-0.0699523538351059,
0.0442931242287159,
-0.07033950835466385,
0.058073241263628006,
-0.07029534876346588,
-0.004063895903527737,
-0.1643531322479248,
-0.02022496983408928,
0.04519759863615036,
-0.04527126997709274,
0.04738122224807739,
0.087117999792099,
0.0028661745600402355,
0.12311897426843643,
-0.041029468178749084,
0.0026415791362524033,
-0.0036261342465877533,
-0.009504633024334908,
-0.04756728187203407,
-0.1221746951341629,
-0.0814555287361145,
-0.06936165690422058,
0.09310130774974823,
-0.0713614970445633,
0.02878665179014206,
-0.07251740992069244,
-0.02272207848727703,
-0.008911080658435822,
-0.05898401141166687,
-0.004544449038803577,
0.011225116439163685,
-0.028020301833748817,
-0.04570480436086655,
0.04841839522123337,
0.04984472319483757,
-0.05865193530917168,
0.07923298329114914,
-0.10566344112157822,
-0.06370039284229279,
0.054526396095752716,
0.014753642491996288,
-0.08250792324542999,
0.08815211802721024,
-0.02155771665275097,
-0.01347112376242876,
-0.05659341812133789,
-0.04559483751654625,
0.18852485716342926,
-0.023406434804201126,
0.0995287299156189,
-0.09241433441638947,
0.0009821170242503285,
0.027956852689385414,
-0.044074974954128265,
-0.021839290857315063,
0.05817531794309616,
0.051651731133461,
-0.19109292328357697,
0.01376420259475708,
0.05209162086248398,
0.07794008404016495,
0.11312195658683777,
0.02999502420425415,
-0.02160382829606533,
-0.04522453248500824,
-0.011475175619125366,
0.006227345671504736,
0.05223572254180908,
-0.026552531868219376,
-0.008142907172441483,
0.029423221945762634,
0.058625753968954086,
0.02000962756574154,
-0.07885144650936127,
0.034031447023153305,
0.06778022646903992,
-0.015874888747930527,
-0.03574356064200401,
-0.021758705377578735,
-0.06092676520347595,
0.062020447105169296,
0.05252118408679962,
0.03431028127670288,
0.025415197014808655,
-0.014527504332363605,
-0.13686038553714752,
0.18840113282203674,
-0.11499650776386261,
-0.26157593727111816,
-0.10717090964317322,
-0.05967561900615692,
-0.024911893531680107,
0.03984235227108002,
0.058983370661735535,
-0.03284294158220291,
-0.044097840785980225,
-0.11800992488861084,
0.06083519011735916,
-0.0664997398853302,
-0.029803352430462837,
-0.009393809363245964,
-0.05277388542890549,
-0.021605227142572403,
-0.12726931273937225,
-0.012422701343894005,
-0.032546576112508774,
-0.07543198764324188,
0.006500503979623318,
-0.037481363862752914,
0.02831851877272129,
0.13593432307243347,
0.037527862936258316,
-0.018735148012638092,
-0.019255809485912323,
0.1894640028476715,
0.013579964637756348,
0.058796465396881104,
0.11363649368286133,
-0.025522388517856598,
0.05370399355888367,
0.04425553232431412,
0.023765025660395622,
-0.04867887869477272,
0.014243824407458305,
-0.016713719815015793,
-0.1235114187002182,
-0.17373694479465485,
-0.07107894122600555,
-0.003966184798628092,
0.010100670158863068,
0.020497610792517662,
0.0359356552362442,
0.021177418529987335,
0.03961224853992462,
-0.031086374074220657,
0.025188375264406204,
-0.0119929239153862,
0.08069220930337906,
0.025439340621232986,
-0.07544472068548203,
0.09356223046779633,
-0.05966998264193535,
0.018941722810268402,
0.1107940748333931,
-0.060185737907886505,
0.18679125607013702,
0.024746766313910484,
0.054876819252967834,
0.10462529212236404,
0.018530268222093582,
0.05293234437704086,
0.08691594004631042,
-0.04716210812330246,
0.005251064896583557,
-0.06134013086557388,
-0.05208374559879303,
-0.03730889782309532,
0.04775642603635788,
0.03112109750509262,
0.018287252634763718,
-0.11911655962467194,
0.023710813373327255,
0.000664848368614912,
0.13675877451896667,
0.04157916083931923,
-0.11980099976062775,
-0.12150561809539795,
0.03514880686998367,
-0.044446371495723724,
-0.06092311441898346,
0.03173026069998741,
0.058069974184036255,
-0.15211190283298492,
0.04645116627216339,
-0.0053656259551644325,
0.06434102356433868,
-0.08843278884887695,
0.015263631008565426,
-0.043296121060848236,
0.0009468300268054008,
0.005081361625343561,
0.06662282347679138,
-0.1329275667667389,
0.10854963213205338,
0.019942503422498703,
0.04933806508779526,
-0.07994752377271652,
0.0157389547675848,
-0.010524623095989227,
0.10882409662008286,
0.11802050471305847,
0.043462514877319336,
-0.05532802641391754,
-0.018416382372379303,
-0.045675069093704224,
0.019482959061861038,
0.05626070871949196,
-0.07593479007482529,
0.05796636641025543,
0.00934834685176611,
0.0076726577244699,
-0.02264484390616417,
0.018232151865959167,
-0.13378986716270447,
-0.12158341705799103,
0.06098301336169243,
-0.07804115116596222,
-0.10670679807662964,
-0.057722851634025574,
-0.06295541673898697,
-0.052157893776893616,
0.21347954869270325,
-0.11434409022331238,
-0.09009066969156265,
-0.09750787168741226,
-0.014747638255357742,
0.04503091052174568,
-0.0655815601348877,
0.0455155074596405,
-0.03839258477091789,
0.09003965556621552,
-0.0490645095705986,
-0.10928735136985779,
0.03518837317824364,
-0.11702214926481247,
-0.11332949995994568,
-0.04392533004283905,
0.10645308345556259,
0.1163475289940834,
0.037630919367074966,
0.011901576071977615,
0.01000450924038887,
0.0015177186578512192,
-0.11891933530569077,
0.016641216352581978,
0.13208851218223572,
-0.004155388101935387,
0.07285436242818832,
-0.05650746077299118,
0.01965361088514328,
-0.01648968644440174,
-0.00244140625,
0.13098004460334778,
0.18801794946193695,
-0.06308214366436005,
0.17622390389442444,
0.20451857149600983,
-0.10495410859584808,
-0.19055545330047607,
-0.05309470742940903,
-0.003969953395426273,
0.04340461641550064,
0.05153850093483925,
-0.18114715814590454,
0.08932561427354813,
0.03537699580192566,
-0.03298737108707428,
0.011362109333276749,
-0.23656877875328064,
-0.11197752505540848,
0.09280343353748322,
0.059753891080617905,
0.18678677082061768,
-0.07887977361679077,
-0.03790318965911865,
-0.013775533065199852,
-0.03604431450366974,
0.04841841757297516,
-0.032067716121673584,
0.08924953639507294,
0.003952136263251305,
-0.03528757393360138,
0.002549240365624428,
-0.03305415064096451,
0.09359242022037506,
0.038868557661771774,
0.02287931740283966,
-0.07019919157028198,
-0.002464666962623596,
0.10884680598974228,
-0.03810889646410942,
0.0980667769908905,
0.041424158960580826,
0.07608662545681,
-0.09417217969894409,
-0.06077593192458153,
-0.07735341787338257,
0.043725647032260895,
-0.04172322899103165,
-0.05591808259487152,
-0.06454543769359589,
0.06204249709844589,
0.0384647510945797,
0.009186336770653725,
-0.0017455574125051498,
-0.03916207700967789,
0.047085028141736984,
0.09166313707828522,
0.08294639736413956,
-0.03752828761935234,
-0.06782501935958862,
-0.05199786648154259,
-0.04947582632303238,
0.06652823090553284,
-0.0889272689819336,
0.017293188720941544,
0.026134708896279335,
0.008861420676112175,
0.08778423815965652,
0.03352309390902519,
-0.1390163004398346,
0.009637095034122467,
0.03632660582661629,
-0.12437880784273148,
-0.10536961257457733,
-0.019751716405153275,
0.03378576785326004,
-0.03721432387828827,
0.05324737727642059,
0.14335714280605316,
-0.03539795055985451,
-0.03176324814558029,
-0.04819812625646591,
0.040884166955947876,
-0.018523961305618286,
0.04940321668982506,
0.062185585498809814,
0.030476219952106476,
-0.07166649401187897,
0.07420948147773743,
0.03697289526462555,
-0.03725208342075348,
0.0410158671438694,
0.04640756919980049,
-0.09398200362920761,
-0.07838718593120575,
-0.06148109212517738,
0.0831463411450386,
-0.025556402280926704,
-0.04539832845330238,
0.001689951866865158,
-0.08329584449529648,
0.06990431994199753,
0.07244584709405899,
0.047507550567388535,
0.03588605299592018,
-0.08716712892055511,
0.016730207949876785,
-0.05246766656637192,
0.034680843353271484,
-0.033082544803619385,
-0.005436906591057777,
-0.054207369685173035,
0.06507837772369385,
0.06506644189357758,
0.09636946767568588,
-0.03375746309757233,
-0.07355564832687378,
-0.08395940065383911,
-0.011318573728203773,
-0.06186579912900925,
-0.03486248105764389,
-0.08058550208806992,
-0.005592056550085545,
0.0002583269961178303,
-0.002474309876561165,
0.02014986425638199,
0.03691472113132477,
-0.04198674112558365,
-0.017626825720071793,
-0.03453805297613144,
0.038442302495241165,
-0.0628247857093811,
0.006745082326233387,
0.01591072604060173,
-0.035911910235881805,
0.09248919785022736,
0.03402675688266754,
-0.011845648288726807,
0.047600097954273224,
-0.025528710335493088,
0.03446946293115616,
-0.020892731845378876,
0.0007514304015785456,
-0.02358616515994072,
-0.10935871303081512,
-0.004189405124634504,
0.006209343671798706,
-0.028814248740673065,
0.012704967521131039,
0.057026393711566925,
-0.07280468940734863,
0.08779694139957428,
0.047663554549217224,
-0.029569946229457855,
-0.07023729383945465,
0.04129378870129585,
-0.012129245325922966,
0.02857065759599209,
0.06880185753107071,
-0.0359768271446228,
0.050467707216739655,
-0.0980658233165741,
-0.02854662388563156,
0.0028658562805503607,
-0.005403164774179459,
-0.012081287801265717,
-0.0497613251209259,
-0.004076200537383556,
0.006903749890625477,
0.17574264109134674,
-0.02361002191901207,
0.034864261746406555,
0.016352206468582153,
0.006708009168505669,
0.04925138130784035,
-0.01543373428285122,
0.06935903429985046,
-0.009536861442029476,
-0.02588913030922413,
-0.010088269598782063,
0.03664824366569519,
0.005542142316699028,
0.004471141844987869,
0.14677128195762634,
0.044496145099401474,
0.09137880057096481,
0.07448918372392654,
0.017288248986005783,
0.016741018742322922,
-0.1323663890361786,
-0.0948566272854805,
0.005919140763580799,
0.057350002229213715,
-0.017387185245752335,
0.006301768124103546,
0.09460775554180145,
-0.08794460445642471,
0.06829465180635452,
0.048307303339242935,
-0.04724293574690819,
-0.1274252086877823,
-0.19211432337760925,
-0.02226598747074604,
-0.028438100591301918,
-0.010439414530992508,
-0.09143711626529694,
0.015435459092259407,
0.0850321352481842,
0.026603443548083305,
-0.00962774083018303,
0.09373115003108978,
-0.10190196335315704,
-0.02856599912047386,
0.043996412307024,
-0.02727571129798889,
0.014849637635052204,
0.04849231243133545,
0.02331918105483055,
-0.007624663412570953,
0.04426753893494606,
0.0397907979786396,
0.04223589599132538,
0.022362353280186653,
0.051750436425209045,
-0.024009786546230316,
-0.07350128144025803,
-0.03353135287761688,
-0.005161563400179148,
0.055168990045785904,
0.13695171475410461,
0.025173788890242577,
-0.06902379542589188,
0.008407622575759888,
0.10680358111858368,
-0.029584985226392746,
-0.04720643162727356,
-0.10795450210571289,
0.24259904026985168,
0.0256546251475811,
0.0024453941732645035,
-0.002405200619250536,
-0.04279527813196182,
0.002080315724015236,
0.21020784974098206,
0.2265949547290802,
0.00618579750880599,
-0.008830027654767036,
0.008421240374445915,
-0.011399653740227222,
0.03566831350326538,
0.14789451658725739,
0.006252685561776161,
0.2503851354122162,
-0.04597960412502289,
0.040601328015327454,
-0.041811052709817886,
-0.03992234915494919,
-0.09609419107437134,
0.06722234189510345,
-0.005533254239708185,
0.009089740924537182,
-0.030461227521300316,
0.07253558933734894,
-0.04246189072728157,
-0.16872484982013702,
0.00047247111797332764,
-0.00015080207958817482,
-0.06048866733908653,
0.011103744618594646,
-0.0015110345557332039,
0.019143516197800636,
0.08310830593109131,
-0.01625002920627594,
-0.006832045037299395,
0.12399907410144806,
0.019049692898988724,
-0.09676215052604675,
-0.06389697641134262,
0.11753111332654953,
0.028752125799655914,
0.1453879326581955,
0.011129524558782578,
0.07657253742218018,
0.08730919659137726,
0.02098454162478447,
-0.09757889807224274,
0.042595844715833664,
-0.019477739930152893,
-0.025885771960020065,
0.006145232357084751,
0.10668919235467911,
-0.008749413304030895,
0.06404531002044678,
0.02707543969154358,
-0.09011706709861755,
0.061415623873472214,
0.007291276007890701,
-0.03283510357141495,
-0.07972118258476257,
0.08748040348291397,
-0.08998246490955353,
0.15766997635364532,
0.12362519651651382,
-0.014849054627120495,
-0.04832962155342102,
-0.029088107869029045,
0.020290305837988853,
-0.0024945884943008423,
0.061757806688547134,
-0.025356724858283997,
-0.13576850295066833,
0.019900809973478317,
-0.08414961397647858,
0.027896152809262276,
-0.246744304895401,
-0.09041014313697815,
0.029891757294535637,
-0.017555689439177513,
-0.021147582679986954,
0.05196785181760788,
0.04386062175035477,
0.027504418045282364,
-0.03473321348428726,
0.025982141494750977,
-0.03971152380108833,
0.05789989233016968,
-0.1124708503484726,
-0.0923319011926651
] |
null | null | transformers | # MultiBERTs Seed 3 Checkpoint 1800k (uncased)
Seed 3 intermediate checkpoint 1800k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multiberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
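As a concrete starting point, the sketch below wraps this checkpoint with a freshly initialised sequence-classification head ready for fine-tuning; the Hub repo id and `num_labels=2` are illustrative choices, not part of the original release.
```python
from transformers import BertForSequenceClassification

# The encoder weights come from this checkpoint; the classification head is
# randomly initialised and has to be trained on your labelled data.
classifier = BertForSequenceClassification.from_pretrained(
    "MultiBertGunjanPatrick/multiberts-seed-3-1800k", num_labels=2
)
```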
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# Load the tokenizer and this intermediate checkpoint. The full Hub repo id
# listed in this repository's metadata is used here; adjust the path if you
# keep a local copy of the checkpoint instead.
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-3-1800k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-3-1800k")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')  # PyTorch tensors
output = model(**encoded_input)  # output.last_hidden_state: (batch, seq_len, 768)
```
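If you need one vector per sentence, for instance to train the standard classifier mentioned above, a common (but not prescribed) recipe is to mean-pool the token embeddings while ignoring padding. This sketch reuses `model` and `encoded_input` from the snippet above:
```python
import torch

# Mean-pool the token embeddings into a single sentence vector, masking out padding.
with torch.no_grad():
    output = model(**encoded_input)

mask = encoded_input["attention_mask"].unsqueeze(-1).float()   # (batch, seq_len, 1)
summed = (output.last_hidden_state * mask).sum(dim=1)          # sum over real tokens only
sentence_embedding = summed / mask.sum(dim=1)                  # (batch, hidden_size=768)
print(sentence_embedding.shape)
```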
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a short sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
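To make the 80/10/10 rule concrete, here is a small illustrative sketch of one masking step. It is not the actual MultiBERTs data pipeline; the `mask_tokens` helper and its `vocab` argument are hypothetical.
```python
import random

def mask_tokens(tokens, vocab, mask_token="[MASK]", mask_prob=0.15):
    """Simplified 80/10/10 masking over a list of WordPiece tokens."""
    masked, labels = [], []
    for tok in tokens:
        if random.random() < mask_prob:      # 15% of tokens are selected
            labels.append(tok)               # the model has to recover the original
            r = random.random()
            if r < 0.8:                      # 80%: replace with [MASK]
                masked.append(mask_token)
            elif r < 0.9:                    # 10%: a random token (the real procedure avoids the original)
                masked.append(random.choice(vocab))
            else:                            # 10%: keep the token unchanged
                masked.append(tok)
        else:
            labels.append(None)              # not a prediction target
            masked.append(tok)
    return masked, labels
```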
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
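For orientation, these hyperparameters correspond roughly to the PyTorch setup below. The original pretraining ran in TensorFlow on TPUs, so treat this as an approximate sketch rather than the exact recipe; it assumes the `model` loaded earlier.
```python
import torch
from transformers import get_linear_schedule_with_warmup

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1e-4,                       # peak learning rate
    betas=(0.9, 0.999),
    weight_decay=0.01,
)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10_000,       # linear warmup
    num_training_steps=2_000_000,  # two million steps, then linear decay to zero
)
```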
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-3"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-3-1800k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 3 Checkpoint 1800k (uncased)
Seed 3 intermediate checkpoint 1800k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitations and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 3 Checkpoint 1800k (uncased)\nSeed 3 intermediate checkpoint 1800k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 3 Checkpoint 1800k (uncased)\nSeed 3 intermediate checkpoint 1800k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 3 Checkpoint 1800k (uncased)\nSeed 3 intermediate checkpoint 1800k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.0820022001862526,
-0.001049685524776578,
-0.0021999815944582224,
0.06780773401260376,
0.08600440621376038,
0.0021412093192338943,
0.11494184285402298,
0.0487411692738533,
-0.033840786665678024,
0.021460650488734245,
0.0927775502204895,
0.029613327234983444,
0.042321279644966125,
0.06474613398313522,
0.0972483679652214,
-0.2532731592655182,
0.04966729134321213,
-0.06330796331167221,
0.056595414876937866,
0.07647602260112762,
0.1003415584564209,
-0.07135999202728271,
0.06227141246199608,
0.03679922968149185,
-0.08085715770721436,
-0.015471972525119781,
-0.018448667600750923,
-0.03678366541862488,
0.0997408926486969,
0.06848190724849701,
0.06264631450176239,
0.0015465393662452698,
0.057173341512680054,
-0.08987540006637573,
0.015806451439857483,
0.04474679380655289,
0.00010675098747015,
0.02296861633658409,
-0.008279068395495415,
0.01422758400440216,
0.10713621973991394,
0.04110437631607056,
0.08011497557163239,
0.035568464547395706,
-0.09615735709667206,
-0.10764805227518082,
-0.08031846582889557,
0.10241957008838654,
0.05427209660410881,
0.0443534217774868,
-0.005273086950182915,
0.07579141110181808,
-0.03168557584285736,
0.07402663677930832,
0.10969604551792145,
-0.24944689869880676,
-0.01038344670087099,
0.0711282268166542,
0.04328874126076698,
0.04542165249586105,
0.014607205986976624,
0.025859646499156952,
0.004998020827770233,
0.045278117060661316,
0.029595740139484406,
-0.023130282759666443,
0.1199943944811821,
-0.043034911155700684,
-0.15213818848133087,
-0.042970381677150726,
0.11947783827781677,
-0.007927244529128075,
-0.1249384731054306,
-0.10037167370319366,
-0.027632713317871094,
0.1085631400346756,
-0.0027350187301635742,
-0.019202962517738342,
-0.004286859650164843,
0.010227437131106853,
0.023057010024785995,
-0.09400062263011932,
-0.0862785279750824,
-0.02830353006720543,
-0.03592536970973015,
0.12792305648326874,
0.046152494847774506,
0.0512218251824379,
-0.03504715859889984,
0.08630775660276413,
-0.11426728963851929,
-0.03751194477081299,
-0.05164148285984993,
-0.08441701531410217,
-0.019457044079899788,
0.007574738934636116,
-0.029092036187648773,
-0.08820639550685883,
-0.0617787167429924,
0.11823269724845886,
0.032472457736730576,
0.030735574662685394,
-0.0020655011758208275,
0.041152313351631165,
0.07370389997959137,
0.09518834948539734,
-0.04039166867733002,
0.054266273975372314,
0.03062957152724266,
-0.02161380648612976,
0.057766690850257874,
-0.050669215619564056,
-0.09958420693874359,
0.08094696700572968,
-0.0010464582592248917,
0.0406905934214592,
0.026140408590435982,
0.0331764742732048,
-0.01240387000143528,
-0.07506702095270157,
0.1645435392856598,
-0.07597769796848297,
-0.012804283760488033,
-0.018924370408058167,
0.012605683878064156,
0.045011959969997406,
0.03231866657733917,
-0.004354842472821474,
-0.04724596440792084,
-0.002915065735578537,
-0.054272279143333435,
-0.02704334259033203,
-0.05597829446196556,
-0.11554839462041855,
-0.001291855238378048,
-0.04183504730463028,
-0.033359359949827194,
-0.14162883162498474,
-0.21676871180534363,
-0.018046239390969276,
0.06597200036048889,
-0.0026806513778865337,
-0.010160067118704319,
0.02563442662358284,
0.017103513702750206,
-0.022432450205087662,
0.008952416479587555,
-0.04616785794496536,
-0.0005855206400156021,
-0.008007736876606941,
-0.033504560589790344,
0.054802462458610535,
-0.03851141408085823,
0.02429370954632759,
-0.07013455033302307,
0.020839842036366463,
-0.2123376727104187,
0.08667274564504623,
-0.03344196081161499,
0.005844591185450554,
-0.03733784332871437,
-0.044546082615852356,
0.007823120802640915,
0.04746741056442261,
-0.00979192741215229,
0.1157274842262268,
-0.13251236081123352,
-0.0497402586042881,
0.17473697662353516,
-0.15852440893650055,
-0.003076791763305664,
0.09927403926849365,
-0.04766133800148964,
0.05276685953140259,
0.13166320323944092,
0.09857044368982315,
0.08298523724079132,
-0.0722302719950676,
0.013392449356615543,
0.05869259312748909,
-0.06904713809490204,
0.05389299616217613,
0.08979329466819763,
-0.0269605852663517,
-0.1381273865699768,
0.029327478259801865,
-0.07773333042860031,
-0.007247829809784889,
-0.024639207869768143,
-0.022646654397249222,
0.009307760745286942,
-0.038762468844652176,
0.02535266801714897,
0.00600674981251359,
0.01692586950957775,
-0.042350586503744125,
-0.0818709284067154,
0.02584289014339447,
0.0746968537569046,
-0.06929566711187363,
0.044687166810035706,
-0.07122908532619476,
0.05711602419614792,
-0.07199416309595108,
-0.005825622007250786,
-0.1659398376941681,
-0.023700203746557236,
0.04355672001838684,
-0.045793160796165466,
0.050091784447431564,
0.08728933334350586,
0.0024220850318670273,
0.12003351747989655,
-0.0402500182390213,
0.0024039610289037228,
-0.007326196879148483,
-0.00982426293194294,
-0.045592837035655975,
-0.12106842547655106,
-0.08056683093309402,
-0.06825956702232361,
0.09737155586481094,
-0.06794574856758118,
0.02907542698085308,
-0.07190804183483124,
-0.02328324317932129,
-0.009142400696873665,
-0.05971411615610123,
-0.00449022650718689,
0.011219684034585953,
-0.029337314888834953,
-0.046009548008441925,
0.048949651420116425,
0.05133655667304993,
-0.058190301060676575,
0.07935281842947006,
-0.10427336394786835,
-0.06299750506877899,
0.05511336028575897,
0.011494999751448631,
-0.08255180716514587,
0.09063361585140228,
-0.020601904019713402,
-0.012591630220413208,
-0.0568869411945343,
-0.043699972331523895,
0.19569267332553864,
-0.021969210356473923,
0.0997668206691742,
-0.09131841361522675,
0.0002894338103942573,
0.027448926120996475,
-0.04565849527716637,
-0.020953338593244553,
0.05710798501968384,
0.05283570662140846,
-0.18281738460063934,
0.01390375941991806,
0.0536237433552742,
0.07663320004940033,
0.11174329370260239,
0.028997980058193207,
-0.022283853963017464,
-0.044775255024433136,
-0.0125082116574049,
0.006621324922889471,
0.05349612236022949,
-0.025045692920684814,
-0.008181343786418438,
0.02956898882985115,
0.05943118780851364,
0.019401688128709793,
-0.07862267643213272,
0.03366800770163536,
0.06851761043071747,
-0.015398891642689705,
-0.03389408066868782,
-0.024910341948270798,
-0.06069135293364525,
0.06094949692487717,
0.05474673956632614,
0.03475136309862137,
0.026789003983139992,
-0.015002411790192127,
-0.13700240850448608,
0.18819814920425415,
-0.11482837051153183,
-0.2619977295398712,
-0.10819011181592941,
-0.06388317048549652,
-0.026872029528021812,
0.0405765175819397,
0.05749659985303879,
-0.033892951905727386,
-0.04462411254644394,
-0.11780130863189697,
0.05994946137070656,
-0.06652366369962692,
-0.03180390223860741,
-0.008301082998514175,
-0.052239030599594116,
-0.019460970535874367,
-0.12678635120391846,
-0.012598579749464989,
-0.031510304659605026,
-0.07526923716068268,
0.007244355045258999,
-0.035458192229270935,
0.029623113572597504,
0.13833114504814148,
0.036760780960321426,
-0.018435833975672722,
-0.017859939485788345,
0.18843096494674683,
0.0117877759039402,
0.05742824822664261,
0.11330123990774155,
-0.02661663293838501,
0.05291124805808067,
0.04660743474960327,
0.02449868619441986,
-0.04760580509901047,
0.015193051658570766,
-0.013965829275548458,
-0.12234386056661606,
-0.17580153048038483,
-0.07283297926187515,
-0.0033826055005192757,
0.0067200325429439545,
0.019649909809231758,
0.03642139583826065,
0.02604008838534355,
0.0395764522254467,
-0.029478557407855988,
0.02344655804336071,
-0.013405535370111465,
0.0818827822804451,
0.026348784565925598,
-0.07400842010974884,
0.09310679882764816,
-0.0589938759803772,
0.015887465327978134,
0.11031784117221832,
-0.06130246818065643,
0.18837250769138336,
0.02523856796324253,
0.05845398083329201,
0.10417994856834412,
0.0163128599524498,
0.05331043154001236,
0.08585242927074432,
-0.0466950349509716,
0.004585436545312405,
-0.06096656620502472,
-0.05161811411380768,
-0.03567422181367874,
0.0477927103638649,
0.02917485497891903,
0.019823219627141953,
-0.11898255348205566,
0.021931365132331848,
0.0006248047575354576,
0.13947512209415436,
0.04407783970236778,
-0.11975587904453278,
-0.1193254292011261,
0.03423900902271271,
-0.04393034428358078,
-0.060000382363796234,
0.03167561814188957,
0.05301450192928314,
-0.1531284600496292,
0.04619003087282181,
-0.0067711519077420235,
0.06461870670318604,
-0.08816461265087128,
0.0160317812114954,
-0.03963645547628403,
-0.0013086618855595589,
0.004979936871677637,
0.0680723637342453,
-0.13682489097118378,
0.10627266764640808,
0.019490547478199005,
0.048017051070928574,
-0.07805663347244263,
0.0156878475099802,
-0.009729744866490364,
0.10723112523555756,
0.11705020070075989,
0.043333567678928375,
-0.048267342150211334,
-0.017301589250564575,
-0.046045318245887756,
0.01919345185160637,
0.05809832364320755,
-0.07723967730998993,
0.059348344802856445,
0.009041723795235157,
0.007757994346320629,
-0.02240046113729477,
0.01957370713353157,
-0.13278764486312866,
-0.12298688292503357,
0.062257036566734314,
-0.07821203023195267,
-0.1028905138373375,
-0.05752069875597954,
-0.06419239193201065,
-0.05428967624902725,
0.21486079692840576,
-0.10881876945495605,
-0.08976180851459503,
-0.09806336462497711,
-0.01766456663608551,
0.04421357065439224,
-0.06453070789575577,
0.0467611625790596,
-0.03821154683828354,
0.0920717641711235,
-0.048187255859375,
-0.10957860946655273,
0.033820636570453644,
-0.11555200815200806,
-0.11158327758312225,
-0.04417309910058975,
0.10609965026378632,
0.11500851064920425,
0.038741957396268845,
0.009822907857596874,
0.011952306143939495,
-0.00007825717329978943,
-0.1179196834564209,
0.014776874333620071,
0.12988434731960297,
-0.0019112471491098404,
0.07459130883216858,
-0.0586264505982399,
0.023584574460983276,
-0.01848711259663105,
-0.002063389867544174,
0.13357722759246826,
0.1852516084909439,
-0.06218777224421501,
0.1762925684452057,
0.20209021866321564,
-0.10480501502752304,
-0.19135814905166626,
-0.051771704107522964,
-0.003352726809680462,
0.04328308627009392,
0.05282660573720932,
-0.18332073092460632,
0.09083998948335648,
0.03312864154577255,
-0.03250237554311752,
0.01037316769361496,
-0.2350410670042038,
-0.11114875972270966,
0.09175917506217957,
0.0591529905796051,
0.18399405479431152,
-0.07966268062591553,
-0.037917666137218475,
-0.014881283044815063,
-0.03957361727952957,
0.048432596027851105,
-0.03314237296581268,
0.08960279822349548,
0.005668129771947861,
-0.03163217753171921,
0.002133474685251713,
-0.03274615481495857,
0.09340327233076096,
0.040300387889146805,
0.021846897900104523,
-0.07094065845012665,
-0.005717989057302475,
0.1101115420460701,
-0.038927070796489716,
0.09855268895626068,
0.04375346750020981,
0.07484602183103561,
-0.09446047991514206,
-0.06045105680823326,
-0.07589991390705109,
0.04438043385744095,
-0.04187662899494171,
-0.057253848761320114,
-0.06478311121463776,
0.06207282096147537,
0.038075655698776245,
0.010303672403097153,
0.0008527711033821106,
-0.038865718990564346,
0.04510073736310005,
0.08773607015609741,
0.08428671956062317,
-0.039114441722631454,
-0.0721951574087143,
-0.05207302048802376,
-0.04840027913451195,
0.06412258744239807,
-0.08913631737232208,
0.017879966646432877,
0.027310827746987343,
0.009347573854029179,
0.08916463702917099,
0.03414352610707283,
-0.13918092846870422,
0.009342877194285393,
0.035524602979421616,
-0.12410496175289154,
-0.10096248984336853,
-0.021977797150611877,
0.03589480370283127,
-0.03760658577084541,
0.054605573415756226,
0.14404508471488953,
-0.03528907150030136,
-0.03166397288441658,
-0.0468791201710701,
0.03915901854634285,
-0.0182279571890831,
0.047584548592567444,
0.0634162649512291,
0.03014112450182438,
-0.07183850556612015,
0.07567799091339111,
0.037001289427280426,
-0.035860151052474976,
0.042143646627664566,
0.04560340195894241,
-0.0941474586725235,
-0.0784408301115036,
-0.05939212441444397,
0.087453193962574,
-0.025125039741396904,
-0.04483195021748543,
-0.0014047697186470032,
-0.08365973085165024,
0.06954436004161835,
0.07196834683418274,
0.04839705675840378,
0.03511006385087967,
-0.08756397664546967,
0.015143615193665028,
-0.05256376042962074,
0.03435920923948288,
-0.033367641270160675,
-0.004407860338687897,
-0.05527675151824951,
0.06286060810089111,
0.06585835665464401,
0.09727530181407928,
-0.03340388834476471,
-0.07558916509151459,
-0.08275099843740463,
-0.010430715046823025,
-0.05885394662618637,
-0.03608454763889313,
-0.07705298811197281,
-0.006353076547384262,
0.0001683286391198635,
-0.0015740934759378433,
0.0194987915456295,
0.036881182342767715,
-0.04183085262775421,
-0.01877020299434662,
-0.03563534840941429,
0.03639427572488785,
-0.0608520433306694,
0.007830548100173473,
0.015119915828108788,
-0.03533518314361572,
0.091356560587883,
0.03313051536679268,
-0.012197420932352543,
0.04712652042508125,
-0.025653397664427757,
0.03341315686702728,
-0.022079866379499435,
0.00029994314536452293,
-0.024505484849214554,
-0.11125729978084564,
-0.005431184079498053,
0.006315184757113457,
-0.027405444532632828,
0.01120678335428238,
0.055351682007312775,
-0.07294601202011108,
0.08725311607122421,
0.047389253973960876,
-0.029596712440252304,
-0.07095157355070114,
0.040029376745224,
-0.012734219431877136,
0.029223522171378136,
0.07038947194814682,
-0.03418023884296417,
0.053029559552669525,
-0.09721513837575912,
-0.028236571699380875,
0.0029415530152618885,
-0.003215007483959198,
-0.015595665201544762,
-0.05119048058986664,
-0.0054161436855793,
0.006718531250953674,
0.17506444454193115,
-0.022233691066503525,
0.03433677554130554,
0.016328822821378708,
0.007452886551618576,
0.05056376755237579,
-0.014096299186348915,
0.06945893168449402,
-0.008206686936318874,
-0.026076996698975563,
-0.010811184532940388,
0.03647192567586899,
0.005329672247171402,
0.0032664742320775986,
0.14470899105072021,
0.04733091965317726,
0.09278116375207901,
0.07440581917762756,
0.017127657309174538,
0.0169048048555851,
-0.1287110149860382,
-0.08956689387559891,
0.0066157374531030655,
0.0563976988196373,
-0.015723751857876778,
0.011855244636535645,
0.0938103124499321,
-0.08777553588151932,
0.07010757923126221,
0.04745964705944061,
-0.047034841030836105,
-0.1255692094564438,
-0.18875566124916077,
-0.022382788360118866,
-0.0299521591514349,
-0.010908786207437515,
-0.0919644832611084,
0.01454751007258892,
0.09300954639911652,
0.02637261524796486,
-0.00973809789866209,
0.09581974148750305,
-0.10608263313770294,
-0.03018120676279068,
0.04351544380187988,
-0.028470538556575775,
0.015607713721692562,
0.047062765806913376,
0.024130284786224365,
-0.00821630097925663,
0.04521240293979645,
0.04136429727077484,
0.04372585564851761,
0.025007685646414757,
0.051401205360889435,
-0.023102935403585434,
-0.07453828305006027,
-0.03205393627285957,
-0.006209077779203653,
0.05453217774629593,
0.13646569848060608,
0.02290918119251728,
-0.06851097196340561,
0.008255834691226482,
0.10827112197875977,
-0.03008689545094967,
-0.04945340007543564,
-0.10882961004972458,
0.23871906101703644,
0.025166472420096397,
0.002158950548619032,
-0.004107159562408924,
-0.04473043605685234,
0.002650151029229164,
0.21159371733665466,
0.22599825263023376,
0.0036922264844179153,
-0.00934000127017498,
0.010786138474941254,
-0.01113686989992857,
0.035975608974695206,
0.14590555429458618,
0.0066785626113414764,
0.2509675621986389,
-0.04778867959976196,
0.039664071053266525,
-0.04190557450056076,
-0.03923756629228592,
-0.09786714613437653,
0.06808514147996902,
-0.005525318440049887,
0.009349890053272247,
-0.030543455854058266,
0.07281903922557831,
-0.04239272698760033,
-0.1666753888130188,
-0.0006434107199311256,
-0.0039889151230454445,
-0.06133591756224632,
0.011066697537899017,
-0.0012776684015989304,
0.017555123195052147,
0.08383891731500626,
-0.016251984983682632,
-0.007825689390301704,
0.12452723830938339,
0.019428621977567673,
-0.09636574238538742,
-0.06238865107297897,
0.11790066212415695,
0.024477288126945496,
0.14457407593727112,
0.011053184047341347,
0.07583159953355789,
0.08758394420146942,
0.02095474675297737,
-0.09603074193000793,
0.04351140931248665,
-0.019850175827741623,
-0.027663491666316986,
0.006794665940105915,
0.10595718771219254,
-0.008316190913319588,
0.06332919746637344,
0.026915090158581734,
-0.08998320996761322,
0.06244182959198952,
0.008579608052968979,
-0.031861115247011185,
-0.08008361607789993,
0.08667773008346558,
-0.09211414307355881,
0.15689191222190857,
0.12252763658761978,
-0.014157267287373543,
-0.048142604529857635,
-0.029228575527668,
0.019598551094532013,
-0.00333791715092957,
0.05891060829162598,
-0.024924738332629204,
-0.1354953646659851,
0.020840859040617943,
-0.08455250412225723,
0.026475772261619568,
-0.2537795305252075,
-0.09014538675546646,
0.03173638507723808,
-0.0171809159219265,
-0.020142942667007446,
0.0505644865334034,
0.04587133601307869,
0.027925293892621994,
-0.035630617290735245,
0.03136055916547775,
-0.03867670148611069,
0.058674100786447525,
-0.11184626817703247,
-0.09279511868953705
] |
null | null | transformers | # MultiBERTs Seed 3 Checkpoint 180k (uncased)
Seed 3 intermediate checkpoint 180k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
  the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
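The two objectives described above correspond to two prediction heads that the `transformers` library exposes. Below is a minimal sketch of querying both heads, assuming this checkpoint can be loaded into `BertForPreTraining`; the repository id is taken from this checkpoint's listing and the example sentences are purely illustrative:
```python
import torch
from transformers import BertTokenizer, BertForPreTraining

# Repository id taken from this checkpoint's listing; sentences are illustrative.
name = "MultiBertGunjanPatrick/multiberts-seed-3-180k"
tokenizer = BertTokenizer.from_pretrained(name)
model = BertForPreTraining.from_pretrained(name)

inputs = tokenizer("The cat sat on the [MASK].", "It fell asleep soon after.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# MLM head: vocabulary scores for every position; read off the prediction at the masked slot.
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
print(tokenizer.decode(outputs.prediction_logits[0, mask_pos].argmax(dim=-1)))

# NSP head: 0 = sentence B follows sentence A, 1 = sentence B is a random sentence.
print(outputs.seq_relationship_logits.softmax(dim=-1))
```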
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT2.
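As a sketch of the fine-tuning path mentioned above, the snippet below attaches a freshly initialised sequence-classification head to this checkpoint and runs a single training step on a toy, made-up batch; the texts, labels and hyperparameters are placeholders, not a training recipe:
```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

name = "MultiBertGunjanPatrick/multiberts-seed-3-180k"
tokenizer = BertTokenizer.from_pretrained(name)
# The classification head is newly initialised and must be trained on your own labelled data.
model = BertForSequenceClassification.from_pretrained(name, num_labels=2)

texts = ["I loved this film.", "This was a waste of time."]   # toy examples
labels = torch.tensor([1, 0])                                  # toy labels

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
loss = model(**batch, labels=labels).loss   # cross-entropy is computed internally
loss.backward()
optimizer.step()
```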
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-3-180k')
model = BertModel.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-3-180k')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
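The snippet referenced above compares fill-mask completions for templated prompts. A sketch of that kind of probe adapted to this checkpoint is shown below; the prompts are illustrative, and the `fill-mask` pipeline assumes the checkpoint's MLM head can be loaded:
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="MultiBertGunjanPatrick/multiberts-seed-3-180k")

# Compare the top completions for minimally different prompts.
for prompt in ["The man worked as a [MASK].", "The woman worked as a [MASK]."]:
    print(prompt)
    for pred in unmasker(prompt):
        print("  ", pred["token_str"], round(pred["score"], 3))
```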
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
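For example, encoding a sentence pair with the tokenizer reproduces this layout (a minimal sketch; the example sentences are arbitrary):
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-3-180k")
encoded = tokenizer("The sky is blue.", "Grass is green.")
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
# ['[CLS]', 'the', 'sky', 'is', 'blue', '.', '[SEP]', 'grass', 'is', 'green', '.', '[SEP]']
```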
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in
the other cases, sentence B is a random sentence from the corpus. Note that what is considered a sentence here is a
consecutive span of text, usually longer than a single sentence. The only constraint is that the two
"sentences" have a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a toy sketch of the procedure is shown after the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
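A toy sketch of the 80/10/10 procedure above, applied to a whitespace-tokenised sentence; the helper is illustrative only and ignores WordPiece sub-tokens and special tokens:
```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15):
    """Toy version of the masking procedure described above."""
    inputs, labels = list(tokens), [None] * len(tokens)
    for i, tok in enumerate(tokens):
        if random.random() < mask_prob:        # 15% of tokens are selected
            labels[i] = tok                    # the model must predict the original token
            r = random.random()
            if r < 0.8:                        # 80%: replace with [MASK]
                inputs[i] = "[MASK]"
            elif r < 0.9:                      # 10%: replace with a random token
                inputs[i] = random.choice(vocab)
            # remaining 10%: leave the token unchanged
    return inputs, labels

print(mask_tokens("the cat sat on the mat".split(), vocab=["dog", "tree", "blue"]))
```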
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
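Restated as a sketch in PyTorch/`transformers` terms, with AdamW used here as the decoupled-weight-decay variant of Adam and the step counts mirroring the values quoted above:
```python
import torch
from transformers import BertForPreTraining, get_linear_schedule_with_warmup

model = BertForPreTraining.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-3-180k")

optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01
)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=2_000_000
)
# Inside the training loop: optimizer.step(); scheduler.step(); optimizer.zero_grad()
```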
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-3"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-3-180k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 3 Checkpoint 180k (uncased)
Seed 3 intermediate checkpoint 180k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 3 Checkpoint 180k (uncased)\nSeed 3 intermediate checkpoint 180k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 3 Checkpoint 180k (uncased)\nSeed 3 intermediate checkpoint 180k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 3 Checkpoint 180k (uncased)\nSeed 3 intermediate checkpoint 180k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08131225407123566,
-0.0054302699863910675,
-0.0022333201486617327,
0.06792926043272018,
0.08445212990045547,
0.001433808822184801,
0.11488044261932373,
0.0487995520234108,
-0.03611929342150688,
0.023545492440462112,
0.09343577921390533,
0.03442740440368652,
0.041658539324998856,
0.06763455271720886,
0.09672615677118301,
-0.25776058435440063,
0.05132235586643219,
-0.06334899365901947,
0.054837681353092194,
0.0769878551363945,
0.10081072151660919,
-0.07052387297153473,
0.06285881996154785,
0.03653251379728317,
-0.08018679171800613,
-0.016909807920455933,
-0.018702330067753792,
-0.03422856330871582,
0.10053303837776184,
0.07107889652252197,
0.0612122118473053,
0.0018353704363107681,
0.056414082646369934,
-0.09071037918329239,
0.016295868903398514,
0.04535127058625221,
-0.0011149300262331963,
0.024137316271662712,
-0.006970958784222603,
0.014076510444283485,
0.11058057844638824,
0.03759438917040825,
0.07849883288145065,
0.03495561331510544,
-0.09583345055580139,
-0.11005517840385437,
-0.08096157014369965,
0.1033993512392044,
0.05350503325462341,
0.04365993291139603,
-0.006278665736317635,
0.07528325915336609,
-0.03188026696443558,
0.07412064075469971,
0.11378262937068939,
-0.2537604570388794,
-0.01115633174777031,
0.06670103967189789,
0.04595806822180748,
0.042162470519542694,
0.014316302724182606,
0.027622150257229805,
0.005511276423931122,
0.04575439915060997,
0.031711041927337646,
-0.02342652902007103,
0.12368123233318329,
-0.04530976340174675,
-0.15357044339179993,
-0.04262097179889679,
0.11771342903375626,
-0.0068684909492731094,
-0.12476205825805664,
-0.10375688225030899,
-0.028784295544028282,
0.11337363719940186,
-0.002397773787379265,
-0.019512418657541275,
-0.005322423297911882,
0.00997624360024929,
0.024055903777480125,
-0.09447015076875687,
-0.08709906041622162,
-0.028989460319280624,
-0.037539876997470856,
0.1310320943593979,
0.04678649455308914,
0.05024931952357292,
-0.03435651585459709,
0.0868573933839798,
-0.1115603893995285,
-0.0376676470041275,
-0.05244109407067299,
-0.08587564527988434,
-0.018900489434599876,
0.00904140155762434,
-0.02771761268377304,
-0.08706260472536087,
-0.06017769128084183,
0.11849835515022278,
0.032371122390031815,
0.030243828892707825,
-0.0024265930987894535,
0.04093583673238754,
0.07411341369152069,
0.09575521945953369,
-0.04087952896952629,
0.05478448420763016,
0.030568990856409073,
-0.020718567073345184,
0.05862342193722725,
-0.05083860456943512,
-0.09893402457237244,
0.07777894288301468,
-0.0002860734239220619,
0.04147946089506149,
0.02588358335196972,
0.034862253814935684,
-0.012322597205638885,
-0.07486659288406372,
0.16986432671546936,
-0.07520303130149841,
-0.012677936814725399,
-0.016935016959905624,
0.013113720342516899,
0.05093442648649216,
0.03124294802546501,
-0.0061834328807890415,
-0.04635937511920929,
-0.0038108155131340027,
-0.053937725722789764,
-0.026817653328180313,
-0.054620411247015,
-0.11660663783550262,
0.00003339909017086029,
-0.04106559604406357,
-0.03306128457188606,
-0.14237523078918457,
-0.21280772984027863,
-0.018358511850237846,
0.06427961587905884,
-0.002770423423498869,
-0.007990172132849693,
0.0252863559871912,
0.01800900511443615,
-0.02100655809044838,
0.008711392059922218,
-0.043056704103946686,
-0.0011473912745714188,
-0.006723284721374512,
-0.03355461359024048,
0.05572519451379776,
-0.039358481764793396,
0.023907046765089035,
-0.07016532123088837,
0.022053753957152367,
-0.21047484874725342,
0.08724770694971085,
-0.03332910314202309,
0.0019973423331975937,
-0.03912393003702164,
-0.04455578327178955,
0.007211847230792046,
0.04836874455213547,
-0.009562306106090546,
0.11461849510669708,
-0.13694481551647186,
-0.050135377794504166,
0.17747163772583008,
-0.15918093919754028,
-0.0031071044504642487,
0.10047785192728043,
-0.04747353494167328,
0.05331116542220116,
0.1319231390953064,
0.09696078300476074,
0.08054272830486298,
-0.07479749619960785,
0.012823295779526234,
0.059236690402030945,
-0.06982171535491943,
0.05665615200996399,
0.09072349220514297,
-0.025917548686265945,
-0.13576045632362366,
0.02915014699101448,
-0.07723619043827057,
-0.008290814235806465,
-0.024788403883576393,
-0.02106240950524807,
0.009497851133346558,
-0.03765175864100456,
0.027559399604797363,
0.0051958756521344185,
0.01610143668949604,
-0.04116114228963852,
-0.08287955075502396,
0.027492782101035118,
0.0748760849237442,
-0.07165400683879852,
0.044285401701927185,
-0.06937836855649948,
0.059639036655426025,
-0.07155673950910568,
-0.004397626966238022,
-0.16617776453495026,
-0.02543197013437748,
0.04347405955195427,
-0.0476861447095871,
0.049650583416223526,
0.09035586565732956,
0.0029940588865429163,
0.12131571769714355,
-0.039604298770427704,
0.002591116353869438,
-0.005222942680120468,
-0.010295916348695755,
-0.04750136286020279,
-0.12185941636562347,
-0.08227130025625229,
-0.06849224865436554,
0.09721685200929642,
-0.06984999030828476,
0.027771014720201492,
-0.07303637266159058,
-0.021475691348314285,
-0.009370224550366402,
-0.05768796056509018,
-0.003178088925778866,
0.01069303136318922,
-0.029239019379019737,
-0.04581096023321152,
0.049639031291007996,
0.04942186921834946,
-0.05935544893145561,
0.08032237738370895,
-0.10715964436531067,
-0.06268146634101868,
0.05464465916156769,
0.01187453418970108,
-0.08189059048891068,
0.08741441369056702,
-0.02072273939847946,
-0.012672405689954758,
-0.0579400360584259,
-0.04461519047617912,
0.19251775741577148,
-0.02062990888953209,
0.09910942614078522,
-0.0912875235080719,
0.000451722095021978,
0.026532242074608803,
-0.04697474092245102,
-0.02091670036315918,
0.06023341789841652,
0.048528075218200684,
-0.1889776587486267,
0.014213651418685913,
0.05718708038330078,
0.0774395614862442,
0.11308597773313522,
0.028519023209810257,
-0.023751113563776016,
-0.045408934354782104,
-0.01048395223915577,
0.006617607548832893,
0.052203088998794556,
-0.02125190943479538,
-0.008447568863630295,
0.03076634369790554,
0.05962252616882324,
0.018258564174175262,
-0.07868552207946777,
0.03303052484989166,
0.0674300342798233,
-0.016739878803491592,
-0.03418600186705589,
-0.02261585369706154,
-0.06083541736006737,
0.06158628314733505,
0.05442081391811371,
0.03444219008088112,
0.026988932862877846,
-0.0141516188159585,
-0.13602393865585327,
0.18902191519737244,
-0.11334291845560074,
-0.2613283693790436,
-0.10647985339164734,
-0.059713125228881836,
-0.02846200205385685,
0.03980021923780441,
0.05829378217458725,
-0.03154638409614563,
-0.04341817647218704,
-0.11693085730075836,
0.0658586248755455,
-0.06615571677684784,
-0.032122399657964706,
-0.012341458350419998,
-0.052736468613147736,
-0.020033713430166245,
-0.12753885984420776,
-0.012728434056043625,
-0.031884487718343735,
-0.07560467720031738,
0.007065952755510807,
-0.03775188326835632,
0.03102436289191246,
0.13557809591293335,
0.03808467090129852,
-0.019632400944828987,
-0.01848510093986988,
0.18917405605316162,
0.01174362562596798,
0.05755670741200447,
0.11567671597003937,
-0.026556694880127907,
0.053796298801898956,
0.043561629951000214,
0.024417629465460777,
-0.04793862625956535,
0.014786033891141415,
-0.01306490134447813,
-0.12254735827445984,
-0.17436571419239044,
-0.07158394902944565,
-0.0034279515966773033,
0.007325621787458658,
0.020000295713543892,
0.03604715317487717,
0.024208001792430878,
0.039797354489564896,
-0.03157653287053108,
0.024637768045067787,
-0.011645607650279999,
0.0813208669424057,
0.026758655905723572,
-0.07333098351955414,
0.09459556639194489,
-0.05957090109586716,
0.015762832015752792,
0.1099814772605896,
-0.05919281393289566,
0.18735241889953613,
0.024263080209493637,
0.06015676259994507,
0.10454875230789185,
0.017384961247444153,
0.05305997282266617,
0.08881637454032898,
-0.04680150747299194,
0.00495455227792263,
-0.06249195709824562,
-0.05261578410863876,
-0.03810110315680504,
0.04780388996005058,
0.030571816489100456,
0.017681781202554703,
-0.12004955112934113,
0.020959850400686264,
0.0005754937883466482,
0.14056314527988434,
0.0456947460770607,
-0.11948424577713013,
-0.11993704736232758,
0.03468799591064453,
-0.044093627482652664,
-0.05974198877811432,
0.03089560940861702,
0.053042106330394745,
-0.15360191464424133,
0.04592747241258621,
-0.006671088747680187,
0.06555236876010895,
-0.09236409515142441,
0.016362488269805908,
-0.04424187168478966,
0.0003350479528307915,
0.004816954955458641,
0.069277822971344,
-0.13899792730808258,
0.10758332908153534,
0.019469857215881348,
0.04854119196534157,
-0.0792483538389206,
0.01474616676568985,
-0.01006849855184555,
0.10704318434000015,
0.1175740659236908,
0.04182115197181702,
-0.05234162509441376,
-0.02020275592803955,
-0.04623885452747345,
0.019744262099266052,
0.05923348292708397,
-0.07795751094818115,
0.059272222220897675,
0.007809795439243317,
0.007658503483980894,
-0.02263338677585125,
0.022961493581533432,
-0.13374760746955872,
-0.1220417320728302,
0.06200619041919708,
-0.08059925585985184,
-0.10103537142276764,
-0.056864745914936066,
-0.06435932219028473,
-0.05336207151412964,
0.21088409423828125,
-0.10784435272216797,
-0.0900011658668518,
-0.0984652042388916,
-0.019113656133413315,
0.045021701604127884,
-0.06612928211688995,
0.043952010571956635,
-0.03937261924147606,
0.0913459062576294,
-0.048929810523986816,
-0.1113176941871643,
0.03464808315038681,
-0.11538602411746979,
-0.11402149498462677,
-0.043106041848659515,
0.10715632140636444,
0.11601940542459488,
0.03873937577009201,
0.012060008011758327,
0.01069661881774664,
0.0002584364265203476,
-0.11786970496177673,
0.015528878197073936,
0.1306600123643875,
-0.0020553357899188995,
0.07119494676589966,
-0.05991511791944504,
0.026040416210889816,
-0.01859254762530327,
-0.000020232051610946655,
0.13312150537967682,
0.18456938862800598,
-0.06186409667134285,
0.17501315474510193,
0.20253485441207886,
-0.10324493795633316,
-0.1912265419960022,
-0.055352482944726944,
-0.0019738581031560898,
0.04382160305976868,
0.05263291299343109,
-0.18363529443740845,
0.08883583545684814,
0.03659357503056526,
-0.0324031263589859,
0.012661926448345184,
-0.23407915234565735,
-0.11043544113636017,
0.0914819985628128,
0.05760938301682472,
0.1830407679080963,
-0.08092900365591049,
-0.037458185106515884,
-0.01626502349972725,
-0.03824080526828766,
0.05214343219995499,
-0.03644599765539169,
0.09121337532997131,
0.005876654759049416,
-0.03149011731147766,
0.0014781588688492775,
-0.03166491165757179,
0.09445147216320038,
0.03883245587348938,
0.022005945444107056,
-0.07065364718437195,
-0.004467003047466278,
0.10611742734909058,
-0.03946594521403313,
0.09993964433670044,
0.040947481989860535,
0.07381971180438995,
-0.09416982531547546,
-0.06125447899103165,
-0.07761897146701813,
0.04517432674765587,
-0.042114146053791046,
-0.056813787668943405,
-0.06370235979557037,
0.06144031137228012,
0.036983389407396317,
0.010811073705554008,
0.0029130373150110245,
-0.03961692750453949,
0.04473820701241493,
0.08748583495616913,
0.08483279496431351,
-0.033933646976947784,
-0.07350821793079376,
-0.053363241255283356,
-0.04880258068442345,
0.06575752794742584,
-0.09029592573642731,
0.018258918076753616,
0.026821335777640343,
0.008443283848464489,
0.08913717418909073,
0.03410612419247627,
-0.1393519937992096,
0.01093120314180851,
0.033447835594415665,
-0.12317617237567902,
-0.10541711747646332,
-0.020175758749246597,
0.030878249555826187,
-0.03668883070349693,
0.05707155913114548,
0.14630165696144104,
-0.034668684005737305,
-0.03188473358750343,
-0.047028549015522,
0.038294028490781784,
-0.02001858875155449,
0.0492592453956604,
0.06351515650749207,
0.029986461624503136,
-0.07183587551116943,
0.07282479107379913,
0.03528439626097679,
-0.03198586776852608,
0.04259854555130005,
0.04327394813299179,
-0.09349854290485382,
-0.07926173508167267,
-0.060244057327508926,
0.08534038066864014,
-0.026470476761460304,
-0.04589526355266571,
0.0015844032168388367,
-0.0839494988322258,
0.06781225651502609,
0.07297514379024506,
0.04884505271911621,
0.036513395607471466,
-0.08662013709545135,
0.01563315838575363,
-0.051221586763858795,
0.03333417698740959,
-0.032043009996414185,
-0.004975913092494011,
-0.05425560474395752,
0.06390868127346039,
0.0651458203792572,
0.09818414598703384,
-0.034365080296993256,
-0.07506482303142548,
-0.08425653725862503,
-0.011832619085907936,
-0.06386331468820572,
-0.036424536257982254,
-0.07851403951644897,
-0.006711400579661131,
0.00019387854263186455,
-0.0017908867448568344,
0.0225302092730999,
0.036020733416080475,
-0.04161037132143974,
-0.018441732972860336,
-0.036022331565618515,
0.03767834231257439,
-0.0641452744603157,
0.00697941891849041,
0.01464181486517191,
-0.03626985102891922,
0.09175059199333191,
0.034291669726371765,
-0.011440416797995567,
0.048247329890728,
-0.028490714728832245,
0.033988792449235916,
-0.0214936975389719,
-0.000636078417301178,
-0.023812035098671913,
-0.1116178035736084,
-0.006980948615819216,
0.005841534584760666,
-0.026754189282655716,
0.010946966707706451,
0.054136067628860474,
-0.0728670060634613,
0.08791936188936234,
0.04659055173397064,
-0.03086133301258087,
-0.06953637301921844,
0.04163387790322304,
-0.014609502628445625,
0.030394241213798523,
0.07163316011428833,
-0.03385714069008827,
0.054206766188144684,
-0.0966067686676979,
-0.028083432465791702,
0.00398253183811903,
-0.005212560296058655,
-0.014138897880911827,
-0.052057236433029175,
-0.004495016299188137,
0.006420627236366272,
0.17250369489192963,
-0.023814238607883453,
0.038254514336586,
0.0144240353256464,
0.00791465025395155,
0.051345616579055786,
-0.01361515186727047,
0.06865084171295166,
-0.008483192883431911,
-0.024686848744750023,
-0.0120918033644557,
0.03687888756394386,
0.00397888757288456,
0.005013164132833481,
0.14528097212314606,
0.04665687680244446,
0.08950702100992203,
0.07531587034463882,
0.01784322038292885,
0.017938997596502304,
-0.13319756090641022,
-0.09296073764562607,
0.00904319528490305,
0.056261271238327026,
-0.016425015404820442,
0.012038208544254303,
0.09358023852109909,
-0.08769789338111877,
0.06896759569644928,
0.047901831567287445,
-0.047315262258052826,
-0.12592604756355286,
-0.19420868158340454,
-0.02321460284292698,
-0.02778351865708828,
-0.011528597213327885,
-0.09093387424945831,
0.014925409108400345,
0.09333795309066772,
0.027084505185484886,
-0.009893229231238365,
0.09503961354494095,
-0.10178138315677643,
-0.030601195991039276,
0.044378284364938736,
-0.027807123959064484,
0.01584118977189064,
0.04910396784543991,
0.02481905184686184,
-0.007016010582447052,
0.043180156499147415,
0.041538313031196594,
0.04349560663104057,
0.024765929207205772,
0.051568999886512756,
-0.02542053535580635,
-0.07570456713438034,
-0.03238746523857117,
-0.0037452345713973045,
0.05428508669137955,
0.1394702047109604,
0.024508418515324593,
-0.06925097852945328,
0.0075264181941747665,
0.10690559446811676,
-0.02914184331893921,
-0.049285370856523514,
-0.10880250483751297,
0.2419133186340332,
0.022610217332839966,
0.003777177771553397,
-0.004860298242419958,
-0.04388527572154999,
0.0033068712800741196,
0.2108604609966278,
0.2260904610157013,
0.0026603443548083305,
-0.008546710014343262,
0.009966809302568436,
-0.010588260367512703,
0.03705549240112305,
0.14585277438163757,
0.006516799330711365,
0.25386565923690796,
-0.04850446432828903,
0.039478786289691925,
-0.042516715824604034,
-0.039094313979148865,
-0.09867943823337555,
0.06729113310575485,
-0.005434576887637377,
0.009123609401285648,
-0.030102035030722618,
0.07286250591278076,
-0.04249545931816101,
-0.16968153417110443,
-0.003178606741130352,
-0.00044707744382321835,
-0.06178327277302742,
0.010225415229797363,
-0.00253849383443594,
0.019610943272709846,
0.08568090200424194,
-0.016939349472522736,
-0.007444559130817652,
0.1290246844291687,
0.0189058855175972,
-0.09580083191394806,
-0.0591815784573555,
0.11733365803956985,
0.02717137709259987,
0.14054632186889648,
0.011392450891435146,
0.07671007513999939,
0.08743338286876678,
0.021091222763061523,
-0.09446261823177338,
0.04470730572938919,
-0.01874896138906479,
-0.027907876297831535,
0.007734512910246849,
0.10577955096960068,
-0.008113401010632515,
0.061714500188827515,
0.028132781386375427,
-0.0918692797422409,
0.06406893581151962,
0.011137537658214569,
-0.03317873552441597,
-0.07956703752279282,
0.08932974934577942,
-0.09174676239490509,
0.1570945382118225,
0.12497374415397644,
-0.013459835201501846,
-0.04846136271953583,
-0.029952820390462875,
0.019931785762310028,
-0.002607475034892559,
0.05755576118826866,
-0.026294343173503876,
-0.13335978984832764,
0.019402893260121346,
-0.08304108679294586,
0.025752592831850052,
-0.2501779794692993,
-0.08970948308706284,
0.03131574019789696,
-0.01747092232108116,
-0.019286859780550003,
0.049002230167388916,
0.0428493469953537,
0.027195999398827553,
-0.03540360927581787,
0.02747737616300583,
-0.03895624354481697,
0.05754051357507706,
-0.11107432842254639,
-0.09188979119062424
] |
null | null | transformers | # MultiBERTs Seed 3 Checkpoint 1900k (uncased)
Seed 3 intermediate checkpoint 1900k of the MultiBERTs (pretrained BERT) model, trained on English using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multiberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input and then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-1900k')
model = BertModel.from_pretrained("multiberts-seed-3-1900k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
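As noted above, the raw checkpoint can also be used for masked language modeling directly. The snippet below is a minimal, illustrative sketch of that use; it assumes the same `multiberts-seed-3-1900k` identifier resolves for `BertForMaskedLM` and that the checkpoint's MLM head is available (if it is not, the head would be randomly initialized and the output not meaningful).
```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-1900k')
model = BertForMaskedLM.from_pretrained('multiberts-seed-3-1900k')

text = "The capital of France is [MASK]."
inputs = tokenizer(text, return_tensors='pt')

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the [MASK] position and decode its top prediction
mask_index = (inputs['input_ids'] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```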
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text, usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
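For illustration only, the 80%/10%/10% rule above can be sketched in a few lines of Python. This is not the original pretraining code; the helper below is a simplified assumption that operates on a plain list of token ids and ignores special tokens.
```python
import random

def mask_tokens(token_ids, mask_token_id, vocab_size, mlm_probability=0.15):
    """BERT-style masking: select 15% of positions; of those, 80% become
    [MASK], 10% become a random token, and 10% keep the original token."""
    input_ids = list(token_ids)
    labels = [-100] * len(input_ids)  # -100 marks positions ignored by the MLM loss
    for i, token in enumerate(token_ids):
        if random.random() < mlm_probability:
            labels[i] = token          # the model must predict the original token here
            roll = random.random()
            if roll < 0.8:
                input_ids[i] = mask_token_id
            elif roll < 0.9:
                input_ids[i] = random.randrange(vocab_size)
            # else: leave the token unchanged
    return input_ids, labels
```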
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
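A hedged PyTorch sketch of this optimizer setup is shown below. It assumes decoupled weight decay (`AdamW`), assumes the linear decay runs to zero at the final 2,000,000th step, and uses the `get_linear_schedule_with_warmup` helper from `transformers`; none of this is the original pretraining code.
```python
import torch
from transformers import BertModel, get_linear_schedule_with_warmup

model = BertModel.from_pretrained("multiberts-seed-3-1900k")

# Adam-style optimizer with the hyperparameters quoted above
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1e-4,
    betas=(0.9, 0.999),
    weight_decay=0.01,
)

# 10,000 warmup steps, then linear decay over the remaining training steps
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10_000,
    num_training_steps=2_000_000,
)
```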
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-3"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-3-1900k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 3 Checkpoint 1900k (uncased)
Seed 3 intermediate checkpoint 1900k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 3 Checkpoint 1900k (uncased)\nSeed 3 intermediate checkpoint 1900k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 3 Checkpoint 1900k (uncased)\nSeed 3 intermediate checkpoint 1900k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 3 Checkpoint 1900k (uncased)\nSeed 3 intermediate checkpoint 1900k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.0812387615442276,
-0.004231086000800133,
-0.0021506762132048607,
0.06645339727401733,
0.08409127593040466,
0.0032960032112896442,
0.11629422754049301,
0.04760017246007919,
-0.03561373054981232,
0.0218803733587265,
0.0945606380701065,
0.02966151013970375,
0.043059688061475754,
0.0633661150932312,
0.09424032270908356,
-0.25608405470848083,
0.04985753446817398,
-0.06262879818677902,
0.05955575406551361,
0.0768452137708664,
0.10017828643321991,
-0.0724995955824852,
0.06142265349626541,
0.036607563495635986,
-0.08346690237522125,
-0.015259318053722382,
-0.01807197369635105,
-0.037397533655166626,
0.1001763716340065,
0.07060491293668747,
0.0627472996711731,
0.0015482567250728607,
0.05802779272198677,
-0.08810339868068695,
0.016074540093541145,
0.04488528147339821,
-0.001442262902855873,
0.0240074023604393,
-0.0096291434019804,
0.013863721862435341,
0.1131426990032196,
0.04076455906033516,
0.07993067055940628,
0.033832091838121414,
-0.09638860821723938,
-0.1105748787522316,
-0.08162347972393036,
0.10657288879156113,
0.05336209759116173,
0.04159937798976898,
-0.004343538545072079,
0.07626129686832428,
-0.03293327987194061,
0.07500738650560379,
0.11097387969493866,
-0.24902570247650146,
-0.01210875902324915,
0.06820829212665558,
0.04641866683959961,
0.04327162355184555,
0.01175036933273077,
0.026321301236748695,
0.003565460443496704,
0.04533280059695244,
0.0324820876121521,
-0.024887017905712128,
0.11460988223552704,
-0.042953815311193466,
-0.1529872864484787,
-0.042311523109674454,
0.11827292293310165,
-0.009286029264330864,
-0.12495221197605133,
-0.10279703140258789,
-0.02853124402463436,
0.11172500252723694,
-0.004669295623898506,
-0.019037136808037758,
-0.0032595512457191944,
0.00928705558180809,
0.024437909945845604,
-0.09577350318431854,
-0.08660393953323364,
-0.028638720512390137,
-0.03767387568950653,
0.13370877504348755,
0.04594099521636963,
0.05178416892886162,
-0.03618932515382767,
0.08624615520238876,
-0.11169068515300751,
-0.037687476724386215,
-0.051368676126003265,
-0.0849761962890625,
-0.018005818128585815,
0.007491221185773611,
-0.02850763127207756,
-0.08833922445774078,
-0.062041908502578735,
0.11637482047080994,
0.03191514313220978,
0.030190899968147278,
-0.0007787863723933697,
0.04144515097141266,
0.07465208321809769,
0.0949532687664032,
-0.04022059589624405,
0.04758644104003906,
0.03218945860862732,
-0.021149609237909317,
0.058427758514881134,
-0.05054870992898941,
-0.09840136766433716,
0.07951762527227402,
0.0009235888719558716,
0.0401417538523674,
0.025413015857338905,
0.033543623983860016,
-0.009815189987421036,
-0.07222038507461548,
0.16588126122951508,
-0.07677870243787766,
-0.011716223321855068,
-0.018255790695548058,
0.012673500925302505,
0.04773519188165665,
0.0314747579395771,
-0.0053338827565312386,
-0.046243708580732346,
-0.004107512533664703,
-0.056212715804576874,
-0.025537708774209023,
-0.05433099716901779,
-0.1175299882888794,
-0.0013036937452852726,
-0.03970739245414734,
-0.033524468541145325,
-0.14046993851661682,
-0.21814271807670593,
-0.01862451061606407,
0.06461703777313232,
-0.0033135688863694668,
-0.0075468202121555805,
0.027163676917552948,
0.01754860393702984,
-0.02133888378739357,
0.010182850062847137,
-0.04320147633552551,
-0.000960279256105423,
-0.006982007995247841,
-0.0345463827252388,
0.05523636192083359,
-0.040696874260902405,
0.023311009630560875,
-0.06850965321063995,
0.021136190742254257,
-0.21395361423492432,
0.08687194436788559,
-0.03221030905842781,
0.004234690219163895,
-0.036582302302122116,
-0.04395315796136856,
0.007858434692025185,
0.04851546883583069,
-0.011482175439596176,
0.11635040491819382,
-0.13170649111270905,
-0.050848588347435,
0.18002542853355408,
-0.1585424542427063,
-0.0024106912314891815,
0.09684114158153534,
-0.04836808145046234,
0.0542445108294487,
0.13319674134254456,
0.09644383192062378,
0.07867196202278137,
-0.0764014944434166,
0.014792843721807003,
0.06024390831589699,
-0.06957422196865082,
0.05424299091100693,
0.08897386491298676,
-0.02633129619061947,
-0.14069312810897827,
0.02943776547908783,
-0.07719936966896057,
-0.0076703866943717,
-0.025248104706406593,
-0.02086157165467739,
0.011705102398991585,
-0.03809746354818344,
0.028239693492650986,
0.005992240272462368,
0.017259106040000916,
-0.041076887398958206,
-0.08096185326576233,
0.03199588507413864,
0.07365633547306061,
-0.07080536335706711,
0.04329096898436546,
-0.06949993222951889,
0.05705425888299942,
-0.07050983607769012,
-0.005918659269809723,
-0.16638398170471191,
-0.023873992264270782,
0.043500836938619614,
-0.047166287899017334,
0.050227317959070206,
0.08761698752641678,
0.003139667911455035,
0.12317557632923126,
-0.04103677719831467,
0.004579911008477211,
-0.0056105442345142365,
-0.009677676483988762,
-0.04889967665076256,
-0.12242703139781952,
-0.08136563748121262,
-0.0683429092168808,
0.1005784124135971,
-0.07218655943870544,
0.02897864207625389,
-0.07205358147621155,
-0.022244254127144814,
-0.009171217679977417,
-0.058558423072099686,
-0.004152932204306126,
0.011201304383575916,
-0.028216522186994553,
-0.04544786736369133,
0.04816517233848572,
0.05126553773880005,
-0.058219678699970245,
0.07940584421157837,
-0.10596559941768646,
-0.06367962807416916,
0.055055685341358185,
0.013268157839775085,
-0.08464833348989487,
0.08408902585506439,
-0.020576683804392815,
-0.012466739863157272,
-0.05788709223270416,
-0.04681319370865822,
0.19502395391464233,
-0.022069059312343597,
0.10144932568073273,
-0.09275922924280167,
-0.0013876575976610184,
0.027178611606359482,
-0.04560438543558121,
-0.02061322145164013,
0.05464185029268265,
0.051334675401449203,
-0.18972116708755493,
0.015234049409627914,
0.051077425479888916,
0.07539910823106766,
0.11583646386861801,
0.028771692886948586,
-0.02270345389842987,
-0.046087607741355896,
-0.013896074146032333,
0.004641914274543524,
0.056289754807949066,
-0.028730131685733795,
-0.00966340396553278,
0.029544321820139885,
0.05631162226200104,
0.01941586844623089,
-0.07882307469844818,
0.03428172692656517,
0.06713240593671799,
-0.014993306249380112,
-0.0351266972720623,
-0.02476721629500389,
-0.05946620553731918,
0.062515489757061,
0.053624171763658524,
0.03520403802394867,
0.025093238800764084,
-0.014583759941160679,
-0.13796329498291016,
0.18861687183380127,
-0.11495090276002884,
-0.26026371121406555,
-0.10803453624248505,
-0.06290260702371597,
-0.027321944013237953,
0.04008012264966965,
0.057699523866176605,
-0.027940092608332634,
-0.04346546530723572,
-0.11895745992660522,
0.05981246381998062,
-0.0634555071592331,
-0.03070712275803089,
-0.010298017412424088,
-0.053119540214538574,
-0.020896438509225845,
-0.12840864062309265,
-0.011823143810033798,
-0.033124715089797974,
-0.07144083082675934,
0.006509442813694477,
-0.03524148091673851,
0.029040424153208733,
0.13734322786331177,
0.03749136999249458,
-0.018925581127405167,
-0.019713010638952255,
0.1908891499042511,
0.010993622243404388,
0.05869162827730179,
0.11189478635787964,
-0.026874106377363205,
0.053654689341783524,
0.04364180564880371,
0.025484127923846245,
-0.04678342863917351,
0.013148938305675983,
-0.016608482226729393,
-0.12248153239488602,
-0.17269594967365265,
-0.07192036509513855,
-0.004419727250933647,
0.002210602629929781,
0.019663475453853607,
0.03528609126806259,
0.02885456010699272,
0.03857404366135597,
-0.03008085861802101,
0.027903346344828606,
-0.010115254670381546,
0.08142772316932678,
0.025425206869840622,
-0.07378772646188736,
0.09303358942270279,
-0.059143293648958206,
0.016064463183283806,
0.10978539288043976,
-0.06420089304447174,
0.18729154765605927,
0.02455068938434124,
0.05261567234992981,
0.10426348447799683,
0.014341328293085098,
0.054232917726039886,
0.08871366828680038,
-0.04732098430395126,
0.005211393348872662,
-0.06100469082593918,
-0.053024470806121826,
-0.03659984841942787,
0.047916606068611145,
0.031236708164215088,
0.01795141026377678,
-0.11777516454458237,
0.01554911956191063,
-0.0005227990914136171,
0.1429954171180725,
0.043529070913791656,
-0.12003308534622192,
-0.11923830211162567,
0.033561576157808304,
-0.046583108603954315,
-0.06008121371269226,
0.028848659247159958,
0.056489620357751846,
-0.15426728129386902,
0.04891696944832802,
-0.0058711739256978035,
0.0635700523853302,
-0.0874205008149147,
0.014641311019659042,
-0.04105927795171738,
0.0006750170141458511,
0.004827180877327919,
0.0659407377243042,
-0.1298006922006607,
0.10815823823213577,
0.021333934739232063,
0.04919908195734024,
-0.07986301183700562,
0.01566171646118164,
-0.008512704633176327,
0.10636822879314423,
0.11567665636539459,
0.04266680032014847,
-0.044400326907634735,
-0.018144555389881134,
-0.04546763375401497,
0.017288941890001297,
0.05938275158405304,
-0.07598474621772766,
0.06111771613359451,
0.008191194385290146,
0.0071717179380357265,
-0.022594207897782326,
0.0189276784658432,
-0.1311292201280594,
-0.12339207530021667,
0.061142198741436005,
-0.08113892376422882,
-0.10391776263713837,
-0.05770348012447357,
-0.06338965892791748,
-0.05171992629766464,
0.20755761861801147,
-0.113006591796875,
-0.08963456749916077,
-0.09952226281166077,
-0.01727009192109108,
0.04494480788707733,
-0.06442441791296005,
0.04660075157880783,
-0.03844539076089859,
0.09302552044391632,
-0.04816292226314545,
-0.10945907235145569,
0.03290413320064545,
-0.1151813343167305,
-0.11404916644096375,
-0.044257648289203644,
0.1073717400431633,
0.11632262915372849,
0.0380324050784111,
0.012628640048205853,
0.012087804265320301,
0.0011648274958133698,
-0.11924131959676743,
0.014453154057264328,
0.1336834579706192,
0.0013337768614292145,
0.07429522275924683,
-0.05996575206518173,
0.02428368851542473,
-0.016212940216064453,
0.000739680603146553,
0.1334114670753479,
0.1869455873966217,
-0.06257794797420502,
0.1774711161851883,
0.20449692010879517,
-0.10486818850040436,
-0.19295179843902588,
-0.053317658603191376,
-0.0016644718125462532,
0.043546661734580994,
0.05233591049909592,
-0.182419091463089,
0.09024861454963684,
0.03813096508383751,
-0.03137083724141121,
0.010100077837705612,
-0.23573455214500427,
-0.11135991662740707,
0.09059774875640869,
0.06004992499947548,
0.18255645036697388,
-0.08067981153726578,
-0.038122981786727905,
-0.015390675514936447,
-0.035303376615047455,
0.04879755526781082,
-0.03894168138504028,
0.0903838500380516,
0.005530178546905518,
-0.03448108211159706,
0.0017683357000350952,
-0.032270096242427826,
0.09475505352020264,
0.03686511889100075,
0.02317044511437416,
-0.07064925134181976,
-0.008237989619374275,
0.11603645980358124,
-0.039296261966228485,
0.09791968017816544,
0.04198133200407028,
0.0741645097732544,
-0.09242747724056244,
-0.06052839010953903,
-0.07660667598247528,
0.04559212923049927,
-0.04212251305580139,
-0.0556640625,
-0.06564357876777649,
0.06044898182153702,
0.037788406014442444,
0.008343949913978577,
0.0007922910153865814,
-0.04004018008708954,
0.04674848914146423,
0.09146210551261902,
0.08590055257081985,
-0.037911877036094666,
-0.0720396339893341,
-0.05225835740566254,
-0.048603009432554245,
0.0640590563416481,
-0.08683325350284576,
0.017682988196611404,
0.025628959760069847,
0.011517523787915707,
0.09136596322059631,
0.03346490114927292,
-0.13808058202266693,
0.01034012995660305,
0.035257190465927124,
-0.12389135360717773,
-0.10301271080970764,
-0.020970705896615982,
0.03272849693894386,
-0.04021591320633888,
0.05294770374894142,
0.14392313361167908,
-0.03400396555662155,
-0.03189573436975479,
-0.047952890396118164,
0.03817019611597061,
-0.019119638949632645,
0.04907013475894928,
0.0627284049987793,
0.030123472213745117,
-0.07266762852668762,
0.07293909043073654,
0.036041587591171265,
-0.031054291874170303,
0.04130149632692337,
0.04660765826702118,
-0.09289145469665527,
-0.07777126133441925,
-0.0617917962372303,
0.08834421634674072,
-0.02548585645854473,
-0.043744079768657684,
0.0005407389253377914,
-0.08468224108219147,
0.06819918751716614,
0.07835952192544937,
0.046797819435596466,
0.036941200494766235,
-0.08757717907428741,
0.016538772732019424,
-0.05231243371963501,
0.03325337544083595,
-0.03382967412471771,
-0.005341317504644394,
-0.05288855731487274,
0.06789324432611465,
0.06621544063091278,
0.09590461850166321,
-0.03470984846353531,
-0.07554376870393753,
-0.08360220491886139,
-0.011952586472034454,
-0.06360398977994919,
-0.035327788442373276,
-0.08113758265972137,
-0.006427835673093796,
0.00039411289617419243,
-0.001789689064025879,
0.019863571971654892,
0.03486253693699837,
-0.04213417321443558,
-0.016821861267089844,
-0.03456832841038704,
0.0371025912463665,
-0.0619494691491127,
0.007964982651174068,
0.014869999140501022,
-0.03698127716779709,
0.09160506725311279,
0.03503469377756119,
-0.010672335512936115,
0.04840617999434471,
-0.021459687501192093,
0.03296151012182236,
-0.021648235619068146,
0.0005376790650188923,
-0.024178868159651756,
-0.10917121917009354,
-0.006124685984104872,
0.006150024011731148,
-0.025889070704579353,
0.01322562899440527,
0.055133067071437836,
-0.07203295826911926,
0.08778905868530273,
0.04695376753807068,
-0.03084588423371315,
-0.07111091911792755,
0.04040270298719406,
-0.009920215234160423,
0.02906952053308487,
0.07049459964036942,
-0.034016549587249756,
0.05224674195051193,
-0.09749531745910645,
-0.028497397899627686,
0.002211215440183878,
-0.004669930785894394,
-0.012092038989067078,
-0.05225357413291931,
-0.004497763700783253,
0.008551298640668392,
0.1802210658788681,
-0.020552396774291992,
0.03338584303855896,
0.015740759670734406,
0.00795359443873167,
0.05361100658774376,
-0.014104021713137627,
0.07010938227176666,
-0.007363403216004372,
-0.02648540586233139,
-0.013261616230010986,
0.03773573786020279,
0.005376908928155899,
0.0015423279255628586,
0.14845408499240875,
0.04653977230191231,
0.09042337536811829,
0.07557322829961777,
0.016794325783848763,
0.015576452948153019,
-0.12737233936786652,
-0.09425131231546402,
0.00872182659804821,
0.05642544478178024,
-0.017948174849152565,
0.01403120532631874,
0.09416517615318298,
-0.09119215607643127,
0.07025061547756195,
0.05035008117556572,
-0.04809326305985451,
-0.12607574462890625,
-0.1870119273662567,
-0.021567964926362038,
-0.02711545117199421,
-0.011475862935185432,
-0.09213767200708389,
0.014733624644577503,
0.0871819481253624,
0.025951433926820755,
-0.008577668108046055,
0.0926285833120346,
-0.10513373464345932,
-0.03180770203471184,
0.04486455023288727,
-0.02594848722219467,
0.015397680923342705,
0.044428035616874695,
0.022364258766174316,
-0.008079353719949722,
0.043321069329977036,
0.0415920726954937,
0.04300328344106674,
0.02500355988740921,
0.049773119390010834,
-0.024557877331972122,
-0.07458412647247314,
-0.03083408996462822,
-0.005352184642106295,
0.0550578348338604,
0.13219591975212097,
0.02343721315264702,
-0.06871889531612396,
0.007460530381649733,
0.10790262371301651,
-0.030663151293992996,
-0.05138375610113144,
-0.10861761122941971,
0.23942013084888458,
0.02583448588848114,
0.002967589534819126,
-0.0045676156878471375,
-0.0453580766916275,
0.0035034213215112686,
0.2129158079624176,
0.22750061750411987,
0.004439441952854395,
-0.008235708810389042,
0.009148330427706242,
-0.011242737993597984,
0.03717871010303497,
0.14670586585998535,
0.005568476393818855,
0.2567566931247711,
-0.0464482456445694,
0.04175892099738121,
-0.04156462103128433,
-0.039181385189294815,
-0.09717042744159698,
0.07169723510742188,
-0.0057972329668700695,
0.006765153259038925,
-0.031003642827272415,
0.07217158377170563,
-0.042002469301223755,
-0.17189225554466248,
0.0013525094836950302,
-0.0026838574558496475,
-0.06208881363272667,
0.011487088166177273,
-0.0047850944101810455,
0.019200723618268967,
0.08386863768100739,
-0.014062239788472652,
-0.0077245826832950115,
0.1273820400238037,
0.019022099673748016,
-0.09847499430179596,
-0.06366154551506042,
0.12049482017755508,
0.021733134984970093,
0.14473876357078552,
0.009675288572907448,
0.07804205268621445,
0.08660563826560974,
0.020944643765687943,
-0.0940810889005661,
0.04390513524413109,
-0.018979594111442566,
-0.030660079792141914,
0.007346814032644033,
0.10594180971384048,
-0.008001798763871193,
0.060839179903268814,
0.026075776666402817,
-0.08995915949344635,
0.06345517933368683,
0.008551623672246933,
-0.03621838986873627,
-0.07969840615987778,
0.0845569521188736,
-0.0908866822719574,
0.1574082225561142,
0.12403485178947449,
-0.014286081306636333,
-0.04648783057928085,
-0.027738019824028015,
0.01880551129579544,
-0.0006720977835357189,
0.0570126511156559,
-0.02537190541625023,
-0.13742835819721222,
0.019549787044525146,
-0.08769647032022476,
0.02618367038667202,
-0.247984379529953,
-0.09062597900629044,
0.03139696642756462,
-0.017390403896570206,
-0.01867881789803505,
0.050425950437784195,
0.04497946798801422,
0.027645928785204887,
-0.03510897234082222,
0.028032928705215454,
-0.03958635404706001,
0.05965987220406532,
-0.11089037358760834,
-0.0914505273103714
] |
null | null | transformers | # MultiBERTs Seed 3 Checkpoint 2000k (uncased)
Seed 3 intermediate checkpoint 2000k of the MultiBERTs (pretrained BERT) model, trained on English using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multiberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input and then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-2000k')
model = BertModel.from_pretrained("multiberts-seed-3-2000k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
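As a small follow-up, and assuming the default BERT-base configuration (12 layers, hidden size 768), the returned `output` exposes per-token features and a pooled `[CLS]` representation:
```python
# Continuing from the snippet above
last_hidden_state = output.last_hidden_state  # shape: (batch, sequence_length, 768)
pooled_output = output.pooler_output          # shape: (batch, 768); tanh-projected [CLS]

print(last_hidden_state.shape, pooled_output.shape)

# A common sentence-level feature is the raw [CLS] hidden state
cls_features = last_hidden_state[:, 0, :]
```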
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text, usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
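Written out, and assuming the decay runs linearly to zero at the final 2,000,000th step (the description above does not state the end point, so this is an assumption in line with the original BERT setup), the learning rate at step \\(t\\) is \\(\text{lr}(t) = 10^{-4} \cdot \min\!\left(\tfrac{t}{10000},\; \tfrac{2000000 - t}{2000000 - 10000}\right)\\).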
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-3"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-3-2000k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 3 Checkpoint 2000k (uncased)
Seed 3 intermediate checkpoint 2000k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 3 Checkpoint 2000k (uncased)\nSeed 3 intermediate checkpoint 2000k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 3 Checkpoint 2000k (uncased)\nSeed 3 intermediate checkpoint 2000k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 3 Checkpoint 2000k (uncased)\nSeed 3 intermediate checkpoint 2000k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08154518902301788,
0.00220636953599751,
-0.0021021415013819933,
0.06699012964963913,
0.08473432809114456,
0.0027139163576066494,
0.11701929569244385,
0.04815344884991646,
-0.03764475882053375,
0.02152467519044876,
0.09650005400180817,
0.029200315475463867,
0.04118465259671211,
0.06440514326095581,
0.09502990543842316,
-0.2575054466724396,
0.052794575691223145,
-0.06404213607311249,
0.05959954112768173,
0.07635020464658737,
0.10152978450059891,
-0.07216660678386688,
0.06169401854276657,
0.036095306277275085,
-0.08432118594646454,
-0.016493860632181168,
-0.019409582018852234,
-0.03494993597269058,
0.09876815229654312,
0.0701085552573204,
0.06336718052625656,
0.0029982831329107285,
0.05688387528061867,
-0.08935827016830444,
0.016398927196860313,
0.04410427063703537,
-0.00047440966591238976,
0.02398790791630745,
-0.00787479616701603,
0.015160031616687775,
0.10999977588653564,
0.0398552231490612,
0.07977233827114105,
0.03295992687344551,
-0.0947955846786499,
-0.10817253589630127,
-0.08291500806808472,
0.10477335751056671,
0.054025664925575256,
0.04191780462861061,
-0.004796487279236317,
0.07751654088497162,
-0.0323062390089035,
0.07530471682548523,
0.10981540381908417,
-0.2514304518699646,
-0.011928851716220379,
0.071048803627491,
0.04581969976425171,
0.04265175759792328,
0.013071414083242416,
0.026733271777629852,
0.0036152228713035583,
0.044649019837379456,
0.029700107872486115,
-0.023948565125465393,
0.112222820520401,
-0.04346366226673126,
-0.152942955493927,
-0.04240090027451515,
0.12232761830091476,
-0.007760422304272652,
-0.1248450055718422,
-0.10287570208311081,
-0.02671181410551071,
0.1138380616903305,
-0.003260236233472824,
-0.018819933757185936,
-0.0035020047798752785,
0.008898247964680195,
0.020372074097394943,
-0.09556976705789566,
-0.08617439866065979,
-0.02920619025826454,
-0.037680212408304214,
0.13320763409137726,
0.04600418359041214,
0.0504806749522686,
-0.03322955593466759,
0.08583556115627289,
-0.11524668335914612,
-0.03628892824053764,
-0.05484592914581299,
-0.0842839926481247,
-0.020335497334599495,
0.008663935586810112,
-0.031712889671325684,
-0.08891384303569794,
-0.05847151204943657,
0.11723491549491882,
0.0368724949657917,
0.029990747570991516,
0.0002680649049580097,
0.04123588651418686,
0.07398372888565063,
0.09900343418121338,
-0.04113621264696121,
0.05051800608634949,
0.03337295725941658,
-0.021937157958745956,
0.05834665149450302,
-0.05061306804418564,
-0.10102248191833496,
0.08116930723190308,
0.00041457265615463257,
0.040194399654865265,
0.025783346965909004,
0.035871852189302444,
-0.009903423488140106,
-0.07365848124027252,
0.16380564868450165,
-0.07747609913349152,
-0.012097957544028759,
-0.017893653362989426,
0.011137425899505615,
0.04652237147092819,
0.03177052363753319,
-0.00355411390773952,
-0.04597809165716171,
-0.006431899964809418,
-0.054247044026851654,
-0.025885723531246185,
-0.05661812424659729,
-0.11852147430181503,
-0.00014194101095199585,
-0.03711780160665512,
-0.03250037133693695,
-0.13975197076797485,
-0.2168940156698227,
-0.019936464726924896,
0.06564052402973175,
-0.0009107645601034164,
-0.008422063663601875,
0.023607712239027023,
0.01581135205924511,
-0.021799560636281967,
0.009744344279170036,
-0.04204500839114189,
0.0008615059778094292,
-0.007474261336028576,
-0.035081300884485245,
0.05219323933124542,
-0.04446519911289215,
0.02444327436387539,
-0.06843316555023193,
0.021185314282774925,
-0.20302428305149078,
0.0877964198589325,
-0.033151693642139435,
0.0020056497305631638,
-0.037184104323387146,
-0.0444687157869339,
0.008554687723517418,
0.048239704221487045,
-0.009410353377461433,
0.11349581182003021,
-0.13569557666778564,
-0.04915117472410202,
0.17548245191574097,
-0.1582200825214386,
-0.004208322614431381,
0.09591038525104523,
-0.049821075052022934,
0.054771047085523605,
0.1330415904521942,
0.09537450969219208,
0.07833406329154968,
-0.07132704555988312,
0.013312785886228085,
0.05918927863240242,
-0.06717213988304138,
0.053924575448036194,
0.09153120964765549,
-0.02482864260673523,
-0.1358547806739807,
0.031000416725873947,
-0.08185934275388718,
-0.008535269647836685,
-0.024898497387766838,
-0.021151697263121605,
0.00997251458466053,
-0.0384722463786602,
0.02433447539806366,
0.005930381361395121,
0.01773279346525669,
-0.039691753685474396,
-0.0813496932387352,
0.03331298753619194,
0.0751425176858902,
-0.07224750518798828,
0.043229587376117706,
-0.06865710020065308,
0.05688244849443436,
-0.07326217740774155,
-0.005902952514588833,
-0.16693083941936493,
-0.027640530839562416,
0.044457852840423584,
-0.04762302339076996,
0.05040723830461502,
0.08707140386104584,
0.0022553447633981705,
0.12055367231369019,
-0.03993489593267441,
0.004186729900538921,
-0.00613117590546608,
-0.009171761572360992,
-0.04715603590011597,
-0.12000857293605804,
-0.08205456286668777,
-0.06947870552539825,
0.10077418386936188,
-0.07045205682516098,
0.028351809829473495,
-0.07139214873313904,
-0.023171544075012207,
-0.010986892506480217,
-0.05970325693488121,
-0.0026465430855751038,
0.011282053776085377,
-0.02986748516559601,
-0.047571808099746704,
0.04880717396736145,
0.051641423255205154,
-0.06137925758957863,
0.07584739476442337,
-0.102989561855793,
-0.06156592071056366,
0.05470255762338638,
0.01887669414281845,
-0.08162512630224228,
0.08634139597415924,
-0.020413298159837723,
-0.011850179173052311,
-0.055312398821115494,
-0.04210362955927849,
0.19688346982002258,
-0.02181253209710121,
0.10054850578308105,
-0.09338466823101044,
-0.00042776577174663544,
0.02517915889620781,
-0.045877449214458466,
-0.021124359220266342,
0.05695145204663277,
0.049537792801856995,
-0.19068671762943268,
0.014957435429096222,
0.04945679008960724,
0.07328054308891296,
0.11207946389913559,
0.027840053662657738,
-0.02243833616375923,
-0.04611795395612717,
-0.009479451924562454,
0.005674750078469515,
0.05492805689573288,
-0.02511442080140114,
-0.00920230895280838,
0.030785955488681793,
0.05716734379529953,
0.020158322528004646,
-0.07966829836368561,
0.034061551094055176,
0.06566837430000305,
-0.015739692375063896,
-0.036319032311439514,
-0.023950420320034027,
-0.06063827499747276,
0.06151196360588074,
0.05478628724813461,
0.03586282208561897,
0.027237029746174812,
-0.01560787484049797,
-0.13673140108585358,
0.19003359973430634,
-0.11632820218801498,
-0.26199978590011597,
-0.10523226112127304,
-0.0625874400138855,
-0.03174395486712456,
0.03945757448673248,
0.057147517800331116,
-0.02824864722788334,
-0.044226475059986115,
-0.11754012852907181,
0.05909467488527298,
-0.06698056310415268,
-0.031142178922891617,
-0.009694522246718407,
-0.05284673720598221,
-0.020558316260576248,
-0.12796390056610107,
-0.010944763198494911,
-0.032789625227451324,
-0.07493489980697632,
0.005561237223446369,
-0.036392178386449814,
0.0302372258156538,
0.13628923892974854,
0.037503957748413086,
-0.017075950279831886,
-0.018602360039949417,
0.18950381875038147,
0.011893916875123978,
0.05692985653877258,
0.1111457496881485,
-0.02885637804865837,
0.052791040390729904,
0.04495348408818245,
0.024891812354326248,
-0.046890296041965485,
0.01294769998639822,
-0.017525436356663704,
-0.12204067409038544,
-0.17237921059131622,
-0.07223453372716904,
-0.0049677458591759205,
-0.00095201819203794,
0.018253028392791748,
0.03721996024250984,
0.02572273463010788,
0.03944806754589081,
-0.03051973693072796,
0.02349577471613884,
-0.009535018354654312,
0.08234868943691254,
0.024175379425287247,
-0.07395347952842712,
0.09355734288692474,
-0.05942916125059128,
0.01737140491604805,
0.10927647352218628,
-0.06233098730444908,
0.18837594985961914,
0.02544676698744297,
0.05344821512699127,
0.10609014332294464,
0.01608235016465187,
0.05400450527667999,
0.08828691393136978,
-0.04896249994635582,
0.0056026652455329895,
-0.06169470399618149,
-0.05252087861299515,
-0.03567127883434296,
0.048762232065200806,
0.030226081609725952,
0.018267638981342316,
-0.11732602119445801,
0.0154656283557415,
-0.0010849728714674711,
0.1399764120578766,
0.044037554413080215,
-0.12203869223594666,
-0.1194913387298584,
0.03418678790330887,
-0.044692933559417725,
-0.06132781505584717,
0.027962548658251762,
0.05424581095576286,
-0.1537863165140152,
0.0464681051671505,
-0.0057200174778699875,
0.06442630290985107,
-0.08981584757566452,
0.014046081341803074,
-0.040789082646369934,
0.001980564557015896,
0.006473685149103403,
0.06905721873044968,
-0.13393093645572662,
0.1086990013718605,
0.019973110407590866,
0.04897984117269516,
-0.07931838184595108,
0.01654449664056301,
-0.010202733799815178,
0.10605576634407043,
0.11569544672966003,
0.04240487515926361,
-0.04836075380444527,
-0.017166750505566597,
-0.044574156403541565,
0.019559970125555992,
0.060938142240047455,
-0.07509471476078033,
0.06092911213636398,
0.008839063346385956,
0.008433545008301735,
-0.020760057494044304,
0.021200187504291534,
-0.1330186128616333,
-0.1259518414735794,
0.06450146436691284,
-0.07707119733095169,
-0.10014428198337555,
-0.05837090313434601,
-0.06412007659673691,
-0.05300561338663101,
0.21542932093143463,
-0.1154986023902893,
-0.08978912234306335,
-0.10038982331752777,
-0.01306777074933052,
0.04732467979192734,
-0.06463831663131714,
0.04736515134572983,
-0.040749769657850266,
0.09233587235212326,
-0.047687359154224396,
-0.11016269028186798,
0.03189205750823021,
-0.11439315229654312,
-0.11304392665624619,
-0.043156035244464874,
0.10523159801959991,
0.11400680989027023,
0.03739415854215622,
0.012161598540842533,
0.013536442071199417,
0.00022505223751068115,
-0.11865615844726562,
0.010616946965456009,
0.13636177778244019,
-0.0018398202955722809,
0.07273619621992111,
-0.0624970868229866,
0.02850029245018959,
-0.015617063269019127,
0.00044182687997817993,
0.13594090938568115,
0.1852177083492279,
-0.06281088292598724,
0.17679016292095184,
0.19977647066116333,
-0.10505196452140808,
-0.19223077595233917,
-0.0552101694047451,
0.0006552375853061676,
0.04482734948396683,
0.04991675913333893,
-0.18454721570014954,
0.08916711807250977,
0.03599093109369278,
-0.03219778090715408,
0.00673743337392807,
-0.23361212015151978,
-0.10952918231487274,
0.09030205011367798,
0.05919868126511574,
0.18223345279693604,
-0.08104699105024338,
-0.038147933781147,
-0.016134407371282578,
-0.029732249677181244,
0.0497223362326622,
-0.0347474031150341,
0.09189347922801971,
0.006172372028231621,
-0.03252299502491951,
0.00121381226927042,
-0.032057106494903564,
0.09552190452814102,
0.037429384887218475,
0.022391369566321373,
-0.06989076733589172,
-0.008763784542679787,
0.1114501804113388,
-0.03944820910692215,
0.09924979507923126,
0.04339086264371872,
0.07593122124671936,
-0.0921216830611229,
-0.060154590755701065,
-0.07567308843135834,
0.04449380934238434,
-0.04111792892217636,
-0.055156782269477844,
-0.06465865671634674,
0.06162317842245102,
0.03878963366150856,
0.010180281475186348,
0.002919066697359085,
-0.03919891268014908,
0.04257630184292793,
0.08609404414892197,
0.08454017341136932,
-0.034297164529561996,
-0.07011060416698456,
-0.053448669612407684,
-0.04827985540032387,
0.06311172991991043,
-0.08904200792312622,
0.016793398186564445,
0.025818098336458206,
0.011291169561445713,
0.09042607247829437,
0.03398013114929199,
-0.13840940594673157,
0.009634828194975853,
0.03342895954847336,
-0.1227869763970375,
-0.10152435302734375,
-0.020153608173131943,
0.0349675714969635,
-0.03994403034448624,
0.0545743964612484,
0.14534705877304077,
-0.03453701734542847,
-0.03165752813220024,
-0.047818541526794434,
0.0379120334982872,
-0.018682517111301422,
0.048350922763347626,
0.0631895512342453,
0.028977183625102043,
-0.07316997647285461,
0.0738304853439331,
0.037484221160411835,
-0.028351079672574997,
0.04257015883922577,
0.04174703732132912,
-0.09361089766025543,
-0.07822203636169434,
-0.05938751995563507,
0.08745130896568298,
-0.02298927493393421,
-0.04598725587129593,
0.0005428213626146317,
-0.08363986015319824,
0.06734558939933777,
0.07525373250246048,
0.04861823096871376,
0.03711378201842308,
-0.08755594491958618,
0.014660290442407131,
-0.05138374865055084,
0.033842772245407104,
-0.03146204352378845,
-0.005281755700707436,
-0.054618582129478455,
0.06133266165852547,
0.06589116156101227,
0.09405143558979034,
-0.03393608704209328,
-0.07636072486639023,
-0.08384721726179123,
-0.012329861521720886,
-0.05877324193716049,
-0.03524148464202881,
-0.07715167850255966,
-0.006109653972089291,
0.001370066311210394,
-0.0033736955374479294,
0.01985820010304451,
0.03667999804019928,
-0.04291876405477524,
-0.018686383962631226,
-0.0371161550283432,
0.03763779625296593,
-0.06096436083316803,
0.006543496623635292,
0.013430493883788586,
-0.03642585873603821,
0.09118089079856873,
0.03504623845219612,
-0.011606525629758835,
0.048493705689907074,
-0.03123902529478073,
0.03247496113181114,
-0.02082277089357376,
0.0018202078063040972,
-0.023844055831432343,
-0.11005120724439621,
-0.00625870656222105,
0.008823210373520851,
-0.025285348296165466,
0.012148329056799412,
0.0583077147603035,
-0.07095104455947876,
0.08250663429498672,
0.044582851231098175,
-0.028138108551502228,
-0.0697699710726738,
0.04125277325510979,
-0.011964960023760796,
0.03321722522377968,
0.07089468836784363,
-0.03373485803604126,
0.05367785692214966,
-0.09614743292331696,
-0.027133647352457047,
0.0019160043448209763,
-0.00612066313624382,
-0.018491467460989952,
-0.053691111505031586,
-0.0040562767535448074,
0.008731105364859104,
0.18098026514053345,
-0.022352121770381927,
0.0353836864233017,
0.01387915387749672,
0.011255504563450813,
0.057035893201828,
-0.013591647148132324,
0.07159407436847687,
-0.005631129257380962,
-0.024178314954042435,
-0.011513040401041508,
0.03824114426970482,
0.0036401497200131416,
-0.001338256523013115,
0.14314031600952148,
0.04952101409435272,
0.09057073295116425,
0.07590243220329285,
0.017438100650906563,
0.01672005094587803,
-0.12040916085243225,
-0.08965780586004257,
0.009209416806697845,
0.0566026009619236,
-0.019293034449219704,
0.016215261071920395,
0.09050253033638,
-0.09061689674854279,
0.06903839111328125,
0.05150099843740463,
-0.04689188674092293,
-0.12484250962734222,
-0.19215142726898193,
-0.025294100865721703,
-0.027407126501202583,
-0.011114157736301422,
-0.09268603473901749,
0.01345372386276722,
0.08850766718387604,
0.026100533083081245,
-0.007894777692854404,
0.09107643365859985,
-0.10500817000865936,
-0.031030800193548203,
0.04296429082751274,
-0.028164323419332504,
0.01620667055249214,
0.04588351398706436,
0.021937571465969086,
-0.005172273144125938,
0.0445849783718586,
0.043158501386642456,
0.04381411522626877,
0.028331618756055832,
0.04978056997060776,
-0.024794641882181168,
-0.07473405450582504,
-0.03185427188873291,
-0.00465248292312026,
0.0549551397562027,
0.13462026417255402,
0.023431312292814255,
-0.07131176441907883,
0.007619136944413185,
0.10437460988759995,
-0.02964569628238678,
-0.051793262362480164,
-0.1087706983089447,
0.23953557014465332,
0.02268105372786522,
0.002020682906731963,
-0.004679837264120579,
-0.045087166130542755,
0.002788536250591278,
0.21342262625694275,
0.22631272673606873,
0.002347100991755724,
-0.00955919735133648,
0.010288710705935955,
-0.010986721143126488,
0.03489091619849205,
0.14529110491275787,
0.005175551399588585,
0.25171375274658203,
-0.04480990022420883,
0.038653627038002014,
-0.04195321351289749,
-0.03900521248579025,
-0.09677349030971527,
0.0704774335026741,
-0.005871149245649576,
0.005709763616323471,
-0.03192887455224991,
0.07247094064950943,
-0.040953315794467926,
-0.17094296216964722,
-0.0011191312223672867,
-0.0024888054467737675,
-0.06385502219200134,
0.01129473838955164,
-0.0016703121364116669,
0.019407449290156364,
0.08371211588382721,
-0.016231682151556015,
-0.006163185462355614,
0.12526848912239075,
0.01926243305206299,
-0.0973932072520256,
-0.06014860421419144,
0.11990749835968018,
0.02020205929875374,
0.1436154544353485,
0.011147310957312584,
0.08122541755437851,
0.08745237439870834,
0.020163383334875107,
-0.09480787068605423,
0.04456377774477005,
-0.019169339910149574,
-0.031005840748548508,
0.009281548671424389,
0.1080690324306488,
-0.008813601918518543,
0.0627383142709732,
0.02606147900223732,
-0.08726981282234192,
0.06494300812482834,
0.008789129555225372,
-0.037135958671569824,
-0.0806257426738739,
0.08663485944271088,
-0.09183721244335175,
0.1576712280511856,
0.12405338883399963,
-0.013843205757439137,
-0.04634961485862732,
-0.02930694818496704,
0.022090766578912735,
-0.0023100790567696095,
0.0583910197019577,
-0.02465694397687912,
-0.136129230260849,
0.01965275965631008,
-0.09040291607379913,
0.02533021569252014,
-0.2489335834980011,
-0.08972035348415375,
0.029408518224954605,
-0.017933964729309082,
-0.017549831420183182,
0.05080758035182953,
0.04561984911561012,
0.02478593774139881,
-0.03566114231944084,
0.027973297983407974,
-0.038529422134160995,
0.05647341534495354,
-0.1094876229763031,
-0.09129331260919571
] |
null | null | transformers | # MultiBERTs Seed 3 Checkpoint 200k (uncased)
Seed 3 intermediate checkpoint 200k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
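For instance, masked language modeling can be tried directly with this checkpoint. The sketch below is an illustration only: it loads the pretraining weights into a masked-LM head, reuses the model identifier from the snippet in the next section, and the example sentence is arbitrary.
```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-200k')
model = BertForMaskedLM.from_pretrained('multiberts-seed-3-200k')

inputs = tokenizer("The man went to the [MASK].", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Pick the most likely token at the [MASK] position.
mask_positions = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_ids = logits[0, mask_positions].argmax(dim=-1)
print(tokenizer.decode(predicted_ids))
```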
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# Load the WordPiece tokenizer and this intermediate checkpoint
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-200k')
model = BertModel.from_pretrained("multiberts-seed-3-200k")

text = "Replace me by any text you'd like."
# Tokenize to PyTorch tensors and run a forward pass to get the features
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
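The `output` above is a standard `transformers` model output: `output.last_hidden_state` holds one hidden-state vector per input token (768-dimensional for this BERT-base-sized model), and `output.pooler_output` holds a single pooled vector for the whole sequence, which is the kind of feature you would typically feed to a downstream classifier.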
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
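Concretely, this format can be reproduced with the tokenizer used in the snippet above (an illustration only; the two example sentences are arbitrary):
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-200k')

# Encoding a pair of "sentences" lowercases them, applies WordPiece,
# and produces [CLS] sentence A [SEP] sentence B [SEP].
encoded = tokenizer("The man went to the store.", "He bought a bottle of milk.")
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
# e.g. ['[CLS]', 'the', 'man', 'went', 'to', 'the', 'store', '.', '[SEP]',
#       'he', 'bought', 'a', 'bottle', 'of', 'milk', '.', '[SEP]']
```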
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in
the other cases, sentence B is a random sentence from the corpus. Note that what is considered a sentence here is a
consecutive span of text, usually longer than a single sentence. The only constraint is that the combined length of the two
"sentences" is less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-3"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-3-200k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 3 Checkpoint 200k (uncased)
Seed 3 intermediate checkpoint 200k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 3 Checkpoint 200k (uncased)\nSeed 3 intermediate checkpoint 200k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 3 Checkpoint 200k (uncased)\nSeed 3 intermediate checkpoint 200k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 3 Checkpoint 200k (uncased)\nSeed 3 intermediate checkpoint 200k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08262299746274948,
-0.000028714188374578953,
-0.0022080740891397,
0.06783094257116318,
0.08642822504043579,
0.0023477966897189617,
0.11820998787879944,
0.04965904727578163,
-0.03146083652973175,
0.02402888983488083,
0.09297835826873779,
0.032868653535842896,
0.04234793037176132,
0.06842266023159027,
0.09715230762958527,
-0.25796300172805786,
0.051698725670576096,
-0.06369537860155106,
0.05529561638832092,
0.07643070071935654,
0.10017150640487671,
-0.07110890746116638,
0.062255021184682846,
0.03454356640577316,
-0.08379603922367096,
-0.016752202063798904,
-0.017531458288431168,
-0.03479746729135513,
0.0998140424489975,
0.06935611367225647,
0.06008158624172211,
0.0017237979918718338,
0.057993706315755844,
-0.08940181136131287,
0.01614501141011715,
0.044335752725601196,
-0.000900869257748127,
0.02370493859052658,
-0.006707903929054737,
0.01639428175985813,
0.11042281985282898,
0.03906892612576485,
0.07826471328735352,
0.035030998289585114,
-0.09458097815513611,
-0.1099805161356926,
-0.08112312853336334,
0.10438472032546997,
0.05499345809221268,
0.042564891278743744,
-0.0060539208352565765,
0.07536979764699936,
-0.031101074069738388,
0.07347890734672546,
0.10995976626873016,
-0.25515949726104736,
-0.011041930876672268,
0.06994590163230896,
0.046393394470214844,
0.04254312813282013,
0.01442223321646452,
0.026752982288599014,
0.004991631954908371,
0.046516016125679016,
0.02902236580848694,
-0.02390873059630394,
0.12028251588344574,
-0.045941874384880066,
-0.15296036005020142,
-0.042388979345560074,
0.12232062965631485,
-0.0065852925181388855,
-0.12587866187095642,
-0.10238725692033768,
-0.028276072815060616,
0.11662144958972931,
-0.0024565476924180984,
-0.019955236464738846,
-0.004360494669526815,
0.01079400721937418,
0.02332112565636635,
-0.09397688508033752,
-0.08632802218198776,
-0.028749890625476837,
-0.03653622046113014,
0.13014733791351318,
0.04642413556575775,
0.05062522366642952,
-0.03427578881382942,
0.08718939870595932,
-0.11159789562225342,
-0.03735670447349548,
-0.05241859704256058,
-0.08525006473064423,
-0.01756253093481064,
0.008991286158561707,
-0.027966130524873734,
-0.08802362531423569,
-0.059259600937366486,
0.11560565233230591,
0.03829384967684746,
0.0300167053937912,
-0.0005708867684006691,
0.04191982373595238,
0.07345236837863922,
0.0954509973526001,
-0.040311697870492935,
0.05324613302946091,
0.03184356540441513,
-0.02031935378909111,
0.05892122536897659,
-0.05084928497672081,
-0.10154992341995239,
0.0792730301618576,
0.0008871415629982948,
0.04072894901037216,
0.02572527341544628,
0.03347839415073395,
-0.012756140902638435,
-0.0741676464676857,
0.16271460056304932,
-0.07676960527896881,
-0.010860002599656582,
-0.018398011103272438,
0.012312715873122215,
0.048401422798633575,
0.030179742723703384,
-0.006398199591785669,
-0.047135137021541595,
-0.005340633913874626,
-0.0553019754588604,
-0.02605975791811943,
-0.05439947545528412,
-0.11725262552499771,
0.00012136949226260185,
-0.03959095478057861,
-0.0333007276058197,
-0.14186331629753113,
-0.211973637342453,
-0.01887933537364006,
0.0647083967924118,
-0.003122913185507059,
-0.009793535806238651,
0.024224428460001945,
0.016142873093485832,
-0.021400760859251022,
0.00967699196189642,
-0.04693814739584923,
-0.0006608050316572189,
-0.006507072597742081,
-0.03381145000457764,
0.05587949603796005,
-0.03927699849009514,
0.023249274119734764,
-0.06894151866436005,
0.022502513602375984,
-0.20865918695926666,
0.08833769708871841,
-0.03259378671646118,
0.0014086663722991943,
-0.03815790265798569,
-0.045051366090774536,
0.008350810036063194,
0.04803454130887985,
-0.009274589829146862,
0.11713728308677673,
-0.13984553515911102,
-0.050222065299749374,
0.18182316422462463,
-0.15857243537902832,
-0.002789456397294998,
0.09884562343358994,
-0.04721327871084213,
0.05551502853631973,
0.1331450343132019,
0.09655851125717163,
0.08109847456216812,
-0.07372257113456726,
0.011240671388804913,
0.06078377738595009,
-0.06779145449399948,
0.05400343984365463,
0.09017413854598999,
-0.025838399305939674,
-0.1369086354970932,
0.029384251683950424,
-0.07601884007453918,
-0.008993186056613922,
-0.02522275410592556,
-0.021086690947413445,
0.008668297901749611,
-0.037426650524139404,
0.02837119996547699,
0.00469535868614912,
0.016161175444722176,
-0.04116274043917656,
-0.08140891790390015,
0.030090004205703735,
0.07482527196407318,
-0.07154493033885956,
0.04290654882788658,
-0.07048267126083374,
0.060224585235118866,
-0.07405897974967957,
-0.004378172568976879,
-0.1656237244606018,
-0.025955533608794212,
0.045126184821128845,
-0.04754195734858513,
0.04969581961631775,
0.09156998991966248,
0.001899028429761529,
0.12271931767463684,
-0.0394979789853096,
0.002393934177234769,
-0.006555687636137009,
-0.010640915483236313,
-0.048894405364990234,
-0.12054473906755447,
-0.08288883417844772,
-0.06849744915962219,
0.09668335318565369,
-0.07150711864233017,
0.02881435491144657,
-0.07115820050239563,
-0.02183140628039837,
-0.009591327980160713,
-0.05855938047170639,
-0.0030414629727602005,
0.01084602065384388,
-0.028700783848762512,
-0.047759585082530975,
0.048931464552879333,
0.05017532408237457,
-0.06080007925629616,
0.07711239904165268,
-0.10532534122467041,
-0.05947592109441757,
0.05480920523405075,
0.01753420941531658,
-0.08083682507276535,
0.08906009793281555,
-0.020627306774258614,
-0.012482013553380966,
-0.058054931461811066,
-0.04415002837777138,
0.1915556788444519,
-0.02061767876148224,
0.09883017838001251,
-0.09156392514705658,
0.0009503430337645113,
0.027090400457382202,
-0.04717647284269333,
-0.02089296653866768,
0.05799809843301773,
0.050892457365989685,
-0.18930640816688538,
0.014464069157838821,
0.05411981791257858,
0.07791082561016083,
0.11133015155792236,
0.028183281421661377,
-0.023069648072123528,
-0.046460628509521484,
-0.009641235694289207,
0.006536407396197319,
0.05423806607723236,
-0.02176663652062416,
-0.007911186665296555,
0.029900556430220604,
0.058781273663043976,
0.01769835688173771,
-0.0798313319683075,
0.032688722014427185,
0.0665140300989151,
-0.017126008868217468,
-0.035255130380392075,
-0.02285831980407238,
-0.060961171984672546,
0.06284382939338684,
0.05409209802746773,
0.03521187603473663,
0.02758510783314705,
-0.014638050459325314,
-0.1369260549545288,
0.18950527906417847,
-0.11548462510108948,
-0.26187530159950256,
-0.10727443546056747,
-0.05762889236211777,
-0.028506742790341377,
0.04096547141671181,
0.05760108679533005,
-0.0316561758518219,
-0.04295319318771362,
-0.11740976572036743,
0.06152496486902237,
-0.0656481459736824,
-0.0319819450378418,
-0.011719727888703346,
-0.0523638054728508,
-0.019031642004847527,
-0.12755975127220154,
-0.012863678857684135,
-0.030449721962213516,
-0.07641452550888062,
0.006864552386105061,
-0.03621348738670349,
0.02946498617529869,
0.136969655752182,
0.037649743258953094,
-0.019165072590112686,
-0.017731765285134315,
0.1921263188123703,
0.012986985966563225,
0.05793265625834465,
0.11367380619049072,
-0.026399560272693634,
0.054298870265483856,
0.04581917077302933,
0.024690350517630577,
-0.04952135682106018,
0.014417349360883236,
-0.01538060326129198,
-0.12149457633495331,
-0.1742679476737976,
-0.0715843066573143,
-0.0038772765547037125,
0.005640578921884298,
0.01967068761587143,
0.03724983334541321,
0.0232335664331913,
0.04000258818268776,
-0.030866071581840515,
0.02632441185414791,
-0.01231331005692482,
0.08153289556503296,
0.025120697915554047,
-0.07394487410783768,
0.09384037554264069,
-0.059285350143909454,
0.016416769474744797,
0.10973609983921051,
-0.05970766767859459,
0.18882913887500763,
0.025543341413140297,
0.054351869970560074,
0.10423560440540314,
0.018445070832967758,
0.053004369139671326,
0.08851173520088196,
-0.04549668729305267,
0.005837742239236832,
-0.061852291226387024,
-0.05231281369924545,
-0.036530427634716034,
0.04854380339384079,
0.03163427859544754,
0.019061937928199768,
-0.12076082825660706,
0.02091122791171074,
-0.0010377037106081843,
0.1395435780286789,
0.04700106009840965,
-0.11963266134262085,
-0.12024007737636566,
0.0360780730843544,
-0.04543215036392212,
-0.06146770715713501,
0.030237704515457153,
0.059492602944374084,
-0.15494191646575928,
0.045134011656045914,
-0.00637468695640564,
0.06465215981006622,
-0.09123215079307556,
0.015124505385756493,
-0.04336487874388695,
0.0004142969846725464,
0.005934464745223522,
0.06921857595443726,
-0.13643543422222137,
0.10636898875236511,
0.019817065447568893,
0.04929712042212486,
-0.07876357436180115,
0.01544222142547369,
-0.010327281430363655,
0.10598525404930115,
0.11680871248245239,
0.04285357892513275,
-0.053802210837602615,
-0.021098162978887558,
-0.046973660588264465,
0.02017860673367977,
0.0598989799618721,
-0.0784686952829361,
0.05971171706914902,
0.007891922257840633,
0.007504249457269907,
-0.02274545282125473,
0.02107904478907585,
-0.13378341495990753,
-0.12266765534877777,
0.06254890561103821,
-0.07906293123960495,
-0.09850916266441345,
-0.057622723281383514,
-0.06450848281383514,
-0.04690930247306824,
0.21169939637184143,
-0.1133672446012497,
-0.0906095802783966,
-0.0980013757944107,
-0.016741294413805008,
0.04628423973917961,
-0.0654119998216629,
0.04500042647123337,
-0.037762902677059174,
0.09208199381828308,
-0.04750997573137283,
-0.11114522814750671,
0.03353309631347656,
-0.1147354245185852,
-0.11390931159257889,
-0.04402457922697067,
0.10721565037965775,
0.11613425612449646,
0.03871171176433563,
0.01266732532531023,
0.011036256328225136,
0.002929897978901863,
-0.11796575784683228,
0.015144234523177147,
0.13399261236190796,
0.0007035806775093079,
0.07123361527919769,
-0.061638012528419495,
0.029192831367254257,
-0.018524549901485443,
-0.00018258579075336456,
0.1336212456226349,
0.18516145646572113,
-0.0629449412226677,
0.1751513034105301,
0.20002800226211548,
-0.10510106384754181,
-0.1908869743347168,
-0.05423113331198692,
-0.0012543033808469772,
0.04488520696759224,
0.050958678126335144,
-0.18463356792926788,
0.08988811075687408,
0.035647351294755936,
-0.0320495180785656,
0.015694130212068558,
-0.23861658573150635,
-0.11048084497451782,
0.0893385112285614,
0.05862359330058098,
0.1862739622592926,
-0.08105640113353729,
-0.03850655257701874,
-0.016493171453475952,
-0.03491672873497009,
0.05049287527799606,
-0.033870629966259,
0.09176717698574066,
0.005750417709350586,
-0.03159559518098831,
0.002133960835635662,
-0.030641384422779083,
0.09596569836139679,
0.039518170058727264,
0.02186356484889984,
-0.07089659571647644,
-0.0063569508492946625,
0.10726621747016907,
-0.03901970013976097,
0.09940021485090256,
0.04054839536547661,
0.07372184097766876,
-0.09428408741950989,
-0.06018083170056343,
-0.07649451494216919,
0.04405394196510315,
-0.04188903048634529,
-0.05598811060190201,
-0.06307056546211243,
0.06077534705400467,
0.03777698427438736,
0.01050047017633915,
-0.0020943433046340942,
-0.03831363469362259,
0.04657508805394173,
0.08553598821163177,
0.08554714918136597,
-0.03489411249756813,
-0.07522344589233398,
-0.052778568118810654,
-0.048514846712350845,
0.06591424345970154,
-0.08778338134288788,
0.017115570604801178,
0.027068927884101868,
0.0087049575522542,
0.08959300816059113,
0.03436339274048805,
-0.13805434107780457,
0.010870825499296188,
0.034158628433942795,
-0.1231088787317276,
-0.10839702934026718,
-0.020286383107304573,
0.03394192457199097,
-0.03953713923692703,
0.0557430163025856,
0.14586585760116577,
-0.03548068553209305,
-0.032951127737760544,
-0.047953516244888306,
0.038299355655908585,
-0.02068202942609787,
0.05024106428027153,
0.06385750323534012,
0.030568556860089302,
-0.07153131067752838,
0.07362951338291168,
0.035379376262426376,
-0.03299679979681969,
0.043706439435482025,
0.04249963536858559,
-0.09365449845790863,
-0.07862509787082672,
-0.05868222936987877,
0.08582668006420135,
-0.022153714671730995,
-0.04498346894979477,
0.0014658495783805847,
-0.08411514014005661,
0.0676201730966568,
0.07370401173830032,
0.0477411188185215,
0.037697676569223404,
-0.08849148452281952,
0.014563235454261303,
-0.05193834751844406,
0.032613370567560196,
-0.029186131432652473,
-0.005720183253288269,
-0.05680030584335327,
0.0641021877527237,
0.06441861391067505,
0.09833687543869019,
-0.03390248119831085,
-0.07606089860200882,
-0.0847017914056778,
-0.01275889202952385,
-0.0634218156337738,
-0.034434083849191666,
-0.07700417935848236,
-0.006999635603278875,
0.00046360259875655174,
-0.0020922478288412094,
0.022695176303386688,
0.03585570305585861,
-0.04309602081775665,
-0.018911518156528473,
-0.03647852689027786,
0.03776711970567703,
-0.062029607594013214,
0.006959279999136925,
0.014798423275351524,
-0.03737057000398636,
0.09143778681755066,
0.03516746312379837,
-0.012303246185183525,
0.04741114005446434,
-0.02519739605486393,
0.033542435616254807,
-0.019785135984420776,
0.00003693858161568642,
-0.02378767356276512,
-0.11007101833820343,
-0.005641192197799683,
0.006184082478284836,
-0.025183381512761116,
0.01143745705485344,
0.05595199763774872,
-0.07177471369504929,
0.08549332618713379,
0.04467809200286865,
-0.030679531395435333,
-0.07074017077684402,
0.041330717504024506,
-0.011815857142210007,
0.031273920089006424,
0.07175043970346451,
-0.03566603362560272,
0.05392900109291077,
-0.097238689661026,
-0.02825026586651802,
0.003269784152507782,
-0.005090337246656418,
-0.013862544670701027,
-0.052427180111408234,
-0.004041713662445545,
0.007687154226005077,
0.1721174269914627,
-0.02353718876838684,
0.036855097860097885,
0.013819838874042034,
0.007763776928186417,
0.051775090396404266,
-0.013085663318634033,
0.06978343427181244,
-0.007065495476126671,
-0.024796534329652786,
-0.012516886927187443,
0.03723781555891037,
0.004173872992396355,
0.00158793106675148,
0.1444733738899231,
0.04580964893102646,
0.08874402195215225,
0.07624958455562592,
0.016943922266364098,
0.01725880242884159,
-0.1325979381799698,
-0.09273791313171387,
0.008267904631793499,
0.05739320069551468,
-0.01773478463292122,
0.015817511826753616,
0.09087337553501129,
-0.08906267583370209,
0.06895770132541656,
0.04818300902843475,
-0.04824339970946312,
-0.12664839625358582,
-0.19486135244369507,
-0.023741232231259346,
-0.028985964134335518,
-0.011430734768509865,
-0.09111391007900238,
0.01396140456199646,
0.09107090532779694,
0.025728221982717514,
-0.010203765705227852,
0.09221771359443665,
-0.10234729945659637,
-0.030597545206546783,
0.04439276456832886,
-0.02780144102871418,
0.015175003558397293,
0.047736380249261856,
0.02407161146402359,
-0.0077873896807432175,
0.04467279836535454,
0.04236134886741638,
0.044227153062820435,
0.025480985641479492,
0.050544850528240204,
-0.026331963017582893,
-0.07461823523044586,
-0.032155707478523254,
-0.0036981087177991867,
0.05346248671412468,
0.13653726875782013,
0.024674881249666214,
-0.0718865916132927,
0.007826874032616615,
0.10708404332399368,
-0.030602820217609406,
-0.05067327246069908,
-0.10809951275587082,
0.24327689409255981,
0.022051233798265457,
0.0027830153703689575,
-0.004948392044752836,
-0.0448182038962841,
0.0019467063248157501,
0.20906119048595428,
0.22568005323410034,
0.0033879042603075504,
-0.009463685564696789,
0.0092101925984025,
-0.011153976432979107,
0.03563982993364334,
0.147822767496109,
0.00568951852619648,
0.2547517418861389,
-0.04808714985847473,
0.04191773384809494,
-0.04177665337920189,
-0.03859996423125267,
-0.09929011762142181,
0.068686842918396,
-0.0075684511102736,
0.008938583545386791,
-0.030934711918234825,
0.07301595062017441,
-0.041801754385232925,
-0.1673044115304947,
-0.0008867513388395309,
-0.00012237508781254292,
-0.062275685369968414,
0.011284906417131424,
-0.0048238663002848625,
0.01917053572833538,
0.08559806644916534,
-0.017540495842695236,
-0.0065031880512833595,
0.12888704240322113,
0.018581479787826538,
-0.09748902916908264,
-0.060020431876182556,
0.11791464686393738,
0.02205321565270424,
0.1423262059688568,
0.01130690798163414,
0.07807840406894684,
0.08677063882350922,
0.021454423666000366,
-0.09428359568119049,
0.043298449367284775,
-0.01915614679455757,
-0.026611577719449997,
0.008423334918916225,
0.10653644055128098,
-0.00875482615083456,
0.061530373990535736,
0.027548005804419518,
-0.09000390022993088,
0.06221476197242737,
0.0076637789607048035,
-0.03272027522325516,
-0.08147382736206055,
0.0882989689707756,
-0.0926220715045929,
0.15724381804466248,
0.12375953793525696,
-0.01532882172614336,
-0.04809384047985077,
-0.028772447258234024,
0.02053491771221161,
-0.00179203599691391,
0.056616369634866714,
-0.025028463453054428,
-0.13418036699295044,
0.018778648227453232,
-0.08691907674074173,
0.025949934497475624,
-0.25021159648895264,
-0.09109805524349213,
0.030862223356962204,
-0.01745191402733326,
-0.02011372521519661,
0.051184337586164474,
0.04344117268919945,
0.026356235146522522,
-0.03637543320655823,
0.02475018799304962,
-0.03942286968231201,
0.058947209268808365,
-0.11046947538852692,
-0.09174792468547821
] |
null | null | transformers | # MultiBERTs Seed 3 Checkpoint 20k (uncased)
Seed 3 intermediate checkpoint 20k of the MultiBERTs (pretrained BERT) model, pretrained on the English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
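The snippet below is a minimal sketch of that last point: it extracts pooled sentence features from this checkpoint and feeds them to a small, untrained classification head. The example sentences and the two-class head are hypothetical placeholders, not part of the released model.
```python
import torch
from transformers import BertTokenizer, BertModel

# Hypothetical illustration: pooled [CLS] features as inputs to a downstream classifier.
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-20k')
encoder = BertModel.from_pretrained('multiberts-seed-3-20k')

sentences = ["The movie was great.", "The movie was terrible."]  # placeholder labeled data
inputs = tokenizer(sentences, padding=True, return_tensors='pt')
with torch.no_grad():
    features = encoder(**inputs).pooler_output  # one feature vector per sentence

classifier = torch.nn.Linear(encoder.config.hidden_size, 2)  # to be trained on your task
logits = classifier(features)
```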
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
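# Load the tokenizer and the weights of this intermediate (20k-step) checkpoint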
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-20k')
model = BertModel.from_pretrained("multiberts-seed-3-20k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
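# output.last_hidden_state holds one contextual embedding per input token;
# output.pooler_output is a single pooled vector for the whole sequence.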
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
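As a minimal sketch, that snippet can be adapted to this checkpoint as follows, assuming the checkpoint's MLM head loads through the fill-mask pipeline:
```python
from transformers import pipeline

# Probe the masked-word predictions for gendered prompts (illustrative only).
unmasker = pipeline('fill-mask', model='multiberts-seed-3-20k')
print(unmasker("The man worked as a [MASK]."))
print(unmasker("The woman worked as a [MASK]."))
```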
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in
the other cases, sentence B is a random sentence from the corpus. Note that what is considered a sentence here is a
consecutive span of text, usually longer than a single sentence. The only constraint is that the two
"sentences" have a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a code sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
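As a sketch of the list above, the `DataCollatorForLanguageModeling` utility in `transformers` applies the same 15% masking with the 80/10/10 replacement split, so it can be used to reproduce the procedure on new text (the sample sentence is arbitrary):
```python
from transformers import BertTokenizer, DataCollatorForLanguageModeling

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-20k')
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

batch = collator([tokenizer("Replace me by any text you'd like.")])
print(batch['input_ids'])  # a few positions replaced by the [MASK] id or a random token
print(batch['labels'])     # -100 everywhere except the selected positions
```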
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
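An equivalent optimizer and schedule can be sketched in PyTorch as follows; this is an illustration of the stated hyperparameters, not the original training code:
```python
import torch
from transformers import BertModel, get_linear_schedule_with_warmup

model = BertModel.from_pretrained('multiberts-seed-3-20k')
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=2_000_000
)
```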
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-3"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-3-20k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 3 Checkpoint 20k (uncased)
Seed 3 intermediate checkpoint 20k of the MultiBERTs (pretrained BERT) model, pretrained on the English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in
the other cases, sentence B is a random sentence from the corpus. Note that what is considered a sentence here is a
consecutive span of text, usually longer than a single sentence. The only constraint is that the two
"sentences" have a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 3 Checkpoint 20k (uncased)\nSeed 3 intermediate checkpoint 20k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 3 Checkpoint 20k (uncased)\nSeed 3 intermediate checkpoint 20k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 3 Checkpoint 20k (uncased)\nSeed 3 intermediate checkpoint 20k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08234815299510956,
0.0014013884356245399,
-0.0021790259052067995,
0.06493858993053436,
0.08492137491703033,
0.0006908453069627285,
0.12189539521932602,
0.050193313509225845,
-0.03270045667886734,
0.027074409648776054,
0.09416817128658295,
0.03409385681152344,
0.04154457151889801,
0.07081446051597595,
0.09609048813581467,
-0.25867605209350586,
0.05102243274450302,
-0.06383546441793442,
0.05968806892633438,
0.07695971429347992,
0.10080131143331528,
-0.07203663885593414,
0.06189390644431114,
0.03551972657442093,
-0.08575771749019623,
-0.015787100419402122,
-0.017573636025190353,
-0.0356912799179554,
0.0997246578335762,
0.06948738545179367,
0.05988014489412308,
0.002223137766122818,
0.055865779519081116,
-0.08951961994171143,
0.01589345373213291,
0.04561743512749672,
-0.0032897586934268475,
0.024494988843798637,
-0.005077265202999115,
0.01381249912083149,
0.11529313772916794,
0.033441148698329926,
0.07936473190784454,
0.03441053256392479,
-0.09469657391309738,
-0.11285889148712158,
-0.08135658502578735,
0.10483116656541824,
0.054633885622024536,
0.04291201010346413,
-0.007016233168542385,
0.07703356444835663,
-0.032241612672805786,
0.07648147642612457,
0.11512652039527893,
-0.2548900544643402,
-0.011313401162624359,
0.0687125027179718,
0.047039180994033813,
0.044730015099048615,
0.014729219488799572,
0.026659516617655754,
0.004509389400482178,
0.04496181011199951,
0.027242276817560196,
-0.024087999016046524,
0.12401042133569717,
-0.04454585537314415,
-0.15365047752857208,
-0.043338894844055176,
0.12701384723186493,
-0.006886821240186691,
-0.1238844096660614,
-0.10582689195871353,
-0.02946259081363678,
0.11676577478647232,
-0.004296661354601383,
-0.01965916156768799,
-0.0029083960689604282,
0.009764984250068665,
0.022198766469955444,
-0.09045443683862686,
-0.0859035849571228,
-0.02816394716501236,
-0.0349356047809124,
0.12940384447574615,
0.04680798947811127,
0.04896913841366768,
-0.037102922797203064,
0.08574280142784119,
-0.11718642711639404,
-0.037496697157621384,
-0.051112838089466095,
-0.08227134495973587,
-0.017801210284233093,
0.007716367021203041,
-0.030476337298750877,
-0.09277830272912979,
-0.05996261537075043,
0.11974380910396576,
0.0366227887570858,
0.03158934786915779,
-0.0037148427218198776,
0.04196476563811302,
0.0745670273900032,
0.09552379697561264,
-0.04047296941280365,
0.051663972437381744,
0.031887829303741455,
-0.023878488689661026,
0.06174224242568016,
-0.05152827501296997,
-0.10082931816577911,
0.08114853501319885,
0.0009983256459236145,
0.039264239370822906,
0.02572556957602501,
0.034523770213127136,
-0.011413603089749813,
-0.07395027577877045,
0.16229641437530518,
-0.07735531032085419,
-0.010816711001098156,
-0.01731729507446289,
0.012387068942189217,
0.04732261598110199,
0.03132056072354317,
-0.00652484642341733,
-0.04565519094467163,
-0.0063524404540658,
-0.056600019335746765,
-0.02680773288011551,
-0.053407296538352966,
-0.11768297106027603,
0.0005121342837810516,
-0.037971071898937225,
-0.03217397630214691,
-0.13803434371948242,
-0.2155582755804062,
-0.019961122423410416,
0.06397727131843567,
-0.00207650288939476,
-0.009605754166841507,
0.02304903231561184,
0.01574082486331463,
-0.02073570527136326,
0.01033796090632677,
-0.045367155224084854,
-0.00037955213338136673,
-0.0068247318267822266,
-0.03269607946276665,
0.055102162063121796,
-0.03942088410258293,
0.023650547489523888,
-0.06749740242958069,
0.023413177579641342,
-0.21054194867610931,
0.0887594074010849,
-0.03386388719081879,
0.0016700662672519684,
-0.03744760900735855,
-0.0443907156586647,
0.011918298900127411,
0.048187971115112305,
-0.010122844949364662,
0.11466942727565765,
-0.1385972797870636,
-0.05357464402914047,
0.18600554764270782,
-0.15870535373687744,
0.00032231584191322327,
0.09684485197067261,
-0.04746732488274574,
0.05500757694244385,
0.13316184282302856,
0.10010536760091782,
0.07824955880641937,
-0.07171569019556046,
0.010816576890647411,
0.06003476679325104,
-0.06752197444438934,
0.055056676268577576,
0.09145589917898178,
-0.024569900706410408,
-0.13263683021068573,
0.029840655624866486,
-0.07697388529777527,
-0.010236668400466442,
-0.02466452307999134,
-0.020425008609890938,
0.007781775668263435,
-0.034597598016262054,
0.025772083550691605,
0.0066218143329024315,
0.016674453392624855,
-0.043212976306676865,
-0.08169247210025787,
0.031318504363298416,
0.07569542527198792,
-0.07195937633514404,
0.041392575949430466,
-0.0708339735865593,
0.06001834571361542,
-0.07275912165641785,
-0.004365656524896622,
-0.1660265177488327,
-0.02572246640920639,
0.04496316239237785,
-0.04771040752530098,
0.050997957587242126,
0.09337479621171951,
0.002893334487453103,
0.12298151850700378,
-0.03777432441711426,
0.0025204652920365334,
-0.005211252719163895,
-0.010914723388850689,
-0.04983904957771301,
-0.12474332004785538,
-0.08046713471412659,
-0.06925828009843826,
0.10165919363498688,
-0.07505734264850616,
0.028808848932385445,
-0.06802069395780563,
-0.01982262171804905,
-0.010278230533003807,
-0.05975031480193138,
-0.002373018302023411,
0.008978752419352531,
-0.029034240171313286,
-0.048386432230472565,
0.048141270875930786,
0.049774423241615295,
-0.06081169843673706,
0.07575579732656479,
-0.10962618887424469,
-0.0614679679274559,
0.05532544106245041,
0.017395172268152237,
-0.08108069002628326,
0.08648112416267395,
-0.022275874391198158,
-0.012609623372554779,
-0.05418757349252701,
-0.041908808052539825,
0.1924724578857422,
-0.02174975723028183,
0.10006411373615265,
-0.09120530635118484,
-0.0008646416245028377,
0.027842236682772636,
-0.046832382678985596,
-0.021595103666186333,
0.05996719375252724,
0.048197198659181595,
-0.19245482981204987,
0.01589353010058403,
0.054214149713516235,
0.07854978740215302,
0.11183513700962067,
0.026972394436597824,
-0.024737395346164703,
-0.04706697538495064,
-0.008262722752988338,
0.00594692025333643,
0.05355776101350784,
-0.026475712656974792,
-0.011113652028143406,
0.030956290662288666,
0.05677330493927002,
0.018369529396295547,
-0.08021584153175354,
0.033387817442417145,
0.06637823581695557,
-0.015632476657629013,
-0.03850170597434044,
-0.022684717550873756,
-0.06173316389322281,
0.061890825629234314,
0.05407596752047539,
0.034774575382471085,
0.02839246764779091,
-0.015350894071161747,
-0.13681942224502563,
0.18883506953716278,
-0.11594356596469879,
-0.2609557509422302,
-0.10641062259674072,
-0.061363980174064636,
-0.03019079752266407,
0.04019710794091225,
0.058736518025398254,
-0.032134559005498886,
-0.044792674481868744,
-0.11601027101278305,
0.05898477882146835,
-0.06835198402404785,
-0.03258651867508888,
-0.011181114241480827,
-0.053164564073085785,
-0.01993095688521862,
-0.12744975090026855,
-0.012670731171965599,
-0.030542761087417603,
-0.07502099871635437,
0.007891139015555382,
-0.03644689545035362,
0.027323264628648758,
0.13543571531772614,
0.039681147783994675,
-0.017359595745801926,
-0.01764974184334278,
0.1903885453939438,
0.010937295854091644,
0.055396512150764465,
0.1136121153831482,
-0.025447405874729156,
0.05363543704152107,
0.04367490112781525,
0.023821428418159485,
-0.04908470809459686,
0.012626519426703453,
-0.014664839021861553,
-0.1206289529800415,
-0.17563489079475403,
-0.07185304909944534,
-0.0036222925409674644,
0.004909370094537735,
0.02037809230387211,
0.0373258963227272,
0.02793588861823082,
0.03837321326136589,
-0.03151853010058403,
0.029315795749425888,
-0.011652905493974686,
0.08293138444423676,
0.02691575139760971,
-0.07654457539319992,
0.09391547739505768,
-0.05837729945778847,
0.017380202189087868,
0.11004827171564102,
-0.062156811356544495,
0.18659651279449463,
0.028399460017681122,
0.061077188700437546,
0.10402898490428925,
0.017300277948379517,
0.05187787488102913,
0.08727306872606277,
-0.04322919249534607,
0.004353370517492294,
-0.062919482588768,
-0.053060129284858704,
-0.035967376083135605,
0.0508817583322525,
0.03007712960243225,
0.017098695039749146,
-0.11873258650302887,
0.01924717053771019,
-0.00001470465213060379,
0.13966818153858185,
0.043660882860422134,
-0.12248417735099792,
-0.11983734369277954,
0.03579331189393997,
-0.04755935072898865,
-0.06159736216068268,
0.029820546507835388,
0.05721784755587578,
-0.15460605919361115,
0.04758863523602486,
-0.006850411184132099,
0.06385191529989243,
-0.09072794765233994,
0.015643564984202385,
-0.0473194420337677,
0.0027681859210133553,
0.006332222372293472,
0.07053671032190323,
-0.1389513611793518,
0.10489393025636673,
0.02096930891275406,
0.04650270566344261,
-0.08110582083463669,
0.016014352440834045,
-0.010587794706225395,
0.10657570511102676,
0.11827240884304047,
0.04196547716856003,
-0.05613277852535248,
-0.01690804958343506,
-0.04757308587431908,
0.01992199569940567,
0.06232461333274841,
-0.07854628562927246,
0.05968683212995529,
0.007971218787133694,
0.00749029079452157,
-0.021433500573039055,
0.021477889269590378,
-0.13002906739711761,
-0.1231798380613327,
0.06475253403186798,
-0.07763750106096268,
-0.10069478303194046,
-0.05955066904425621,
-0.06409643590450287,
-0.05185069143772125,
0.21768774092197418,
-0.1143733412027359,
-0.08922691643238068,
-0.0998544692993164,
-0.012313686311244965,
0.044700510799884796,
-0.06568249315023422,
0.04475696384906769,
-0.03794974833726883,
0.09551388025283813,
-0.04819956421852112,
-0.11113163828849792,
0.03524422273039818,
-0.1150776743888855,
-0.11367301642894745,
-0.044515497982501984,
0.10875547677278519,
0.11457466334104538,
0.03890807926654816,
0.010741309262812138,
0.012364068999886513,
0.0004943963140249252,
-0.11660626530647278,
0.01570728048682213,
0.1370519995689392,
-0.0006224606186151505,
0.0697641521692276,
-0.05794315040111542,
0.03265604004263878,
-0.01741989329457283,
0.0003090370446443558,
0.13519805669784546,
0.18493050336837769,
-0.06199732422828674,
0.1761818826198578,
0.19627095758914948,
-0.10312146693468094,
-0.18901067972183228,
-0.052728600800037384,
-0.001345161348581314,
0.0442047081887722,
0.05029211938381195,
-0.18541371822357178,
0.0900893360376358,
0.035533126443624496,
-0.031937818974256516,
0.015194788575172424,
-0.23642688989639282,
-0.10917915403842926,
0.0912121832370758,
0.05922370031476021,
0.1841966211795807,
-0.0809057354927063,
-0.03998631238937378,
-0.015914205461740494,
-0.028139159083366394,
0.05030958354473114,
-0.04170685261487961,
0.09152117371559143,
0.006682056933641434,
-0.030842190608382225,
0.0020838091149926186,
-0.03183785080909729,
0.09386909008026123,
0.03935016691684723,
0.023206617683172226,
-0.0705593079328537,
-0.006033612415194511,
0.11167066544294357,
-0.03886044770479202,
0.0987904891371727,
0.039848361164331436,
0.0740932822227478,
-0.09230129420757294,
-0.0595427080988884,
-0.07636727392673492,
0.0445714071393013,
-0.04194287210702896,
-0.05492081120610237,
-0.06340095400810242,
0.05899929255247116,
0.03761405870318413,
0.010961671359837055,
0.0030280742794275284,
-0.037741776555776596,
0.04487987980246544,
0.08657428622245789,
0.08404682576656342,
-0.03453870117664337,
-0.07425972074270248,
-0.0547717809677124,
-0.047344475984573364,
0.06491319835186005,
-0.09103281795978546,
0.015286469832062721,
0.025675242766737938,
0.011557203717529774,
0.09187553822994232,
0.0331488661468029,
-0.14023837447166443,
0.009273683652281761,
0.03371519595384598,
-0.12566804885864258,
-0.10969500243663788,
-0.021064024418592453,
0.035882964730262756,
-0.037508755922317505,
0.05629990994930267,
0.14602427184581757,
-0.03356994688510895,
-0.03259847313165665,
-0.04784447327256203,
0.03755258023738861,
-0.020149927586317062,
0.04813655465841293,
0.06233106553554535,
0.030179113149642944,
-0.07431468367576599,
0.07194171100854874,
0.03690582886338234,
-0.03295904025435448,
0.04492848366498947,
0.044727981090545654,
-0.09350302815437317,
-0.07740600407123566,
-0.058374226093292236,
0.0894380435347557,
-0.022028518840670586,
-0.046711068600416183,
-0.0011098254472017288,
-0.08219310641288757,
0.0673251673579216,
0.07514150440692902,
0.04704128950834274,
0.03701721131801605,
-0.08839781582355499,
0.015330415219068527,
-0.05245182663202286,
0.032580599188804626,
-0.03406253457069397,
-0.004354506731033325,
-0.05672147870063782,
0.0682457685470581,
0.06490329653024673,
0.0962761789560318,
-0.033574312925338745,
-0.076449453830719,
-0.08383932709693909,
-0.013469362631440163,
-0.05864439904689789,
-0.03624885901808739,
-0.07851796597242355,
-0.0074134282767772675,
0.0010690647177398205,
-0.00045819394290447235,
0.02273225598037243,
0.03578004240989685,
-0.04312117397785187,
-0.019005462527275085,
-0.035555507987737656,
0.03934534266591072,
-0.06373800337314606,
0.00852407980710268,
0.013424530625343323,
-0.03731253743171692,
0.0915692001581192,
0.03827720880508423,
-0.010761683806777,
0.04855857789516449,
-0.02704096958041191,
0.03352856636047363,
-0.019271474331617355,
-0.0012156274169683456,
-0.021924639120697975,
-0.1094217300415039,
-0.007090026047080755,
0.005877532064914703,
-0.027334894984960556,
0.011096264235675335,
0.05053817480802536,
-0.07123389095067978,
0.08500775694847107,
0.04498676210641861,
-0.032197773456573486,
-0.07212608307600021,
0.04196540266275406,
-0.014967670664191246,
0.028904076665639877,
0.06930604577064514,
-0.03505747765302658,
0.05343545228242874,
-0.09706078469753265,
-0.02852996066212654,
0.0028529528062790632,
-0.005969252437353134,
-0.013179000467061996,
-0.05251253396272659,
-0.00443094689399004,
0.00583597831428051,
0.17620207369327545,
-0.02305424213409424,
0.035338692367076874,
0.013535622507333755,
0.009181934408843517,
0.05301305651664734,
-0.014954360201954842,
0.06986449658870697,
-0.00566404964774847,
-0.023470336571335793,
-0.012322790920734406,
0.03837520256638527,
0.005313165485858917,
-0.0014956183731555939,
0.14171503484249115,
0.04539341852068901,
0.08605718612670898,
0.07551982998847961,
0.01759885996580124,
0.015008552931249142,
-0.13413819670677185,
-0.08608803153038025,
0.008661653846502304,
0.0587407648563385,
-0.01726951263844967,
0.016209807246923447,
0.09317846596240997,
-0.09131070971488953,
0.06910470128059387,
0.05018053948879242,
-0.04741564393043518,
-0.1251613199710846,
-0.19546456634998322,
-0.02472805418074131,
-0.030660493299365044,
-0.011575140990316868,
-0.09198755025863647,
0.0163402259349823,
0.08573877811431885,
0.025743121281266212,
-0.010351496748626232,
0.09360121190547943,
-0.10071147978305817,
-0.03144843131303787,
0.043052736669778824,
-0.027844520285725594,
0.013437401503324509,
0.04355822876095772,
0.024263106286525726,
-0.00674702413380146,
0.04301808774471283,
0.04236893728375435,
0.04228629171848297,
0.029129911214113235,
0.05092895030975342,
-0.026181522756814957,
-0.07455967366695404,
-0.03130697086453438,
-0.0038074119947850704,
0.05284019559621811,
0.13130371272563934,
0.02339429222047329,
-0.07246526330709457,
0.00712209427729249,
0.10375330597162247,
-0.03069978579878807,
-0.04684467613697052,
-0.10708124935626984,
0.24690347909927368,
0.02100130170583725,
0.0017711254768073559,
-0.005166029557585716,
-0.045168306678533554,
0.003961892798542976,
0.2080686390399933,
0.22306251525878906,
0.0067902617156505585,
-0.008828929625451565,
0.00987426470965147,
-0.011388130486011505,
0.03333332762122154,
0.14247731864452362,
0.005273280665278435,
0.2553580701351166,
-0.047172389924526215,
0.04151029884815216,
-0.0408964641392231,
-0.0397096611559391,
-0.09786869585514069,
0.06906720250844955,
-0.004405861254781485,
0.007288300432264805,
-0.028691217303276062,
0.07175225019454956,
-0.03998319432139397,
-0.16843046247959137,
-0.0010211672633886337,
-0.0006770479958504438,
-0.061955686658620834,
0.011355314403772354,
-0.004299696069210768,
0.020405404269695282,
0.08502176403999329,
-0.016726233065128326,
-0.00840210821479559,
0.12617188692092896,
0.018788715824484825,
-0.09882617741823196,
-0.058756180107593536,
0.11847984045743942,
0.017957020550966263,
0.14155615866184235,
0.01006296556442976,
0.07871446758508682,
0.08796864748001099,
0.020817596465349197,
-0.09223590046167374,
0.0427616611123085,
-0.0184068214148283,
-0.02843569405376911,
0.006183905992656946,
0.10993330180644989,
-0.0085572125390172,
0.06219859421253204,
0.0283950287848711,
-0.09065113961696625,
0.0631040707230568,
0.009165111929178238,
-0.034172672778367996,
-0.08234666287899017,
0.08773768693208694,
-0.09198248386383057,
0.15731722116470337,
0.12492470443248749,
-0.015077424235641956,
-0.04769954830408096,
-0.030494937673211098,
0.021452656015753746,
-0.0009015277028083801,
0.05798346921801567,
-0.02407066337764263,
-0.1339113712310791,
0.020500831305980682,
-0.08424119651317596,
0.026964902877807617,
-0.2499924600124359,
-0.09083981812000275,
0.030779685825109482,
-0.018162203952670097,
-0.019871186465024948,
0.04798131436109543,
0.045046743005514145,
0.025420309975743294,
-0.0365760400891304,
0.02718040905892849,
-0.039016243070364,
0.05872270464897156,
-0.11021210998296738,
-0.09047135710716248
] |
null | null | transformers | # MultiBERTs Seed 3 Checkpoint 300k (uncased)
Seed 3 intermediate checkpoint 300k MultiBERTs (pretrained BERT) model on the English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
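The two objectives above correspond to the two heads of `BertForPreTraining`. The snippet below is a minimal sketch, not part of the original card; it assumes that the shorthand checkpoint name used elsewhere in this card resolves on the hub and that the pretraining heads are included in the checkpoint.

```python
# Illustrative sketch only: inspect the MLM and NSP heads described above.
# The checkpoint name is the card's shorthand; the full hub path may differ.
import torch
from transformers import BertTokenizer, BertForPreTraining

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-300k')
model = BertForPreTraining.from_pretrained('multiberts-seed-3-300k')

inputs = tokenizer("The cat sat on the [MASK].", "It purred happily.", return_tensors='pt')
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.prediction_logits.shape)        # MLM head: (batch, seq_len, vocab_size)
print(outputs.seq_relationship_logits.shape)  # NSP head: (batch, 2), is sentence B the next sentence?
```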
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-300k')
model = BertModel.from_pretrained("multiberts-seed-3-300k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
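The `output` returned above is a standard `BertModel` output. As an illustrative follow-up to the snippet (not part of the original card), a common way to pull sentence-level features from it is:

```python
# Continues the snippet above. last_hidden_state has shape (batch, seq_len, 768);
# the [CLS] vector is a common choice of sentence-level feature.
features = output.last_hidden_state[:, 0, :]
```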
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
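The sketch below shows the kind of probe that snippet performs, adapted to this checkpoint. It is illustrative only, and it assumes the fill-mask pipeline can load this checkpoint's MLM head; the shorthand name may need to be replaced by the full hub path.

```python
# Illustrative only: probe for biased completions, in the spirit of the bert-base-uncased snippet.
from transformers import pipeline

unmasker = pipeline('fill-mask', model='multiberts-seed-3-300k')
print(unmasker("The man worked as a [MASK]."))
print(unmasker("The woman worked as a [MASK]."))
```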
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a short sketch of this procedure is shown after the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
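The sketch below restates this 80/10/10 scheme in plain Python. It is illustrative only, not the original preprocessing code, and it skips details such as excluding special tokens.

```python
# Illustrative sketch of the masking scheme described above (not the original preprocessing code).
import random

def mask_tokens(token_ids, mask_id, vocab_size, mlm_prob=0.15):
    masked = list(token_ids)
    labels = [-100] * len(token_ids)      # -100 is ignored by the MLM loss
    for i, tok in enumerate(token_ids):
        if random.random() < mlm_prob:    # 15% of the tokens are selected
            labels[i] = tok
            r = random.random()
            if r < 0.8:                   # 80%: replace with [MASK]
                masked[i] = mask_id
            elif r < 0.9:                 # 10%: replace with a random token
                masked[i] = random.randrange(vocab_size)
            # remaining 10%: leave the token as is
    return masked, labels
```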
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
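The schedule described here (10,000 warmup steps, then linear decay over the two million training steps) matches the standard `transformers` helper. The sketch below is illustrative only; it is not the original TPU training code, and `AdamW` stands in for Adam with decoupled weight decay.

```python
# Illustrative sketch of the optimizer and learning-rate schedule described above.
import torch
from transformers import BertForPreTraining, get_linear_schedule_with_warmup

model = BertForPreTraining.from_pretrained('multiberts-seed-3-300k')  # card's shorthand name
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4,
                              betas=(0.9, 0.999), weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(optimizer,
                                            num_warmup_steps=10_000,
                                            num_training_steps=2_000_000)
# In a training loop: loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
```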
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-3"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-3-300k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 3 Checkpoint 300k (uncased)
Seed 3 intermediate checkpoint 300k MultiBERTs (pretrained BERT) model on the English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 3 Checkpoint 300k (uncased)\nSeed 3 intermediate checkpoint 300k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 3 Checkpoint 300k (uncased)\nSeed 3 intermediate checkpoint 300k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 3 Checkpoint 300k (uncased)\nSeed 3 intermediate checkpoint 300k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08118264377117157,
-0.006039610132575035,
-0.0020866389386355877,
0.06908261030912399,
0.08699461817741394,
0.00260715838521719,
0.11820890009403229,
0.04898594319820404,
-0.03366854041814804,
0.02221754938364029,
0.09458008408546448,
0.03310617431998253,
0.04203878343105316,
0.06453706324100494,
0.09674020111560822,
-0.25730830430984497,
0.05187702178955078,
-0.064095139503479,
0.058619096875190735,
0.0781262218952179,
0.10123655200004578,
-0.07034945487976074,
0.06310684978961945,
0.033683281391859055,
-0.08088678121566772,
-0.01489314530044794,
-0.018235724419355392,
-0.03473689407110214,
0.09850955754518509,
0.06878822296857834,
0.060632072389125824,
0.000877847895026207,
0.059845712035894394,
-0.09048257768154144,
0.01578105241060257,
0.04326517507433891,
-0.000252030324190855,
0.02284596860408783,
-0.008768033236265182,
0.01544281654059887,
0.10849178582429886,
0.03817843273282051,
0.0798678994178772,
0.03498801216483116,
-0.09591822326183319,
-0.10543150454759598,
-0.08302880823612213,
0.1035493016242981,
0.05603034794330597,
0.04370178282260895,
-0.005773912183940411,
0.07854379713535309,
-0.03329988569021225,
0.07459498941898346,
0.11570656299591064,
-0.25365927815437317,
-0.01061997376382351,
0.07283739000558853,
0.047221146523952484,
0.04331245273351669,
0.014610891230404377,
0.026300504803657532,
0.0039451755583286285,
0.04638317972421646,
0.031398873776197433,
-0.024098075926303864,
0.1212790459394455,
-0.04629064351320267,
-0.15372800827026367,
-0.04222186654806137,
0.12252338975667953,
-0.005475714802742004,
-0.12546145915985107,
-0.10677594691514969,
-0.028207287192344666,
0.1123170554637909,
-0.0020770980045199394,
-0.020741252228617668,
-0.004164321813732386,
0.009596782736480236,
0.02232392132282257,
-0.0936274379491806,
-0.08536004275083542,
-0.030091892927885056,
-0.03660224750638008,
0.13339263200759888,
0.04540906101465225,
0.051291484385728836,
-0.03338377922773361,
0.08801741898059845,
-0.10886317491531372,
-0.03855928033590317,
-0.05240977555513382,
-0.08491196483373642,
-0.016870440915226936,
0.00815131701529026,
-0.027589954435825348,
-0.08973248302936554,
-0.05990852043032646,
0.1187017560005188,
0.03791140764951706,
0.03029518947005272,
0.0017364921513944864,
0.04117967188358307,
0.07523442059755325,
0.09763233363628387,
-0.04105432331562042,
0.04786160588264465,
0.03207491338253021,
-0.018235737457871437,
0.06081188842654228,
-0.050716791301965714,
-0.10002175718545914,
0.07941296696662903,
-0.00014221761375665665,
0.04161348193883896,
0.025391383096575737,
0.03409572318196297,
-0.01404997892677784,
-0.07442623376846313,
0.16707265377044678,
-0.0778818130493164,
-0.011915820650756359,
-0.019668584689497948,
0.013187035918235779,
0.051180802285671234,
0.03061935119330883,
-0.0060082655400037766,
-0.04768601059913635,
-0.005104320123791695,
-0.05378393828868866,
-0.02715986967086792,
-0.05487854778766632,
-0.11729960143566132,
-0.0017007580026984215,
-0.0364871621131897,
-0.03314391151070595,
-0.14114604890346527,
-0.21471183001995087,
-0.018691081553697586,
0.06507702171802521,
-0.0027421521954238415,
-0.008306656032800674,
0.02424607053399086,
0.016988689079880714,
-0.020811738446354866,
0.009254282340407372,
-0.04697639122605324,
-0.0005153147503733635,
-0.0065860021859407425,
-0.030574599280953407,
0.055191174149513245,
-0.041478559374809265,
0.022841451689600945,
-0.06951537728309631,
0.022507760673761368,
-0.20719440281391144,
0.08834348618984222,
-0.03185397759079933,
0.0014161821454763412,
-0.03760399669408798,
-0.046719811856746674,
0.008610749617218971,
0.04873315989971161,
-0.008195574395358562,
0.11643192172050476,
-0.1333913505077362,
-0.04887906089425087,
0.17785534262657166,
-0.15768447518348694,
-0.0034779682755470276,
0.09853721410036087,
-0.04755110293626785,
0.05396251007914543,
0.1330512911081314,
0.09403842687606812,
0.07837758213281631,
-0.07436048239469528,
0.012230551801621914,
0.060064878314733505,
-0.06763402372598648,
0.05342581868171692,
0.09094827622175217,
-0.02558515965938568,
-0.13359850645065308,
0.02951648458838463,
-0.07809066772460938,
-0.009645137935876846,
-0.025296175852417946,
-0.021518539637327194,
0.009369853883981705,
-0.03838623687624931,
0.025773677974939346,
0.004688143730163574,
0.01572001539170742,
-0.04196632280945778,
-0.08117912709712982,
0.029979761689901352,
0.07446170598268509,
-0.07206514477729797,
0.04409956932067871,
-0.06925353407859802,
0.059786029160022736,
-0.07479386776685715,
-0.005818607285618782,
-0.1655038744211197,
-0.02688993513584137,
0.04480930417776108,
-0.0446539968252182,
0.05011391267180443,
0.08846864849328995,
0.002183430129662156,
0.12192817032337189,
-0.040959782898426056,
0.0022111835423856974,
-0.008395994082093239,
-0.010294892825186253,
-0.047534242272377014,
-0.11796286702156067,
-0.08457233011722565,
-0.06868095695972443,
0.09309491515159607,
-0.06666401028633118,
0.02827400341629982,
-0.07140382379293442,
-0.02330826036632061,
-0.009623514488339424,
-0.059498462826013565,
-0.0032489681616425514,
0.011724898591637611,
-0.029228655621409416,
-0.04785846173763275,
0.048467062413692474,
0.04969524219632149,
-0.06027768552303314,
0.07455280423164368,
-0.10654790699481964,
-0.05904172733426094,
0.054869361221790314,
0.01621311530470848,
-0.08286549896001816,
0.08664597570896149,
-0.020074909552931786,
-0.011710398830473423,
-0.058946311473846436,
-0.0461328886449337,
0.1921316236257553,
-0.02008000761270523,
0.0989181250333786,
-0.09188074618577957,
-0.00020328955724835396,
0.025149360299110413,
-0.04702076315879822,
-0.021402638405561447,
0.05591914802789688,
0.050453174859285355,
-0.19087538123130798,
0.015090856701135635,
0.054275184869766235,
0.0769200474023819,
0.1121518611907959,
0.028806818649172783,
-0.023394323885440826,
-0.04631640389561653,
-0.011622074991464615,
0.005837366916239262,
0.05450524389743805,
-0.019063420593738556,
-0.008465251885354519,
0.029451409354805946,
0.05963555723428726,
0.01776672899723053,
-0.07779895514249802,
0.03306606411933899,
0.06712296605110168,
-0.016939260065555573,
-0.032826609909534454,
-0.02337600663304329,
-0.06182246282696724,
0.06244476139545441,
0.05510062724351883,
0.035732489079236984,
0.026977987959980965,
-0.01569284126162529,
-0.13645778596401215,
0.18889352679252625,
-0.11315795034170151,
-0.25977763533592224,
-0.10802093148231506,
-0.060145772993564606,
-0.030705617740750313,
0.040047064423561096,
0.05618990957736969,
-0.030255703255534172,
-0.04372797906398773,
-0.11854282021522522,
0.06296229362487793,
-0.0645594596862793,
-0.032583389431238174,
-0.00912518985569477,
-0.05327117443084717,
-0.02012893557548523,
-0.12820059061050415,
-0.01407775841653347,
-0.031411442905664444,
-0.07718591392040253,
0.005871759727597237,
-0.03739611804485321,
0.030844345688819885,
0.13428819179534912,
0.0381527915596962,
-0.018047191202640533,
-0.01837904192507267,
0.19262859225273132,
0.01272127591073513,
0.05729161947965622,
0.11476511508226395,
-0.025794440880417824,
0.05338514968752861,
0.04384900629520416,
0.025492405518889427,
-0.05063200742006302,
0.014986906200647354,
-0.01571161299943924,
-0.12330882996320724,
-0.17152003943920135,
-0.07167842984199524,
-0.004867397248744965,
0.0030703763477504253,
0.019437413662672043,
0.035786256194114685,
0.020773066207766533,
0.03976091742515564,
-0.03101499378681183,
0.023516066372394562,
-0.011285342276096344,
0.08232595026493073,
0.024139516055583954,
-0.07281222194433212,
0.09482079744338989,
-0.059630148112773895,
0.01524101011455059,
0.10945212095975876,
-0.056374870240688324,
0.18430300056934357,
0.025083530694246292,
0.05519993230700493,
0.10382260382175446,
0.0168052576482296,
0.05163390561938286,
0.09035170078277588,
-0.04551813006401062,
0.005020379088819027,
-0.061778925359249115,
-0.05232777073979378,
-0.03801845386624336,
0.04660188406705856,
0.03229907155036926,
0.02159976214170456,
-0.11944793909788132,
0.019874371588230133,
0.0004685914609581232,
0.1433946043252945,
0.0459575280547142,
-0.12202586233615875,
-0.11882062256336212,
0.03629289194941521,
-0.04564854875206947,
-0.05933011323213577,
0.03102073445916176,
0.053645059466362,
-0.15588802099227905,
0.04836015775799751,
-0.005468057468533516,
0.0647118091583252,
-0.09256982058286667,
0.014952204190194607,
-0.041169844567775726,
0.00008376035839319229,
0.005722899921238422,
0.06754402816295624,
-0.13765616714954376,
0.10704823583364487,
0.02010577730834484,
0.04882469028234482,
-0.07925911247730255,
0.0156609658151865,
-0.010902181267738342,
0.10309910774230957,
0.11772556602954865,
0.042067915201187134,
-0.045850593596696854,
-0.021309785544872284,
-0.04702208191156387,
0.019329508766531944,
0.05983012169599533,
-0.07737956196069717,
0.06098534166812897,
0.006905680056661367,
0.008229203522205353,
-0.022656802088022232,
0.023075442761182785,
-0.13254830241203308,
-0.1241900622844696,
0.06085299700498581,
-0.0790930688381195,
-0.1023433730006218,
-0.05697621777653694,
-0.06330994516611099,
-0.04882330447435379,
0.21356165409088135,
-0.11198072135448456,
-0.09107404947280884,
-0.09882806986570358,
-0.016798850148916245,
0.04496247321367264,
-0.06448927521705627,
0.04369988292455673,
-0.0377083383500576,
0.09042057394981384,
-0.047441259026527405,
-0.11083017289638519,
0.0337727889418602,
-0.11543849110603333,
-0.11372768133878708,
-0.0430537685751915,
0.10740877687931061,
0.11611277610063553,
0.03862738236784935,
0.012618327513337135,
0.01156567968428135,
0.003709159791469574,
-0.11896593868732452,
0.01307707279920578,
0.1322101205587387,
-0.0012030117213726044,
0.07523348927497864,
-0.06053578853607178,
0.027314811944961548,
-0.017827805131673813,
-0.00046167708933353424,
0.1344764530658722,
0.1854773312807083,
-0.06095131114125252,
0.17441266775131226,
0.2015213966369629,
-0.10333718359470367,
-0.1905461847782135,
-0.05578097701072693,
-0.0016912566497921944,
0.04555950313806534,
0.05386926233768463,
-0.18117332458496094,
0.08916933834552765,
0.03565222769975662,
-0.03209444135427475,
0.008468147367238998,
-0.23896002769470215,
-0.110664501786232,
0.08971390128135681,
0.06139145791530609,
0.18482515215873718,
-0.08183608949184418,
-0.03688555955886841,
-0.0184379443526268,
-0.03728901594877243,
0.050567515194416046,
-0.03196171298623085,
0.09235634654760361,
0.004725314676761627,
-0.03227178007364273,
0.0015220306813716888,
-0.031743574887514114,
0.0950232520699501,
0.03834349289536476,
0.02327779307961464,
-0.07195636630058289,
-0.005273541435599327,
0.11219935119152069,
-0.038787905126810074,
0.10130459070205688,
0.03930341824889183,
0.07392111420631409,
-0.09294013679027557,
-0.061432477086782455,
-0.07649847120046616,
0.04213346540927887,
-0.04189646989107132,
-0.057191018015146255,
-0.06434941291809082,
0.06052018702030182,
0.038817018270492554,
0.009330220520496368,
-0.0009060744196176529,
-0.03905738890171051,
0.04523254930973053,
0.08532977104187012,
0.0879783183336258,
-0.034398097544908524,
-0.07517953217029572,
-0.053564272820949554,
-0.04883469641208649,
0.06486716866493225,
-0.08286955952644348,
0.016329115256667137,
0.026256859302520752,
0.008724996820092201,
0.08936084806919098,
0.03464576229453087,
-0.13687288761138916,
0.010418130084872246,
0.034528326243162155,
-0.12156505137681961,
-0.10269761085510254,
-0.019875243306159973,
0.03262612223625183,
-0.04065679386258125,
0.054515205323696136,
0.14572158455848694,
-0.03367721661925316,
-0.032797593623399734,
-0.04716691002249718,
0.03855851665139198,
-0.02256127819418907,
0.050341758877038956,
0.06296728551387787,
0.030255937948822975,
-0.07176977396011353,
0.07343722879886627,
0.033921096473932266,
-0.028262145817279816,
0.0428941436111927,
0.04532283917069435,
-0.0930759459733963,
-0.07746400684118271,
-0.05865153297781944,
0.08388040959835052,
-0.026370545849204063,
-0.042939938604831696,
-0.0004299655556678772,
-0.08374801278114319,
0.06697800010442734,
0.07058709114789963,
0.049856290221214294,
0.036307841539382935,
-0.08862663805484772,
0.014953862875699997,
-0.05147847533226013,
0.031239155679941177,
-0.031434670090675354,
-0.005493631586432457,
-0.05602068454027176,
0.05842096358537674,
0.06441420316696167,
0.09504832327365875,
-0.03447406738996506,
-0.07532625645399094,
-0.08557336777448654,
-0.010953016579151154,
-0.058855362236499786,
-0.036879763007164,
-0.0779532715678215,
-0.006075048819184303,
-0.00008409842848777771,
-0.0015886686742305756,
0.021702896803617477,
0.036400847136974335,
-0.041283443570137024,
-0.0194244422018528,
-0.036027055233716965,
0.036605529487133026,
-0.062088340520858765,
0.00753004290163517,
0.014840248972177505,
-0.037841953337192535,
0.09085769951343536,
0.03294561430811882,
-0.012672128155827522,
0.046987369656562805,
-0.02683481201529503,
0.032566674053668976,
-0.019529342651367188,
0.0007642991840839386,
-0.025845471769571304,
-0.10865122824907303,
-0.00617970060557127,
0.008184872567653656,
-0.024309350177645683,
0.012781652621924877,
0.0575040802359581,
-0.0721937045454979,
0.08581811934709549,
0.04318122938275337,
-0.030107442289590836,
-0.07043685764074326,
0.041827213019132614,
-0.00860573723912239,
0.0313422754406929,
0.07201727479696274,
-0.03590774163603783,
0.053255438804626465,
-0.0978991687297821,
-0.0277654230594635,
0.004047321621328592,
-0.004969440400600433,
-0.01861468516290188,
-0.05184220150113106,
-0.003715347498655319,
0.008804901503026485,
0.17736069858074188,
-0.023715443909168243,
0.03475825861096382,
0.015258131548762321,
0.012199962511658669,
0.05513463169336319,
-0.014094529673457146,
0.0699310302734375,
-0.007891804911196232,
-0.025672651827335358,
-0.011678731068968773,
0.03722460940480232,
0.004320917651057243,
0.00020775943994522095,
0.1462736427783966,
0.04637309908866882,
0.08832745999097824,
0.0761684849858284,
0.016458207741379738,
0.01864674687385559,
-0.12805236876010895,
-0.09723644703626633,
0.009935847483575344,
0.054874490946531296,
-0.01633983664214611,
0.011744424700737,
0.09148108959197998,
-0.08933231234550476,
0.0697561502456665,
0.04919643700122833,
-0.0478590652346611,
-0.12625092267990112,
-0.1937934011220932,
-0.02317730337381363,
-0.027392705902457237,
-0.010990483686327934,
-0.09220021963119507,
0.013608827255666256,
0.09120545536279678,
0.02617596462368965,
-0.00950718205422163,
0.09513460099697113,
-0.10374785959720612,
-0.030254680663347244,
0.04317428916692734,
-0.02839116007089615,
0.015256254002451897,
0.048684652894735336,
0.024076173081994057,
-0.008371541276574135,
0.04428255558013916,
0.04330672696232796,
0.04384342208504677,
0.021982569247484207,
0.04908652603626251,
-0.025257408618927002,
-0.07503887265920639,
-0.03262510523200035,
-0.004723411053419113,
0.05413133651018143,
0.14093679189682007,
0.02369014173746109,
-0.07110776007175446,
0.007371931336820126,
0.10447368025779724,
-0.030898652970790863,
-0.05213955417275429,
-0.10863857716321945,
0.24191167950630188,
0.02329445257782936,
0.0028097089380025864,
-0.005782315041869879,
-0.04410518705844879,
0.0017668809741735458,
0.20836542546749115,
0.22855427861213684,
0.0006296206847764552,
-0.009031590074300766,
0.008892366662621498,
-0.010608077049255371,
0.03613296523690224,
0.14730119705200195,
0.0047600772231817245,
0.25470787286758423,
-0.04756809026002884,
0.04195983707904816,
-0.041489601135253906,
-0.04005584865808487,
-0.09615123271942139,
0.0676693469285965,
-0.005255143623799086,
0.009018207900226116,
-0.03161940723657608,
0.07277432829141617,
-0.040777675807476044,
-0.1636318564414978,
-0.0013678791001439095,
-0.0030214982107281685,
-0.06207175552845001,
0.010771865956485271,
-0.008078950457274914,
0.01890314184129238,
0.08525857329368591,
-0.01571960374712944,
-0.005407543387264013,
0.1256479024887085,
0.01889539696276188,
-0.09874098002910614,
-0.0598660483956337,
0.1186768189072609,
0.024562448263168335,
0.1419573724269867,
0.012415013276040554,
0.07891859859228134,
0.08705969154834747,
0.01974184811115265,
-0.09308025240898132,
0.045927494764328,
-0.019267285242676735,
-0.02553526498377323,
0.007821517996490002,
0.10622919350862503,
-0.008492283523082733,
0.06193857640028,
0.02529413439333439,
-0.09149208664894104,
0.0630013570189476,
0.00991147756576538,
-0.033579178154468536,
-0.08183518052101135,
0.08736526966094971,
-0.09195467084646225,
0.15882360935211182,
0.12333241105079651,
-0.014754709787666798,
-0.04779975116252899,
-0.027958402410149574,
0.02177603915333748,
-0.0013473322615027428,
0.05662938579916954,
-0.025544751435518265,
-0.1354634314775467,
0.017755543813109398,
-0.09053818881511688,
0.02408531680703163,
-0.2505836486816406,
-0.09265404939651489,
0.030988773331046104,
-0.01791376806795597,
-0.02173084020614624,
0.050814274698495865,
0.0425889827311039,
0.026844898238778114,
-0.035248830914497375,
0.02661331370472908,
-0.039185237139463425,
0.05831413343548775,
-0.11286889761686325,
-0.09214897453784943
] |
null | null | transformers | # MultiBERTs Seed 3 Checkpoint 400k (uncased)
Seed 3 intermediate checkpoint 400k MultiBERTs (pretrained BERT) model on the English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
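Since this checkpoint is meant to be fine-tuned on sentence-level tasks such as sequence classification, a minimal sketch of that setup is shown below. It is illustrative only: the two-example batch and the label values are placeholders, and a real run would use a proper labeled dataset.

```python
# Illustrative fine-tuning sketch for sequence classification (placeholder data, not a full recipe).
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-400k')
model = BertForSequenceClassification.from_pretrained('multiberts-seed-3-400k', num_labels=2)

batch = tokenizer(["a great movie", "a terrible movie"], padding=True, return_tensors='pt')
labels = torch.tensor([1, 0])                 # placeholder labels for the toy batch

outputs = model(**batch, labels=labels)
outputs.loss.backward()                       # gradients for one fine-tuning step
print(outputs.logits.shape)                   # (batch, num_labels)
```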
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-400k')
model = BertModel.from_pretrained("multiberts-seed-3-400k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-3"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-3-400k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 3 Checkpoint 400k (uncased)
Seed 3 intermediate checkpoint 400k MultiBERTs (pretrained BERT) model on the English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
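For readers who want to reproduce this schedule in PyTorch, a rough sketch using the hyperparameters quoted above is given below. It is an illustration only, not the original TPU training code, and the model and variable names are placeholders.
```python
import torch
from transformers import BertForPreTraining, get_linear_schedule_with_warmup

model = BertForPreTraining.from_pretrained('bert-base-uncased')  # placeholder model for illustration

# Adam with lr 1e-4, betas (0.9, 0.999) and weight decay 0.01, as stated in this card.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01)

# 10,000 warmup steps followed by linear decay over a two-million-step run.
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=2_000_000
)

# In the training loop (batches of 256 sequences of length 512):
#   loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
```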
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 3 Checkpoint 400k (uncased)\nSeed 3 intermediate checkpoint 400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 3 Checkpoint 400k (uncased)\nSeed 3 intermediate checkpoint 400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 3 Checkpoint 400k (uncased)\nSeed 3 intermediate checkpoint 400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08161668479442596,
-0.005176156759262085,
-0.0021587423980236053,
0.06933436542749405,
0.08634401112794876,
0.003604997880756855,
0.11676895618438721,
0.049450404942035675,
-0.034518398344516754,
0.02248608134686947,
0.0919293761253357,
0.03423278033733368,
0.043312180787324905,
0.06597010046243668,
0.09596476703882217,
-0.25812968611717224,
0.050047554075717926,
-0.06412800401449203,
0.05644073337316513,
0.07651184499263763,
0.10071554780006409,
-0.07061871886253357,
0.06388439983129501,
0.034708283841609955,
-0.08098556846380234,
-0.016268547624349594,
-0.017464468255639076,
-0.035754092037677765,
0.10019804537296295,
0.07004997134208679,
0.06074364483356476,
0.0013235285878181458,
0.05938784033060074,
-0.08875641226768494,
0.016206059604883194,
0.04336081072688103,
-0.001176352147012949,
0.02434290386736393,
-0.007140398025512695,
0.016715390607714653,
0.10684292018413544,
0.03764376789331436,
0.07854995131492615,
0.03444916009902954,
-0.09552183747291565,
-0.11062505841255188,
-0.08260571956634521,
0.10368452966213226,
0.05524064972996712,
0.04143770411610603,
-0.005787082947790623,
0.07488195598125458,
-0.032327957451343536,
0.07446937263011932,
0.10845912992954254,
-0.2575923204421997,
-0.010857879184186459,
0.07076956331729889,
0.04538998752832413,
0.04499459266662598,
0.0130971884354949,
0.027808351442217827,
0.004938766360282898,
0.0465102456510067,
0.029191698879003525,
-0.0242355577647686,
0.12224975973367691,
-0.04504504054784775,
-0.15220575034618378,
-0.04104785993695259,
0.12446720153093338,
-0.006327401846647263,
-0.12575149536132812,
-0.10336583852767944,
-0.027030471712350845,
0.11455395817756653,
-0.0027972832322120667,
-0.019379373639822006,
-0.0042917924001812935,
0.009670071303844452,
0.022418009117245674,
-0.09221900999546051,
-0.0855717584490776,
-0.028742488473653793,
-0.0364869199693203,
0.1297311633825302,
0.046174049377441406,
0.051603905856609344,
-0.03368189185857773,
0.08763439953327179,
-0.11202849447727203,
-0.03779057413339615,
-0.05226076394319534,
-0.08526041358709335,
-0.016979752108454704,
0.00811457633972168,
-0.027104049921035767,
-0.08603421598672867,
-0.0596165768802166,
0.11774751543998718,
0.038980741053819656,
0.031230956315994263,
0.0007025059312582016,
0.04123513028025627,
0.07367729395627975,
0.09687957167625427,
-0.0396416112780571,
0.04827088117599487,
0.03360556811094284,
-0.020444871857762337,
0.059667255729436874,
-0.049867816269397736,
-0.10067740827798843,
0.07837307453155518,
-0.0009041214361786842,
0.041411109268665314,
0.025066185742616653,
0.035761889070272446,
-0.011731581762433052,
-0.07338818907737732,
0.1674177199602127,
-0.0771450474858284,
-0.011398800648748875,
-0.01844533532857895,
0.011897493153810501,
0.04894007742404938,
0.030215319246053696,
-0.005753894802182913,
-0.04598434269428253,
-0.0059866346418857574,
-0.054104212671518326,
-0.026867307722568512,
-0.05438338220119476,
-0.11803780496120453,
-0.0015380778349936008,
-0.035172708332538605,
-0.03335452824831009,
-0.14226572215557098,
-0.21437060832977295,
-0.019321750849485397,
0.06444257497787476,
-0.0022535850293934345,
-0.009096520952880383,
0.025641199201345444,
0.017359672114253044,
-0.021650372073054314,
0.009919477626681328,
-0.04606981202960014,
-0.0010370248928666115,
-0.006426021456718445,
-0.032681383192539215,
0.056153759360313416,
-0.04080433025956154,
0.02306591160595417,
-0.068695068359375,
0.02172302082180977,
-0.2137477993965149,
0.08831353485584259,
-0.03261774778366089,
0.002437068149447441,
-0.03725065290927887,
-0.046650178730487823,
0.009418701753020287,
0.04698261618614197,
-0.009898185729980469,
0.11618760228157043,
-0.13699787855148315,
-0.0490427240729332,
0.17500807344913483,
-0.15840837359428406,
-0.0028035156428813934,
0.09911209344863892,
-0.047134846448898315,
0.05593991279602051,
0.13358812034130096,
0.09820040315389633,
0.0813043862581253,
-0.07383749634027481,
0.011937481351196766,
0.06099317967891693,
-0.06954494118690491,
0.055841460824012756,
0.08959050476551056,
-0.025452328845858574,
-0.13768886029720306,
0.03030942752957344,
-0.07417646050453186,
-0.009427707642316818,
-0.024898068979382515,
-0.021323148161172867,
0.009478991851210594,
-0.03794720023870468,
0.026737935841083527,
0.0039955503307282925,
0.01743353344500065,
-0.04179868474602699,
-0.08067793399095535,
0.026176976040005684,
0.07450060546398163,
-0.07073622941970825,
0.04295717552304268,
-0.06997925788164139,
0.06081108748912811,
-0.0743509978055954,
-0.004932267591357231,
-0.16443389654159546,
-0.026907633990049362,
0.04465789347887039,
-0.0474117249250412,
0.049587130546569824,
0.08916784822940826,
0.003133923513814807,
0.1228802353143692,
-0.04106710106134415,
0.0022933711297810078,
-0.004742583259940147,
-0.010262321680784225,
-0.04977959394454956,
-0.11934218555688858,
-0.08395682275295258,
-0.06737218797206879,
0.09624291956424713,
-0.0697980672121048,
0.028855711221694946,
-0.07236351817846298,
-0.02266867272555828,
-0.009869091212749481,
-0.05826890096068382,
-0.0038086669519543648,
0.011554199270904064,
-0.027923548594117165,
-0.04691476374864578,
0.049074940383434296,
0.049064625054597855,
-0.05974443629384041,
0.07440629601478577,
-0.10482125729322433,
-0.058716900646686554,
0.054460376501083374,
0.015295169316232204,
-0.08286508172750473,
0.08766087144613266,
-0.01987745240330696,
-0.012706390582025051,
-0.0564509741961956,
-0.0455799326300621,
0.19519641995429993,
-0.022420734167099,
0.09877471625804901,
-0.09068425744771957,
0.0009824986336752772,
0.02600136399269104,
-0.04557698965072632,
-0.019193265587091446,
0.057039663195610046,
0.04729001969099045,
-0.18704459071159363,
0.013270409777760506,
0.054053500294685364,
0.07859127223491669,
0.11022187024354935,
0.028086233884096146,
-0.021926457062363625,
-0.04582004249095917,
-0.012139505706727505,
0.005408026743680239,
0.055086858570575714,
-0.02000286430120468,
-0.008067738264799118,
0.03079351782798767,
0.059027157723903656,
0.01749025285243988,
-0.07900400459766388,
0.03321850299835205,
0.0657232403755188,
-0.016592539846897125,
-0.03900261968374252,
-0.0234793983399868,
-0.06019292771816254,
0.06231154128909111,
0.05516732484102249,
0.035367343574762344,
0.02771184965968132,
-0.015259254723787308,
-0.13574612140655518,
0.18786515295505524,
-0.11524488031864166,
-0.2598474323749542,
-0.10817709565162659,
-0.05861157178878784,
-0.028695544227957726,
0.04016299173235893,
0.05784738063812256,
-0.03176956996321678,
-0.04389325529336929,
-0.11653029918670654,
0.06391104310750961,
-0.06434188038110733,
-0.03214948624372482,
-0.010826148092746735,
-0.05396157503128052,
-0.019364243373274803,
-0.12776502966880798,
-0.012624455615878105,
-0.03097270242869854,
-0.07632465660572052,
0.007426410913467407,
-0.03637026622891426,
0.029536396265029907,
0.1351393312215805,
0.03742906078696251,
-0.019140833988785744,
-0.0185104850679636,
0.194067120552063,
0.011592769995331764,
0.06021499261260033,
0.1131274402141571,
-0.025737056508660316,
0.053750328719615936,
0.04628075659275055,
0.02584873139858246,
-0.04894188791513443,
0.013722921721637249,
-0.01613355614244938,
-0.12126895040273666,
-0.17283183336257935,
-0.07070520520210266,
-0.0033913152292370796,
0.005551369860768318,
0.017610643059015274,
0.03626558929681778,
0.021949436515569687,
0.04034633934497833,
-0.029463639482855797,
0.026703286916017532,
-0.009836435317993164,
0.08162097632884979,
0.027143973857164383,
-0.07401093095541,
0.09422239661216736,
-0.059650056064128876,
0.015935460105538368,
0.10954352468252182,
-0.05874919518828392,
0.18436111509799957,
0.024879934266209602,
0.055390629917383194,
0.10233116149902344,
0.020905833691358566,
0.053104691207408905,
0.09037184715270996,
-0.04585769772529602,
0.00562539417296648,
-0.06107638031244278,
-0.05225973203778267,
-0.036218494176864624,
0.047540001571178436,
0.03150266036391258,
0.019572235643863678,
-0.1200350895524025,
0.018313147127628326,
-0.00035084295086562634,
0.1399671733379364,
0.0455469936132431,
-0.1207486242055893,
-0.1205315887928009,
0.03594772517681122,
-0.04485892504453659,
-0.06086470186710358,
0.0312943272292614,
0.05525032803416252,
-0.15595689415931702,
0.046884723007678986,
-0.005757087841629982,
0.06463249027729034,
-0.09232846647500992,
0.015881041064858437,
-0.041129905730485916,
0.000866319052875042,
0.004803319461643696,
0.06859360635280609,
-0.1353614330291748,
0.10274231433868408,
0.02074732445180416,
0.04945804178714752,
-0.07880397140979767,
0.015234217047691345,
-0.009571971371769905,
0.10774689167737961,
0.1156701073050499,
0.04204459488391876,
-0.0500124916434288,
-0.023779982700943947,
-0.04734119027853012,
0.0211540088057518,
0.060298360884189606,
-0.07780402898788452,
0.06035427004098892,
0.008218818344175816,
0.0074875010177493095,
-0.023478854447603226,
0.01933487504720688,
-0.1340402364730835,
-0.12222768366336823,
0.06181742250919342,
-0.07899719476699829,
-0.09515821933746338,
-0.056965310126543045,
-0.0639151930809021,
-0.05364774912595749,
0.2124357521533966,
-0.11041587591171265,
-0.09136608242988586,
-0.0981426015496254,
-0.02043059468269348,
0.04589436203241348,
-0.06425973773002625,
0.04433036595582962,
-0.03725915402173996,
0.08952546119689941,
-0.04593442752957344,
-0.111641064286232,
0.03398512676358223,
-0.11392612010240555,
-0.11329640448093414,
-0.04350462555885315,
0.10730937868356705,
0.11421611905097961,
0.03880554810166359,
0.012872242368757725,
0.010986234992742538,
0.0015910528600215912,
-0.11812540143728256,
0.014429334551095963,
0.13070574402809143,
0.00017195753753185272,
0.07418752461671829,
-0.0628877580165863,
0.02874477207660675,
-0.018272671848535538,
0.0008459407836198807,
0.1324918568134308,
0.18461182713508606,
-0.062108371406793594,
0.17473340034484863,
0.1993180215358734,
-0.10393466055393219,
-0.19061745703220367,
-0.054677564650774,
-0.0021221647039055824,
0.0448036827147007,
0.051476746797561646,
-0.18331612646579742,
0.09031142294406891,
0.03600582852959633,
-0.031659044325351715,
0.0172816663980484,
-0.23395401239395142,
-0.11081305146217346,
0.0881350189447403,
0.05818844214081764,
0.1844334602355957,
-0.08163776248693466,
-0.037780649960041046,
-0.01626603864133358,
-0.03811097890138626,
0.04546317458152771,
-0.03465065732598305,
0.09204412251710892,
0.005164820700883865,
-0.02928769588470459,
0.0018708668649196625,
-0.030854418873786926,
0.0961199477314949,
0.04025113582611084,
0.023372624069452286,
-0.07116382569074631,
-0.00596957840025425,
0.10831157863140106,
-0.03881707787513733,
0.09923429787158966,
0.041910141706466675,
0.0737048014998436,
-0.0948941633105278,
-0.0595637671649456,
-0.07676045596599579,
0.043030329048633575,
-0.04198720306158066,
-0.05577300861477852,
-0.06396922469139099,
0.059602029621601105,
0.03679196536540985,
0.009743726812303066,
0.0017300006002187729,
-0.038147974759340286,
0.04654814302921295,
0.0849643126130104,
0.0851670429110527,
-0.033743832260370255,
-0.07569015771150589,
-0.05217968299984932,
-0.04951527342200279,
0.06535626947879791,
-0.08579723536968231,
0.017629943788051605,
0.027650803327560425,
0.00815612729638815,
0.0902184396982193,
0.03494961932301521,
-0.13613636791706085,
0.011304954998195171,
0.03465413674712181,
-0.12231157720088959,
-0.1035986840724945,
-0.01974930614233017,
0.0275193452835083,
-0.038729578256607056,
0.05602264776825905,
0.14569836854934692,
-0.033964723348617554,
-0.03293723613023758,
-0.04763948917388916,
0.03868866339325905,
-0.02149045094847679,
0.050412628799676895,
0.06359595060348511,
0.03015839494764805,
-0.07262276113033295,
0.07395441830158234,
0.03534611687064171,
-0.03101525455713272,
0.04138702154159546,
0.043465618044137955,
-0.09458881616592407,
-0.07820338755846024,
-0.05855629965662956,
0.0873199850320816,
-0.022962914779782295,
-0.04440547525882721,
-0.00006634928286075592,
-0.08526256680488586,
0.06613799929618835,
0.07094265520572662,
0.04800809919834137,
0.03585338592529297,
-0.08723679184913635,
0.015629099681973457,
-0.05208166688680649,
0.03297843039035797,
-0.03154974430799484,
-0.005504077300429344,
-0.05643261969089508,
0.06355033814907074,
0.06404254585504532,
0.09736212342977524,
-0.034470923244953156,
-0.07566545158624649,
-0.08486752957105637,
-0.01281285472214222,
-0.06341953575611115,
-0.03540734574198723,
-0.07707429677248001,
-0.0055489446967840195,
0.00042355572804808617,
-0.0015241988003253937,
0.02242620289325714,
0.035758353769779205,
-0.04260634630918503,
-0.01911608874797821,
-0.034988272935152054,
0.037741418927907944,
-0.06062747910618782,
0.006401685997843742,
0.013909848406910896,
-0.036736104637384415,
0.091617651283741,
0.035700734704732895,
-0.011529037728905678,
0.047691453248262405,
-0.02304047718644142,
0.03408977761864662,
-0.020307272672653198,
0.0003486303612589836,
-0.02437017858028412,
-0.10912612080574036,
-0.0068060182966291904,
0.005376800894737244,
-0.026152143254876137,
0.012441352941095829,
0.05718473345041275,
-0.07241401076316833,
0.08709186315536499,
0.04475729539990425,
-0.02926585078239441,
-0.0706443265080452,
0.04064720869064331,
-0.013000549748539925,
0.03001633659005165,
0.0707816630601883,
-0.03601810336112976,
0.053375162184238434,
-0.09745967388153076,
-0.0285661518573761,
0.0033843081910163164,
-0.006027642637491226,
-0.01446564681828022,
-0.05300093814730644,
-0.0033865980803966522,
0.00801848154515028,
0.1746717393398285,
-0.022827956825494766,
0.034639231860637665,
0.013932112604379654,
0.010447049513459206,
0.051585666835308075,
-0.014118451625108719,
0.07027789950370789,
-0.007481026463210583,
-0.025600571185350418,
-0.011960144154727459,
0.03887491300702095,
0.0038063060492277145,
0.0005107317119836807,
0.14215871691703796,
0.044858358800411224,
0.0888940617442131,
0.0750320553779602,
0.016066785901784897,
0.015671981498599052,
-0.129637211561203,
-0.09432139247655869,
0.009221541695296764,
0.05689331516623497,
-0.018246103078126907,
0.011360032483935356,
0.08923374861478806,
-0.08860045671463013,
0.06911987066268921,
0.0475948192179203,
-0.04823874309659004,
-0.12525013089179993,
-0.19096961617469788,
-0.02321500889956951,
-0.028362726792693138,
-0.011197048239409924,
-0.09149502217769623,
0.014445824548602104,
0.09269292652606964,
0.02526196651160717,
-0.009413712657988071,
0.09463456273078918,
-0.10638798773288727,
-0.03019876778125763,
0.04353467375040054,
-0.0278694499284029,
0.014910201542079449,
0.04892086982727051,
0.023806536570191383,
-0.008253136649727821,
0.04167092591524124,
0.04152623564004898,
0.04343091696500778,
0.02395721711218357,
0.04940793663263321,
-0.024703439325094223,
-0.07502338290214539,
-0.032958704978227615,
-0.002798550296574831,
0.05446118861436844,
0.13826683163642883,
0.02372600883245468,
-0.07052688300609589,
0.007156175561249256,
0.10683609545230865,
-0.030710963532328606,
-0.048965055495500565,
-0.10818131268024445,
0.24392953515052795,
0.021510621532797813,
0.002041606465354562,
-0.005014476366341114,
-0.04513895884156227,
0.00318111851811409,
0.2119130641222,
0.22837120294570923,
0.0025300285778939724,
-0.009912572801113129,
0.008802673779428005,
-0.011064305901527405,
0.036802880465984344,
0.14699286222457886,
0.006189920008182526,
0.25121623277664185,
-0.04804999381303787,
0.043010413646698,
-0.041685715317726135,
-0.0387679785490036,
-0.0996369868516922,
0.06812016665935516,
-0.007871171459555626,
0.008417950011789799,
-0.031036119908094406,
0.07259780168533325,
-0.04092852398753166,
-0.16956667602062225,
0.0003305058926343918,
-0.001001137075945735,
-0.0618617981672287,
0.01085057482123375,
-0.004606891423463821,
0.01914992742240429,
0.08502711355686188,
-0.016872573643922806,
-0.00639130175113678,
0.13103404641151428,
0.0180644728243351,
-0.09647472947835922,
-0.060763753950595856,
0.11856330186128616,
0.019967179745435715,
0.1404649317264557,
0.011633293703198433,
0.08015316724777222,
0.08664042502641678,
0.0207216739654541,
-0.09359397739171982,
0.044468075037002563,
-0.01970648765563965,
-0.02972206473350525,
0.007352760061621666,
0.10662630200386047,
-0.008076192811131477,
0.06281308084726334,
0.026093367487192154,
-0.08929204940795898,
0.06252393126487732,
0.011110499501228333,
-0.034006617963314056,
-0.08027224987745285,
0.08632118999958038,
-0.09155923873186111,
0.15765386819839478,
0.12439277768135071,
-0.014191721566021442,
-0.04674834758043289,
-0.02845616824924946,
0.019734295085072517,
0.000008086208254098892,
0.0539303719997406,
-0.02661675214767456,
-0.1345660388469696,
0.01844842918217182,
-0.08913727104663849,
0.025083011016249657,
-0.24955901503562927,
-0.09113416075706482,
0.02950245887041092,
-0.017616843804717064,
-0.020450904965400696,
0.05077783390879631,
0.04426567256450653,
0.026384837925434113,
-0.03515579551458359,
0.026146642863750458,
-0.03924797102808952,
0.05873681604862213,
-0.11145725101232529,
-0.09267519414424896
] |
null | null | transformers | # MultiBERTs Seed 3 Checkpoint 40k (uncased)
Seed 3 intermediate checkpoint 40k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they
were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, they
were pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-40k')
model = BertModel.from_pretrained("multiberts-seed-3-40k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
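If this checkpoint was exported together with its masked-language-modeling head (an assumption; the snippet above only loads the bare `BertModel`), it can also be probed directly with the fill-mask pipeline:
```python
from transformers import pipeline

# Assumes the repository includes the MLM head weights.
unmasker = pipeline('fill-mask', model='multiberts-seed-3-40k')
unmasker("Hello I'm a [MASK] model.")
```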
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
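Both corpora are also distributed through the `datasets` library. The snippet below is only a convenience sketch: the Wikipedia snapshot shown is not necessarily the one used for MultiBERTs pretraining, and recent `datasets` releases may additionally require `trust_remote_code=True` for these script-based datasets.
```python
from datasets import load_dataset

bookcorpus = load_dataset("bookcorpus", split="train")
wikipedia = load_dataset("wikipedia", "20220301.en", split="train")  # preprocessed English dump
```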
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (see the sketch after the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
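The sketch below illustrates this per-token 80/10/10 decision. It is a simplified toy version rather than the actual MultiBERTs preprocessing code, and `vocab` is assumed to be a plain list of token strings.
```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15):
    """Toy illustration of the masking rules listed above."""
    masked, labels = [], []
    for token in tokens:
        if random.random() < mask_prob:
            labels.append(token)              # this position is predicted by the MLM loss
            r = random.random()
            if r < 0.8:
                masked.append("[MASK]")       # 80%: replace with [MASK]
            elif r < 0.9:
                # 10%: replace with a random token (the real procedure excludes the original token)
                masked.append(random.choice(vocab))
            else:
                masked.append(token)          # 10%: keep the original token
        else:
            labels.append(None)               # unmasked positions are ignored by the loss
            masked.append(token)
    return masked, labels
```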
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-3"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-3-40k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 3 Checkpoint 40k (uncased)
Seed 3 intermediate checkpoint 40k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they
were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, they
were pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitations and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form: '[CLS] Sentence A [SEP] Sentence B [SEP]'
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 3 Checkpoint 40k (uncased)\nSeed 3 intermediate checkpoint 40k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 3 Checkpoint 40k (uncased)\nSeed 3 intermediate checkpoint 40k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 3 Checkpoint 40k (uncased)\nSeed 3 intermediate checkpoint 40k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08240965008735657,
-0.0017995908856391907,
-0.002192359184846282,
0.06736727803945541,
0.08552483469247818,
0.003374270163476467,
0.11862930655479431,
0.049800269305706024,
-0.0322783999145031,
0.02444068342447281,
0.09155632555484772,
0.033758580684661865,
0.04273638501763344,
0.06434652954339981,
0.09663893282413483,
-0.258466899394989,
0.05011928826570511,
-0.06363213807344437,
0.05826567858457565,
0.07631775736808777,
0.10074475407600403,
-0.0713343471288681,
0.06330214440822601,
0.03537017107009888,
-0.08058897405862808,
-0.01707519218325615,
-0.017906522378325462,
-0.03444959223270416,
0.10022061318159103,
0.06988447904586792,
0.060165248811244965,
0.0019672028720378876,
0.05813220515847206,
-0.09078492224216461,
0.016095101833343506,
0.0445576086640358,
-0.0025522506330162287,
0.024548524990677834,
-0.005689120851457119,
0.01681981235742569,
0.10874097049236298,
0.037324633449316025,
0.07809500396251678,
0.03496422991156578,
-0.09493525326251984,
-0.1114756166934967,
-0.08216944336891174,
0.10261961817741394,
0.05432479456067085,
0.04175426438450813,
-0.006453360430896282,
0.0780513733625412,
-0.03237660974264145,
0.07508882880210876,
0.11016577482223511,
-0.25751781463623047,
-0.009385921061038971,
0.07142399251461029,
0.047656428068876266,
0.044306278228759766,
0.012849874794483185,
0.02722693420946598,
0.004901271313428879,
0.04607200622558594,
0.027285270392894745,
-0.024048108607530594,
0.12863436341285706,
-0.04434167593717575,
-0.1529221385717392,
-0.04164774343371391,
0.12122148275375366,
-0.006929004564881325,
-0.12509089708328247,
-0.10528542101383209,
-0.028743267059326172,
0.11621235311031342,
-0.003998933359980583,
-0.018125860020518303,
-0.0036819102242588997,
0.01058181282132864,
0.024057969450950623,
-0.09183301031589508,
-0.08578892797231674,
-0.028436612337827682,
-0.035229284316301346,
0.12702545523643494,
0.04710521548986435,
0.05044211447238922,
-0.03457760438323021,
0.08720570802688599,
-0.11579963564872742,
-0.038048796355724335,
-0.05249376595020294,
-0.08363784849643707,
-0.016190078109502792,
0.00805519986897707,
-0.02884076163172722,
-0.08705949038267136,
-0.05882679298520088,
0.11835405230522156,
0.036189887672662735,
0.031114883720874786,
-0.0011937939561903477,
0.04169701412320137,
0.0747578963637352,
0.09699217975139618,
-0.040145352482795715,
0.048107754439115524,
0.03216158226132393,
-0.02125217393040657,
0.059892717748880386,
-0.05038838088512421,
-0.10119933634996414,
0.07816633582115173,
-0.000122937373816967,
0.04054216295480728,
0.02581120654940605,
0.03566660359501839,
-0.011505541391670704,
-0.07377554476261139,
0.16845765709877014,
-0.07649111747741699,
-0.011232573539018631,
-0.017034919932484627,
0.012076376006007195,
0.04888061434030533,
0.030832938849925995,
-0.004939387086778879,
-0.0471496656537056,
-0.007683511823415756,
-0.054548896849155426,
-0.02655767649412155,
-0.05408470332622528,
-0.11817805469036102,
-0.0008827685378491879,
-0.03823454678058624,
-0.03299955278635025,
-0.14086461067199707,
-0.2152174562215805,
-0.020128866657614708,
0.06282792240381241,
-0.002497180365025997,
-0.008706667460501194,
0.02484249882400036,
0.018523892387747765,
-0.021352969110012054,
0.010443548671901226,
-0.04787454009056091,
-0.0012471424415707588,
-0.006404722109436989,
-0.032169993966817856,
0.05567373335361481,
-0.040064048022031784,
0.023523736745119095,
-0.0687810406088829,
0.021869534626603127,
-0.2121700644493103,
0.08901049196720123,
-0.032191045582294464,
0.0024577975273132324,
-0.03837188705801964,
-0.0463695228099823,
0.012771643698215485,
0.04755832254886627,
-0.011191818863153458,
0.11610320210456848,
-0.13640016317367554,
-0.050536856055259705,
0.1789935827255249,
-0.1582375019788742,
-0.0013737715780735016,
0.09942066669464111,
-0.047028254717588425,
0.053858693689107895,
0.13364741206169128,
0.09847074747085571,
0.08163809776306152,
-0.07446184009313583,
0.012059890665113926,
0.06107718124985695,
-0.0673360526561737,
0.05604000762104988,
0.09071345627307892,
-0.026156587526202202,
-0.13702863454818726,
0.029261749237775803,
-0.07751408219337463,
-0.009444160386919975,
-0.02459978312253952,
-0.020706789568066597,
0.009352166205644608,
-0.03706147521734238,
0.028155740350484848,
0.005483565852046013,
0.017286516726017,
-0.04113379120826721,
-0.08180300891399384,
0.02701657824218273,
0.07569295912981033,
-0.07160130143165588,
0.04267653822898865,
-0.07135064899921417,
0.06106691062450409,
-0.07431844621896744,
-0.00538554135710001,
-0.1657557487487793,
-0.026419563218951225,
0.04505632072687149,
-0.047414153814315796,
0.04950493574142456,
0.09240462630987167,
0.0037519074976444244,
0.12363532930612564,
-0.03986940532922745,
0.0016993515891954303,
-0.005070086568593979,
-0.011112615466117859,
-0.04967448115348816,
-0.12152031064033508,
-0.08320347219705582,
-0.06834755837917328,
0.09604359418153763,
-0.07247595489025116,
0.02883661910891533,
-0.07113620638847351,
-0.021620292216539383,
-0.009860215708613396,
-0.05838869512081146,
-0.002929430454969406,
0.010827898979187012,
-0.02905578538775444,
-0.04670983552932739,
0.04949280619621277,
0.04953527823090553,
-0.0605325847864151,
0.07679316401481628,
-0.10733970999717712,
-0.06074137985706329,
0.054134152829647064,
0.014304060488939285,
-0.08101759105920792,
0.08604270219802856,
-0.020564988255500793,
-0.01308031938970089,
-0.05515996366739273,
-0.04434166103601456,
0.1948561817407608,
-0.022683802992105484,
0.10066676139831543,
-0.09042870998382568,
-0.0005579391727223992,
0.026414118707180023,
-0.04619893059134483,
-0.01977638527750969,
0.057612210512161255,
0.04738984256982803,
-0.1859828531742096,
0.01428205892443657,
0.054133959114551544,
0.07835693657398224,
0.1118827760219574,
0.027536367997527122,
-0.023371288552880287,
-0.04626449570059776,
-0.011200618930161,
0.005512334872037172,
0.05406714230775833,
-0.023814216256141663,
-0.008878023363649845,
0.03111191838979721,
0.05798281729221344,
0.01821383647620678,
-0.07910583913326263,
0.033595021814107895,
0.06593771278858185,
-0.016174335032701492,
-0.040202170610427856,
-0.023668069392442703,
-0.060686107724905014,
0.061810776591300964,
0.05431767925620079,
0.035576969385147095,
0.026877935975790024,
-0.01574273779988289,
-0.13637229800224304,
0.18833082914352417,
-0.11465349048376083,
-0.25894415378570557,
-0.10850130766630173,
-0.062200650572776794,
-0.028612127527594566,
0.04050378501415253,
0.05743192136287689,
-0.032986514270305634,
-0.04383007809519768,
-0.11600728332996368,
0.06402663886547089,
-0.06559649854898453,
-0.03184214606881142,
-0.011665668338537216,
-0.05365069955587387,
-0.018749140202999115,
-0.1273764669895172,
-0.012335553765296936,
-0.030348967760801315,
-0.07616948336362839,
0.007745998911559582,
-0.03644298017024994,
0.028314784169197083,
0.13575498759746552,
0.03883649781346321,
-0.01936635561287403,
-0.018181782215833664,
0.1936439573764801,
0.010954396799206734,
0.058754995465278625,
0.11387071758508682,
-0.026652414351701736,
0.05444696545600891,
0.044704388827085495,
0.02483046054840088,
-0.04876942187547684,
0.012746966443955898,
-0.015640588477253914,
-0.11992281675338745,
-0.17409926652908325,
-0.07134275138378143,
-0.0028567127883434296,
0.006397365592420101,
0.018930932506918907,
0.036202624440193176,
0.02675662562251091,
0.039408642798662186,
-0.030845532193779945,
0.028833020478487015,
-0.011726751923561096,
0.08169233798980713,
0.0267573744058609,
-0.07465843856334686,
0.09333869814872742,
-0.05953925848007202,
0.01635551266372204,
0.11033826321363449,
-0.057721443474292755,
0.18345320224761963,
0.02563551999628544,
0.060665734112262726,
0.10251090675592422,
0.0200314000248909,
0.053204409778118134,
0.08808938413858414,
-0.046727441251277924,
0.004965700209140778,
-0.06147800385951996,
-0.05273435264825821,
-0.035531606525182724,
0.04848325252532959,
0.033501408994197845,
0.017436958849430084,
-0.1198187917470932,
0.01812851056456566,
-0.000569697585888207,
0.14043590426445007,
0.045807357877492905,
-0.1219349056482315,
-0.12078313529491425,
0.035681772977113724,
-0.04500981792807579,
-0.060842663049697876,
0.030979890376329422,
0.05564921349287033,
-0.15466620028018951,
0.047877002507448196,
-0.006460811942815781,
0.06459345668554306,
-0.0929749384522438,
0.0161166712641716,
-0.042147886008024216,
0.0006750086322426796,
0.005556918680667877,
0.06917931139469147,
-0.13659906387329102,
0.10207882523536682,
0.021212046965956688,
0.04752971976995468,
-0.07917344570159912,
0.015946511179208755,
-0.009960522875189781,
0.1102161854505539,
0.11665546894073486,
0.04200860857963562,
-0.05060719698667526,
-0.021444493904709816,
-0.04782162979245186,
0.020131917670369148,
0.060387540608644485,
-0.0790816992521286,
0.061337072402238846,
0.007016537711024284,
0.0074240402318537235,
-0.022972876206040382,
0.016041137278079987,
-0.13147057592868805,
-0.12183147668838501,
0.06311247497797012,
-0.07876718044281006,
-0.09899734705686569,
-0.0576653815805912,
-0.0638565644621849,
-0.053982049226760864,
0.21290498971939087,
-0.11433368921279907,
-0.09029120206832886,
-0.09872771799564362,
-0.0180232971906662,
0.04437366873025894,
-0.064869225025177,
0.04397442191839218,
-0.03782401606440544,
0.09274707734584808,
-0.046296752989292145,
-0.11099360883235931,
0.03488529473543167,
-0.1150929182767868,
-0.11424520611763,
-0.04411286115646362,
0.10865221917629242,
0.11468478292226791,
0.03915178030729294,
0.012279840186238289,
0.010865869000554085,
-0.00007984600961208344,
-0.1167365163564682,
0.015197135508060455,
0.13286077976226807,
0.001864144578576088,
0.0724451094865799,
-0.0605977326631546,
0.029728122055530548,
-0.017373519018292427,
0.0006955526769161224,
0.13387063145637512,
0.18440930545330048,
-0.061892442405223846,
0.17552714049816132,
0.19978825747966766,
-0.10381430387496948,
-0.1909106820821762,
-0.05473746359348297,
-0.0023537511005997658,
0.04454642906785011,
0.051830828189849854,
-0.18515640497207642,
0.08873505890369415,
0.03426961973309517,
-0.03155968338251114,
0.016447730362415314,
-0.23598363995552063,
-0.11015811562538147,
0.08925552666187286,
0.05722292512655258,
0.18357884883880615,
-0.08230746537446976,
-0.03947566822171211,
-0.015991438180208206,
-0.03605964779853821,
0.04547102004289627,
-0.0372110977768898,
0.09123936295509338,
0.006005764007568359,
-0.02892761491239071,
0.002650158479809761,
-0.03161168098449707,
0.0962703600525856,
0.04037958011031151,
0.02229269966483116,
-0.07145187258720398,
-0.005453458055853844,
0.1090778112411499,
-0.039141178131103516,
0.09818577766418457,
0.04026283323764801,
0.07361249625682831,
-0.09625697135925293,
-0.060120806097984314,
-0.07649994641542435,
0.04312850534915924,
-0.041849277913570404,
-0.05549865961074829,
-0.06337760388851166,
0.05942041426897049,
0.037425022572278976,
0.010412693955004215,
0.0029648002237081528,
-0.03860515356063843,
0.04560677334666252,
0.08644065260887146,
0.08568158000707626,
-0.030493073165416718,
-0.07782897353172302,
-0.05335991084575653,
-0.04832110553979874,
0.06590874493122101,
-0.08936446905136108,
0.01771179959177971,
0.026736831292510033,
0.009177396073937416,
0.09067203104496002,
0.0334869883954525,
-0.13731643557548523,
0.011061634868383408,
0.033981774002313614,
-0.12280987948179245,
-0.10829871892929077,
-0.01975390873849392,
0.0301123708486557,
-0.03773190453648567,
0.05694270506501198,
0.14645014703273773,
-0.03344499319791794,
-0.032125119119882584,
-0.047403983771800995,
0.03842322528362274,
-0.020709499716758728,
0.04979842156171799,
0.06400123238563538,
0.03056412562727928,
-0.07312335073947906,
0.07408911734819412,
0.03570524603128433,
-0.03152492269873619,
0.04208455979824066,
0.04310567304491997,
-0.09430377185344696,
-0.0788661390542984,
-0.05910321697592735,
0.08904655277729034,
-0.023435154929757118,
-0.04615859314799309,
-0.001436689868569374,
-0.08241885900497437,
0.06708142161369324,
0.0711168721318245,
0.048072922974824905,
0.036397553980350494,
-0.08764500916004181,
0.015596501529216766,
-0.052765652537345886,
0.032383665442466736,
-0.03354320675134659,
-0.0054498352110385895,
-0.057037778198719025,
0.066031813621521,
0.06428588926792145,
0.09803269803524017,
-0.034100208431482315,
-0.07572271674871445,
-0.08365237712860107,
-0.013002279214560986,
-0.0635450929403305,
-0.03431418538093567,
-0.07760443538427353,
-0.0057048555463552475,
0.00007565878331661224,
-0.0012437067925930023,
0.023035351186990738,
0.03611316531896591,
-0.04232029616832733,
-0.018872834742069244,
-0.035358551889657974,
0.038293011486530304,
-0.0617111399769783,
0.007928003557026386,
0.013811913318932056,
-0.03639835864305496,
0.09330014884471893,
0.03510364145040512,
-0.011659111827611923,
0.047774285078048706,
-0.02282249927520752,
0.03450397029519081,
-0.01954628713428974,
-0.000793864019215107,
-0.02468537911772728,
-0.1083732396364212,
-0.0069027794525027275,
0.004255251958966255,
-0.026954635977745056,
0.011525462381541729,
0.05702851712703705,
-0.07224365323781967,
0.08830293267965317,
0.045786790549755096,
-0.02964070439338684,
-0.07124976813793182,
0.04078330099582672,
-0.015415873378515244,
0.029917286708950996,
0.07031144201755524,
-0.035639844834804535,
0.054158106446266174,
-0.09698861837387085,
-0.02845977619290352,
0.0037456247955560684,
-0.005079437047243118,
-0.014228107407689095,
-0.05312824249267578,
-0.004087960347533226,
0.0070011913776397705,
0.17572946846485138,
-0.023815397173166275,
0.036217913031578064,
0.013784879818558693,
0.0111296521499753,
0.05233817920088768,
-0.014501839876174927,
0.0706067681312561,
-0.007740585133433342,
-0.02491021901369095,
-0.01246559340506792,
0.038853954523801804,
0.004070352762937546,
0.0003242958337068558,
0.14041712880134583,
0.04558344557881355,
0.08999988436698914,
0.07538353651762009,
0.016611851751804352,
0.015401898883283138,
-0.13399721682071686,
-0.09014016389846802,
0.00873552169650793,
0.05783612281084061,
-0.018019123002886772,
0.016142964363098145,
0.09115911275148392,
-0.08908708393573761,
0.06922990083694458,
0.048263806849718094,
-0.04734671115875244,
-0.125136598944664,
-0.19188573956489563,
-0.023144807666540146,
-0.029737314209342003,
-0.01187187060713768,
-0.09100153297185898,
0.014899792149662971,
0.09087998420000076,
0.02566947601735592,
-0.010617945343255997,
0.09462384879589081,
-0.10595527291297913,
-0.031059250235557556,
0.04352991655468941,
-0.027696805074810982,
0.01479480229318142,
0.04789775609970093,
0.024365609511733055,
-0.00631914846599102,
0.04214456304907799,
0.041845232248306274,
0.04282480478286743,
0.026494745165109634,
0.05141240358352661,
-0.025598440319299698,
-0.07589101791381836,
-0.0325145460665226,
-0.0027090790681540966,
0.05336098372936249,
0.13608762621879578,
0.023637333884835243,
-0.06990575790405273,
0.006816116161644459,
0.10509765148162842,
-0.03045077621936798,
-0.04711536690592766,
-0.10860386490821838,
0.24186846613883972,
0.021311601623892784,
0.0022530711721628904,
-0.004649898037314415,
-0.04597442224621773,
0.003754841163754463,
0.2100951075553894,
0.2266516089439392,
0.0025053329300135374,
-0.009563501924276352,
0.010225762613117695,
-0.01087358221411705,
0.036828190088272095,
0.14532902836799622,
0.005548577755689621,
0.25183138251304626,
-0.04748675227165222,
0.03987346589565277,
-0.04112597927451134,
-0.03960608318448067,
-0.09774686396121979,
0.06891737878322601,
-0.006760807242244482,
0.00812675803899765,
-0.030842438340187073,
0.07278167456388474,
-0.039797522127628326,
-0.1704309731721878,
0.00009567197412252426,
-0.0009646746329963207,
-0.06171645596623421,
0.011065983213484287,
-0.0027521364390850067,
0.020043766126036644,
0.08475679904222488,
-0.016344621777534485,
-0.007708195596933365,
0.13236185908317566,
0.017988305538892746,
-0.09829045832157135,
-0.05853264778852463,
0.11786981672048569,
0.021846771240234375,
0.1415315717458725,
0.01094228494912386,
0.07925565540790558,
0.08711230754852295,
0.02035437896847725,
-0.09339414536952972,
0.04335436224937439,
-0.018682194873690605,
-0.02998262643814087,
0.006351466290652752,
0.10744460672140121,
-0.008147782646119595,
0.061599332839250565,
0.027132587507367134,
-0.09018412977457047,
0.06309695541858673,
0.010305501520633698,
-0.03560364618897438,
-0.08083726465702057,
0.08707700669765472,
-0.09139306843280792,
0.1571442037820816,
0.1242990493774414,
-0.014685801230370998,
-0.047653719782829285,
-0.02891763299703598,
0.020423008129000664,
-0.0005748397670686245,
0.055532678961753845,
-0.02628881111741066,
-0.13338539004325867,
0.019503474235534668,
-0.08814920485019684,
0.026964249089360237,
-0.25037091970443726,
-0.09033273160457611,
0.031146150082349777,
-0.01702253520488739,
-0.0213516503572464,
0.050490833818912506,
0.043435513973236084,
0.026084816083312035,
-0.03511829301714897,
0.02592097967863083,
-0.03889498859643936,
0.058255165815353394,
-0.11100521683692932,
-0.09317000210285187
] |
null | null | transformers | # MultiBERTs Seed 3 Checkpoint 500k (uncased)
Seed 3 intermediate checkpoint 500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
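Both pretraining heads are exposed through the `BertForPreTraining` class in `transformers`. The snippet below is a minimal sketch of probing them with this checkpoint; the example sentences are arbitrary, and the full Hub repository id is used as the model identifier:

```python
import torch
from transformers import BertTokenizer, BertForPreTraining

# Minimal sketch: run both pretraining heads of this checkpoint on a toy input.
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-3-500k")
model = BertForPreTraining.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-3-500k")

sentence_a = "The cat sat on the [MASK]."   # arbitrary example sentences
sentence_b = "It purred happily."
inputs = tokenizer(sentence_a, sentence_b, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# MLM head: a distribution over the vocabulary at every input position.
mask_positions = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_ids = outputs.prediction_logits[0, mask_positions].argmax(-1)
print("MLM guess for [MASK]:", tokenizer.decode(predicted_ids.tolist()))

# NSP head: index 0 means "sentence B follows sentence A", index 1 means "random sentence".
print("NSP logits:", outputs.seq_relationship_logits[0].tolist())
```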
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
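For example, a sequence classification head can be attached on top of this checkpoint before fine-tuning. The following is only a sketch: the full Hub repository id is used as the model identifier, `num_labels` and the example inputs are placeholders, and the classification head starts out randomly initialized, so the logits are meaningless until the model has been fine-tuned.

```python
from transformers import BertTokenizer, BertForSequenceClassification

# Sketch: wrap this checkpoint with a (randomly initialized) 2-way classification head.
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-3-500k")
model = BertForSequenceClassification.from_pretrained(
    "MultiBertGunjanPatrick/multiberts-seed-3-500k", num_labels=2
)

batch = tokenizer(["a great movie", "a terrible movie"], padding=True, return_tensors="pt")
logits = model(**batch).logits  # shape (2, num_labels); fine-tune before relying on these
```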
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# Load the tokenizer and encoder weights of this intermediate checkpoint.
# The full repository id on the Hub is MultiBertGunjanPatrick/multiberts-seed-3-500k.
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-3-500k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-3-500k")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
# output.last_hidden_state holds per-token features; output.pooler_output the pooled [CLS] vector.
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
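Concretely, that snippet probes the masked-language-modeling head through the `fill-mask` pipeline. A sketch pointed at this checkpoint (assuming its MLM head loads through the pipeline, and reusing the same style of prompts) looks like this:

```python
from transformers import pipeline

# Bias probe in the style of the bert-base-uncased card, pointed at this checkpoint.
unmasker = pipeline("fill-mask", model="MultiBertGunjanPatrick/multiberts-seed-3-500k")
print(unmasker("The man worked as a [MASK]."))
print(unmasker("The woman worked as a [MASK]."))
```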
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
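As a rough sketch (not the original pretraining code), the 15% / 80-10-10 rule above can be written as the hypothetical helper below; in practice, `DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)` from `transformers` applies the same rule on the fly.

```python
import random

def mask_tokens(token_ids, mask_id, vocab_size, mlm_prob=0.15):
    """Hypothetical sketch of the masking rule described above."""
    inputs, labels = list(token_ids), [-100] * len(token_ids)  # -100: position is not predicted
    for i, tok in enumerate(token_ids):
        if random.random() < mlm_prob:
            labels[i] = tok                     # the model must recover the original token
            r = random.random()
            if r < 0.8:
                inputs[i] = mask_id             # 80%: replace with [MASK]
            elif r < 0.9:
                inputs[i] = random.randrange(vocab_size)  # 10%: a random token (ideally one different from the original)
            # remaining 10%: keep the token unchanged
    return inputs, labels
```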
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
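A hypothetical PyTorch reconstruction of this optimization setup (the actual runs used Cloud TPUs, and `AdamW` stands in here for "Adam with weight decay" as described above) would look like:

```python
import torch
from transformers import BertConfig, BertForPreTraining, get_linear_schedule_with_warmup

# Sketch of the optimization schedule only; not the original TPU training code.
model = BertForPreTraining(BertConfig())  # BERT-base sized configuration

optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01
)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10_000,       # linear warmup over the first 10,000 steps
    num_training_steps=2_000_000,  # linear decay over the two-million-step budget
)

# Inside the training loop (batch size 256, sequence length 512):
#   loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
```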
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-3"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-3-500k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 3 Checkpoint 500k (uncased)
Seed 3 intermediate checkpoint 500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 3 Checkpoint 500k (uncased)\nSeed 3 intermediate checkpoint 500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 3 Checkpoint 500k (uncased)\nSeed 3 intermediate checkpoint 500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 3 Checkpoint 500k (uncased)\nSeed 3 intermediate checkpoint 500k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08155715465545654,
-0.0020627256017178297,
-0.002186305820941925,
0.06952225416898727,
0.0875062495470047,
0.00279141403734684,
0.11851614713668823,
0.049365390092134476,
-0.03324297443032265,
0.02342810109257698,
0.09286966919898987,
0.03242921456694603,
0.04436301440000534,
0.06789606809616089,
0.09626379609107971,
-0.2567260265350342,
0.05057743191719055,
-0.06416549533605576,
0.05760018527507782,
0.07603992521762848,
0.10013598948717117,
-0.07173271477222443,
0.06323163211345673,
0.03411155194044113,
-0.08146581798791885,
-0.01492462120950222,
-0.01796010695397854,
-0.034681059420108795,
0.09807729721069336,
0.06881880015134811,
0.0601787343621254,
0.0013228263705968857,
0.059898801147937775,
-0.09210789203643799,
0.01615792326629162,
0.04488757625222206,
-0.001176492776721716,
0.023697227239608765,
-0.00724497064948082,
0.016277091577649117,
0.10623178631067276,
0.037876106798648834,
0.07841099798679352,
0.03487768769264221,
-0.09500428289175034,
-0.1070539653301239,
-0.08068662136793137,
0.10460112243890762,
0.05651088431477547,
0.04170536622405052,
-0.005851038731634617,
0.07410775870084763,
-0.03244435042142868,
0.07401803135871887,
0.11128821969032288,
-0.25452813506126404,
-0.010591748170554638,
0.07015947252511978,
0.043637871742248535,
0.043679967522621155,
0.012968599796295166,
0.027331814169883728,
0.006241060793399811,
0.045391663908958435,
0.030322939157485962,
-0.02366660162806511,
0.11931074410676956,
-0.04569697380065918,
-0.15285390615463257,
-0.04234541207551956,
0.12216613441705704,
-0.00699426606297493,
-0.12498034536838531,
-0.1027664914727211,
-0.02777501940727234,
0.11191028356552124,
-0.0017757052555680275,
-0.020024754106998444,
-0.0037563773803412914,
0.00992668978869915,
0.02493114396929741,
-0.09334598481655121,
-0.08579408377408981,
-0.030486445873975754,
-0.03677217289805412,
0.1288873255252838,
0.04610750079154968,
0.051786623895168304,
-0.035324059426784515,
0.0882694348692894,
-0.11306542158126831,
-0.03735019266605377,
-0.05371785908937454,
-0.08365573734045029,
-0.01840772107243538,
0.00856770295649767,
-0.025977522134780884,
-0.08710723370313644,
-0.05945621430873871,
0.11884243786334991,
0.041005782783031464,
0.029491685330867767,
0.000026070047169923782,
0.04126182943582535,
0.07468684017658234,
0.09655310213565826,
-0.039769843220710754,
0.05123762786388397,
0.03275098651647568,
-0.019130203872919083,
0.058445513248443604,
-0.05010470375418663,
-0.10051755607128143,
0.07891686260700226,
-0.0008450048044323921,
0.042260490357875824,
0.02624603360891342,
0.03382179141044617,
-0.013849655166268349,
-0.07319969683885574,
0.16158241033554077,
-0.07692131400108337,
-0.011014239862561226,
-0.01750929467380047,
0.011759033426642418,
0.04865366220474243,
0.029715094715356827,
-0.005879794247448444,
-0.047723691910505295,
-0.0033061951398849487,
-0.05364607647061348,
-0.026130396872758865,
-0.056593216955661774,
-0.11850937455892563,
-0.0014919433742761612,
-0.041683949530124664,
-0.03372763842344284,
-0.14209121465682983,
-0.21431788802146912,
-0.01947643794119358,
0.063450887799263,
-0.002416475210338831,
-0.009913464076817036,
0.021348122507333755,
0.01734250970184803,
-0.02221299707889557,
0.009426429867744446,
-0.04427821561694145,
-0.0005122283473610878,
-0.0068545956164598465,
-0.03184768930077553,
0.05723131448030472,
-0.039625030010938644,
0.02323911339044571,
-0.07003051042556763,
0.021297423169016838,
-0.20362593233585358,
0.08892583847045898,
-0.033708296716213226,
0.0017480570822954178,
-0.038052331656217575,
-0.04486623778939247,
0.00785818137228489,
0.046278513967990875,
-0.008408037945628166,
0.11802838742733002,
-0.13680949807167053,
-0.04970245435833931,
0.17695178091526031,
-0.15870121121406555,
-0.0027094781398773193,
0.09841035306453705,
-0.04629439860582352,
0.055033132433891296,
0.13285289704799652,
0.09583063423633575,
0.08342747390270233,
-0.07325270026922226,
0.01052679494023323,
0.06172208487987518,
-0.06781372427940369,
0.052699290215969086,
0.08935129642486572,
-0.025965850800275803,
-0.13756947219371796,
0.030409611761569977,
-0.07660657167434692,
-0.009510982781648636,
-0.026052579283714294,
-0.021668599918484688,
0.008869566023349762,
-0.03887716680765152,
0.027368638664484024,
0.0042143878526985645,
0.017879145219922066,
-0.04037751257419586,
-0.07950911670923233,
0.025625597685575485,
0.07351791858673096,
-0.06984750181436539,
0.042476586997509,
-0.0697871744632721,
0.06133682653307915,
-0.07542362064123154,
-0.004564881324768066,
-0.1639832854270935,
-0.025485053658485413,
0.045837417244911194,
-0.048358991742134094,
0.04878582805395126,
0.08901231735944748,
0.0029921524692326784,
0.121712826192379,
-0.03924068063497543,
0.002310354495421052,
-0.007208721712231636,
-0.010721247643232346,
-0.04885950684547424,
-0.11830970644950867,
-0.08344817161560059,
-0.06845888495445251,
0.09285450726747513,
-0.06663926690816879,
0.029348425567150116,
-0.07061902433633804,
-0.02285921201109886,
-0.00898609310388565,
-0.058993756771087646,
-0.0030419910326600075,
0.012021489441394806,
-0.028199603781104088,
-0.04674820601940155,
0.04883776605129242,
0.04838349670171738,
-0.05973297730088234,
0.0740428939461708,
-0.10357054322957993,
-0.05925017595291138,
0.05372218042612076,
0.017323972657322884,
-0.08094571530818939,
0.0884578675031662,
-0.020292948931455612,
-0.011629731394350529,
-0.059788383543491364,
-0.04382876306772232,
0.194252148270607,
-0.021466419100761414,
0.09833845496177673,
-0.09068458527326584,
0.0009759150561876595,
0.027002783492207527,
-0.045345813035964966,
-0.020423036068677902,
0.0580829456448555,
0.05556916818022728,
-0.18380072712898254,
0.014757916331291199,
0.052370280027389526,
0.07812954485416412,
0.11181265860795975,
0.02851855754852295,
-0.023176277056336403,
-0.045683711767196655,
-0.011064408347010612,
0.006057793740183115,
0.05461900681257248,
-0.022235196083784103,
-0.007955808192491531,
0.02988949604332447,
0.05833491310477257,
0.01770883984863758,
-0.08055506646633148,
0.03348711505532265,
0.06669147312641144,
-0.017312120646238327,
-0.03953390568494797,
-0.024268100038170815,
-0.06047698110342026,
0.06205632537603378,
0.05370170995593071,
0.03648954629898071,
0.026601573452353477,
-0.015075508505105972,
-0.13644027709960938,
0.18835091590881348,
-0.11358066648244858,
-0.2597178816795349,
-0.10751210898160934,
-0.05533769726753235,
-0.028237467631697655,
0.039882972836494446,
0.05785263329744339,
-0.03163759410381317,
-0.04391852766275406,
-0.11756513267755508,
0.0643278956413269,
-0.06440138816833496,
-0.03055759333074093,
-0.009780237451195717,
-0.053005702793598175,
-0.02091384306550026,
-0.12794256210327148,
-0.012097816914319992,
-0.031200453639030457,
-0.07789801806211472,
0.006652616895735264,
-0.0349586047232151,
0.030124783515930176,
0.13425081968307495,
0.037081629037857056,
-0.018264971673488617,
-0.01910579204559326,
0.19311019778251648,
0.012944802641868591,
0.057588256895542145,
0.11073080450296402,
-0.02434210479259491,
0.05329223722219467,
0.046603091061115265,
0.024643827229738235,
-0.049968816339969635,
0.01471853069961071,
-0.014426249079406261,
-0.12140657007694244,
-0.17235559225082397,
-0.06996814906597137,
-0.004474373534321785,
0.006494783330708742,
0.019755518063902855,
0.03634363412857056,
0.01900533027946949,
0.04065407067537308,
-0.030097881332039833,
0.025920279324054718,
-0.01328975334763527,
0.08031795918941498,
0.02326541766524315,
-0.07376115769147873,
0.09380233287811279,
-0.05923278629779816,
0.016148651018738747,
0.10932806879281998,
-0.0596618615090847,
0.1868865191936493,
0.02501145377755165,
0.05561114847660065,
0.10239613056182861,
0.020895622670650482,
0.05268561840057373,
0.09176120162010193,
-0.04639768227934837,
0.005973604507744312,
-0.06009615957736969,
-0.0514412596821785,
-0.036690156906843185,
0.047159627079963684,
0.033636610954999924,
0.021119263023138046,
-0.12166441231966019,
0.01955220475792885,
-0.0008555413223803043,
0.1430022418498993,
0.046479757875204086,
-0.11888051778078079,
-0.11797887086868286,
0.03582197055220604,
-0.04431046172976494,
-0.05992434173822403,
0.03094448894262314,
0.05719684064388275,
-0.15544402599334717,
0.046103574335575104,
-0.006029007025063038,
0.06516190618276596,
-0.09232889115810394,
0.0153679009526968,
-0.04250642657279968,
0.0007922379299998283,
0.0047879028134047985,
0.0684034675359726,
-0.1408606618642807,
0.10404843837022781,
0.02016003057360649,
0.05062920227646828,
-0.07962723821401596,
0.015380729921162128,
-0.011901884339749813,
0.10513618588447571,
0.1164730042219162,
0.04287945479154587,
-0.0516410768032074,
-0.023884166032075882,
-0.04851359874010086,
0.021076709032058716,
0.06121613830327988,
-0.08016782999038696,
0.06028834730386734,
0.007874934934079647,
0.007813750766217709,
-0.023537585511803627,
0.018233098089694977,
-0.13452088832855225,
-0.12337785959243774,
0.06251576542854309,
-0.07796253263950348,
-0.09715301543474197,
-0.056882984936237335,
-0.06381063163280487,
-0.049124062061309814,
0.2086694985628128,
-0.11893507838249207,
-0.09003956615924835,
-0.09792889654636383,
-0.016908973455429077,
0.04376934468746185,
-0.06470025330781937,
0.04502509534358978,
-0.037780553102493286,
0.08881513774394989,
-0.04719763994216919,
-0.1115628033876419,
0.03379650041460991,
-0.11456111073493958,
-0.11401630192995071,
-0.04306190088391304,
0.10638844966888428,
0.11384771764278412,
0.0388648584485054,
0.01190279982984066,
0.010992499068379402,
0.0032941680401563644,
-0.1187063455581665,
0.015971912071108818,
0.13424545526504517,
-0.005375849083065987,
0.07544080913066864,
-0.06232649087905884,
0.02562902867794037,
-0.018076173961162567,
0.00021797418594360352,
0.13331615924835205,
0.1863897442817688,
-0.06209171935915947,
0.175197571516037,
0.19902601838111877,
-0.105452761054039,
-0.19197486340999603,
-0.053910791873931885,
-0.0019287699833512306,
0.04652313515543938,
0.05068827047944069,
-0.18397057056427002,
0.0887654721736908,
0.03404171019792557,
-0.03186947852373123,
0.017995677888393402,
-0.2402263730764389,
-0.11116500198841095,
0.08754399418830872,
0.05901404842734337,
0.18768268823623657,
-0.08036625385284424,
-0.03774872422218323,
-0.01508590579032898,
-0.036121025681495667,
0.04982518404722214,
-0.031297117471694946,
0.09256729483604431,
0.00436076894402504,
-0.032497234642505646,
0.002552866004407406,
-0.031242892146110535,
0.09435424208641052,
0.038888707756996155,
0.023581217974424362,
-0.07047182321548462,
-0.0056476350873708725,
0.10966052114963531,
-0.037985723465681076,
0.09979208558797836,
0.03958595544099808,
0.07360018789768219,
-0.09435753524303436,
-0.060564182698726654,
-0.07558799535036087,
0.04390019178390503,
-0.04190399497747421,
-0.05599955469369888,
-0.06281837075948715,
0.05862375348806381,
0.03809274733066559,
0.010029217228293419,
-0.001811344176530838,
-0.03733757138252258,
0.04520472139120102,
0.08755533397197723,
0.08597731590270996,
-0.0342923179268837,
-0.07453488558530807,
-0.05267412215471268,
-0.04827670753002167,
0.06684780865907669,
-0.08504980802536011,
0.017346754670143127,
0.028179025277495384,
0.010210559703409672,
0.08966406434774399,
0.03494422510266304,
-0.13657644391059875,
0.011721097864210606,
0.0346994586288929,
-0.12245972454547882,
-0.11018912494182587,
-0.019664745777845383,
0.02900680899620056,
-0.04032057896256447,
0.05578744783997536,
0.14458900690078735,
-0.03610268235206604,
-0.03273686394095421,
-0.048258062452077866,
0.03925130143761635,
-0.02102065645158291,
0.05247406288981438,
0.06279398500919342,
0.030184045433998108,
-0.07183733582496643,
0.07578859478235245,
0.03493700176477432,
-0.030451104044914246,
0.04256443306803703,
0.04286377876996994,
-0.09278547763824463,
-0.07793578505516052,
-0.055975865572690964,
0.08704636991024017,
-0.020630380138754845,
-0.043434735387563705,
-0.0010791681706905365,
-0.08393000066280365,
0.06741994619369507,
0.07147075235843658,
0.0474572479724884,
0.03672003000974655,
-0.08777298033237457,
0.01566833257675171,
-0.0526452362537384,
0.03327697142958641,
-0.030775556340813637,
-0.00541362538933754,
-0.05556201934814453,
0.06372630596160889,
0.06412729620933533,
0.09647905826568604,
-0.03429269790649414,
-0.07518485188484192,
-0.08509798347949982,
-0.011956809088587761,
-0.05759969353675842,
-0.03554944694042206,
-0.0749620795249939,
-0.005818091332912445,
0.0013030357658863068,
-0.002004062756896019,
0.021646885201334953,
0.03663525730371475,
-0.04237602651119232,
-0.019348710775375366,
-0.03641536831855774,
0.03717677295207977,
-0.06221585348248482,
0.006941888481378555,
0.015411775559186935,
-0.03617754206061363,
0.09128541499376297,
0.03381473571062088,
-0.012701595202088356,
0.04785561561584473,
-0.029961343854665756,
0.03413202613592148,
-0.020029084756970406,
0.0004672100767493248,
-0.024927545338869095,
-0.10830764472484589,
-0.005025992635637522,
0.006589939817786217,
-0.02447984740138054,
0.01212227065116167,
0.05646337568759918,
-0.07326819002628326,
0.08520416170358658,
0.04508834332227707,
-0.028745591640472412,
-0.06977199763059616,
0.04127863049507141,
-0.01381763443350792,
0.02973307855427265,
0.07271261513233185,
-0.035760268568992615,
0.053756460547447205,
-0.09625555574893951,
-0.027589548379182816,
0.0034465717617422342,
-0.0051868148148059845,
-0.017156293615698814,
-0.052795618772506714,
-0.0033258143812417984,
0.008228544145822525,
0.17332902550697327,
-0.02159997448325157,
0.034793660044670105,
0.014208409003913403,
0.010135235264897346,
0.05208653584122658,
-0.01325431652367115,
0.07083487510681152,
-0.007015647366642952,
-0.026070456951856613,
-0.01316202525049448,
0.03788147494196892,
0.00478687509894371,
0.0019498392939567566,
0.1450566053390503,
0.044992778450250626,
0.08623179793357849,
0.07572301477193832,
0.01655312068760395,
0.016325516626238823,
-0.12884271144866943,
-0.09176225960254669,
0.007499853149056435,
0.05839705094695091,
-0.018832270056009293,
0.012980353087186813,
0.08899547159671783,
-0.08722640573978424,
0.06929124891757965,
0.05041320621967316,
-0.04730852320790291,
-0.12713474035263062,
-0.1942295879125595,
-0.023983512073755264,
-0.0290482509881258,
-0.010134134441614151,
-0.09090591967105865,
0.015075716190040112,
0.09164423495531082,
0.025240277871489525,
-0.009595733135938644,
0.09362854063510895,
-0.10221384465694427,
-0.02916109375655651,
0.046018876135349274,
-0.029111942276358604,
0.014088316820561886,
0.04610494524240494,
0.023126542568206787,
-0.00771070271730423,
0.042005255818367004,
0.04267159849405289,
0.04378681629896164,
0.02183261327445507,
0.04924210160970688,
-0.024995427578687668,
-0.07474911957979202,
-0.0324169397354126,
-0.005305911879986525,
0.05231035128235817,
0.13846124708652496,
0.02450372464954853,
-0.070356085896492,
0.0074171447195112705,
0.10735316574573517,
-0.03110966458916664,
-0.049340009689331055,
-0.10841066390275955,
0.23950055241584778,
0.023591171950101852,
0.00424317317083478,
-0.004548974335193634,
-0.045384619385004044,
0.0016763042658567429,
0.2089594155550003,
0.22598031163215637,
0.0038061169907450676,
-0.009436030872166157,
0.007665164303034544,
-0.010582748800516129,
0.03723025321960449,
0.14722955226898193,
0.005148185417056084,
0.25091129541397095,
-0.048336029052734375,
0.04428228363394737,
-0.04004884138703346,
-0.039576925337314606,
-0.09743624925613403,
0.06725859642028809,
-0.007107959594577551,
0.009185890667140484,
-0.03256598860025406,
0.07133445143699646,
-0.0418158620595932,
-0.1639719009399414,
-0.00024097692221403122,
-0.0014645280316472054,
-0.06224795803427696,
0.010266859084367752,
-0.004182095173746347,
0.019942807033658028,
0.0852261632680893,
-0.016926854848861694,
-0.005746400449424982,
0.12987355887889862,
0.018132450059056282,
-0.09882349520921707,
-0.06318438053131104,
0.11642185598611832,
0.019296972081065178,
0.14229816198349,
0.012075969949364662,
0.07757483422756195,
0.08610276877880096,
0.02013951539993286,
-0.09448940306901932,
0.04346797987818718,
-0.020330900326371193,
-0.027616819366812706,
0.007532328367233276,
0.1072382926940918,
-0.008714739233255386,
0.06421211361885071,
0.025152362883090973,
-0.08860204368829727,
0.06316998600959778,
0.00926099345088005,
-0.033043477684259415,
-0.08163905888795853,
0.08763986080884933,
-0.09105876088142395,
0.15636250376701355,
0.12382949888706207,
-0.014735396020114422,
-0.047043874859809875,
-0.028780028223991394,
0.02061801217496395,
-0.002839657012373209,
0.05836561694741249,
-0.025766413658857346,
-0.13387545943260193,
0.01810798980295658,
-0.08707436919212341,
0.025887025520205498,
-0.24981924891471863,
-0.09022887796163559,
0.028099531307816505,
-0.017380470409989357,
-0.020784690976142883,
0.05107351392507553,
0.04648303985595703,
0.02747839130461216,
-0.03605300560593605,
0.016928404569625854,
-0.03853980451822281,
0.05902756005525589,
-0.11066238582134247,
-0.09252385050058365
] |
null | null | transformers | # MultiBERTs Seed 3 Checkpoint 600k (uncased)
Seed 3 intermediate checkpoint at 600k steps of the MultiBERTs (pretrained BERT) model, pretrained on English text using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
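As an illustration of the raw masked language modeling use mentioned above, here is a minimal sketch (not part of the original card) that loads this checkpoint with a masked-LM head and inspects the top candidates for a `[MASK]` position. It reuses the checkpoint name used elsewhere in this card; the `BertForMaskedLM` head and the example sentence are illustrative assumptions.
```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

# Sketch: score candidate tokens for a [MASK] position with this checkpoint.
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-600k')
model = BertForMaskedLM.from_pretrained('multiberts-seed-3-600k')
model.eval()

text = "The capital of France is [MASK]."
inputs = tokenizer(text, return_tensors='pt')
with torch.no_grad():
    logits = model(**inputs).logits

# Find the [MASK] position and print the five most likely replacements.
mask_positions = (inputs['input_ids'][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
top_ids = logits[0, mask_positions[0]].topk(5).indices.tolist()
print(tokenizer.convert_ids_to_tokens(top_ids))
```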
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-600k')
model = BertModel.from_pretrained("multiberts-seed-3-600k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a minimal sketch of this procedure is given after the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
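An illustrative implementation of this 15% / 80-10-10 scheme might look as follows. This is an assumption-laden sketch, not the original data pipeline; for simplicity it samples the random replacement from a small toy vocabulary and does not force it to differ from the original token.
```python
import random

def mask_tokens(tokens, vocab, mask_token="[MASK]", mlm_prob=0.15):
    """Sketch of the masking rules described above (not the original pipeline)."""
    inputs, labels = [], []
    for token in tokens:
        if random.random() < mlm_prob:       # 15% of tokens become prediction targets
            labels.append(token)
            roll = random.random()
            if roll < 0.8:
                inputs.append(mask_token)            # 80%: replace with [MASK]
            elif roll < 0.9:
                inputs.append(random.choice(vocab))  # 10%: replace with a random token
            else:
                inputs.append(token)                 # 10%: keep the token unchanged
        else:
            inputs.append(token)
            labels.append(None)               # not a prediction target
    return inputs, labels

tokens = "the quick brown fox jumps over the lazy dog".split()
print(mask_tokens(tokens, vocab=tokens))
```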
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
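Expressed in code, the optimizer and schedule described above could be set up roughly as follows. This is a sketch only: the original training code is not part of this card, and `AdamW` plus `get_linear_schedule_with_warmup` are used as stand-ins for Adam with weight decay and linear warmup/decay.
```python
import torch
from transformers import BertModel, get_linear_schedule_with_warmup

# Sketch of the reported hyperparameters: Adam, lr 1e-4, betas (0.9, 0.999),
# weight decay 0.01, 10k warmup steps, linear decay over 2M total steps.
model = BertModel.from_pretrained('multiberts-seed-3-600k')
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1e-4,
    betas=(0.9, 0.999),
    weight_decay=0.01,
)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10_000,
    num_training_steps=2_000_000,
)
```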
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-3"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-3-600k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 3 Checkpoint 600k (uncased)
Seed 3 intermediate checkpoint at 600k steps of the MultiBERTs (pretrained BERT) model, pretrained on English text using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitations and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 3 Checkpoint 600k (uncased)\nSeed 3 intermediate checkpoint 600k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 3 Checkpoint 600k (uncased)\nSeed 3 intermediate checkpoint 600k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 3 Checkpoint 600k (uncased)\nSeed 3 intermediate checkpoint 600k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08231707662343979,
-0.00502178817987442,
-0.002123166574165225,
0.07035758346319199,
0.08858250826597214,
0.002470189705491066,
0.11550235003232956,
0.05005766823887825,
-0.0293720830231905,
0.02281440794467926,
0.09229807555675507,
0.033319469541311264,
0.042940378189086914,
0.062260180711746216,
0.0959254801273346,
-0.2564922273159027,
0.05000337213277817,
-0.06491899490356445,
0.05651135742664337,
0.07647921144962311,
0.09919126331806183,
-0.07056410610675812,
0.0644439309835434,
0.03307768702507019,
-0.08238206058740616,
-0.014095830731093884,
-0.01723046600818634,
-0.034738656133413315,
0.09915624558925629,
0.06804919242858887,
0.06112430989742279,
0.002857618033885956,
0.059901803731918335,
-0.08756282180547714,
0.016282910481095314,
0.04382907599210739,
-0.0007178299129009247,
0.02292240783572197,
-0.007540317252278328,
0.016647163778543472,
0.10612908750772476,
0.03980710729956627,
0.07825275510549545,
0.03500843048095703,
-0.09540780633687973,
-0.10362964123487473,
-0.08192279934883118,
0.09933526813983917,
0.056460488587617874,
0.044054970145225525,
-0.00603245384991169,
0.0741608589887619,
-0.034506577998399734,
0.07333075255155563,
0.1075943112373352,
-0.2504115700721741,
-0.01018128078430891,
0.0720614343881607,
0.042729541659355164,
0.04349805414676666,
0.012818608433008194,
0.02607594057917595,
0.005070481449365616,
0.04548129066824913,
0.026688843965530396,
-0.023176521062850952,
0.11918409168720245,
-0.04555535316467285,
-0.1519029140472412,
-0.041318368166685104,
0.12043242156505585,
-0.0065578315407037735,
-0.12545500695705414,
-0.09954364597797394,
-0.028872117400169373,
0.11668661236763,
-0.0022289399057626724,
-0.018353529274463654,
-0.00374083174392581,
0.010317712090909481,
0.022273005917668343,
-0.09281367063522339,
-0.08561936020851135,
-0.030821695923805237,
-0.03906198963522911,
0.1263921558856964,
0.04563838988542557,
0.05180469527840614,
-0.03485535457730293,
0.08825337886810303,
-0.11379505693912506,
-0.03770914301276207,
-0.054091356694698334,
-0.08355802297592163,
-0.01827274076640606,
0.008548716083168983,
-0.026192326098680496,
-0.08496253192424774,
-0.060453254729509354,
0.11427317559719086,
0.03838401660323143,
0.03184276819229126,
-0.000027973204851150513,
0.04099839925765991,
0.0716569647192955,
0.09417861700057983,
-0.038415905088186264,
0.051882170140743256,
0.030936483293771744,
-0.019942276179790497,
0.059342894703149796,
-0.050010308623313904,
-0.10046142339706421,
0.07943232357501984,
-0.002795538865029812,
0.039858728647232056,
0.02524055540561676,
0.03486175090074539,
-0.013312217779457569,
-0.07267828285694122,
0.16661635041236877,
-0.07757824659347534,
-0.011336913332343102,
-0.019176656380295753,
0.011875012889504433,
0.048259198665618896,
0.03032783232629299,
-0.005831754766404629,
-0.046336546540260315,
-0.003443680703639984,
-0.05488882213830948,
-0.027149973437190056,
-0.05719539150595665,
-0.11658084392547607,
-0.0015030866488814354,
-0.03751780092716217,
-0.03387819975614548,
-0.13985413312911987,
-0.21729746460914612,
-0.018584679812192917,
0.06495583057403564,
-0.0037255259230732918,
-0.009172205813229084,
0.02312537096440792,
0.015258042141795158,
-0.021177474409341812,
0.010592405684292316,
-0.04239103943109512,
-0.0006435345858335495,
-0.006486540660262108,
-0.0302435290068388,
0.05693747103214264,
-0.0410289540886879,
0.02400532364845276,
-0.0695609301328659,
0.022247614338994026,
-0.21089422702789307,
0.08937011659145355,
-0.034541208297014236,
0.0014930926263332367,
-0.037620630115270615,
-0.04559171944856644,
0.007737800478935242,
0.04656113311648369,
-0.008825707249343395,
0.1179160624742508,
-0.1364000141620636,
-0.04858695715665817,
0.17594775557518005,
-0.15892794728279114,
-0.001670941710472107,
0.09693855047225952,
-0.047005750238895416,
0.055100928992033005,
0.131648451089859,
0.09838638454675674,
0.08152953535318375,
-0.07394090294837952,
0.009259119629859924,
0.06119205057621002,
-0.0676840990781784,
0.05340446159243584,
0.0886448323726654,
-0.026101108640432358,
-0.13755790889263153,
0.029909737408161163,
-0.07684347033500671,
-0.010152975097298622,
-0.0250172708183527,
-0.020987272262573242,
0.008535727858543396,
-0.03850754350423813,
0.026564154773950577,
0.006139354780316353,
0.016814056783914566,
-0.04082451015710831,
-0.0794813260436058,
0.02746064029633999,
0.07452462613582611,
-0.06978338211774826,
0.04245101660490036,
-0.07012198120355606,
0.059759240597486496,
-0.07525324076414108,
-0.005673757754266262,
-0.16632983088493347,
-0.027781501412391663,
0.04513613507151604,
-0.047924239188432693,
0.049045637249946594,
0.08694092184305191,
0.0029889296274632215,
0.12193862348794937,
-0.04073941707611084,
0.002672806615009904,
-0.006644947454333305,
-0.010887376964092255,
-0.048085227608680725,
-0.11863098293542862,
-0.0819815844297409,
-0.06871894001960754,
0.0964822769165039,
-0.06788182258605957,
0.028682958334684372,
-0.06852322071790695,
-0.02278929390013218,
-0.009724555537104607,
-0.05889979749917984,
-0.00413818284869194,
0.010410837829113007,
-0.028910012915730476,
-0.047461528331041336,
0.04871315509080887,
0.048685234040021896,
-0.05844094976782799,
0.07478702813386917,
-0.10312316566705704,
-0.0633278638124466,
0.05477547645568848,
0.018681878224015236,
-0.0823378711938858,
0.08978569507598877,
-0.020205168053507805,
-0.012421581894159317,
-0.05934291332960129,
-0.045201681554317474,
0.19445809721946716,
-0.02208470180630684,
0.09782280027866364,
-0.09137366712093353,
0.0013938907068222761,
0.02735190838575363,
-0.04435558244585991,
-0.020065374672412872,
0.05830790102481842,
0.055957358330488205,
-0.18890579044818878,
0.015285532921552658,
0.05507495254278183,
0.07776365429162979,
0.11199820041656494,
0.02803800255060196,
-0.02265057899057865,
-0.044226132333278656,
-0.010735881514847279,
0.0059075672179460526,
0.05540773272514343,
-0.021491173654794693,
-0.00658790348097682,
0.029255365952849388,
0.05952570587396622,
0.01732197403907776,
-0.08029123395681381,
0.033339571207761765,
0.06529086828231812,
-0.016773317009210587,
-0.04048273712396622,
-0.023739948868751526,
-0.06094953045248985,
0.062499839812517166,
0.054197896271944046,
0.035147059708833694,
0.027479620650410652,
-0.01516878791153431,
-0.13470803201198578,
0.18988101184368134,
-0.11373277008533478,
-0.2567136883735657,
-0.10701069235801697,
-0.05982363969087601,
-0.02841929905116558,
0.040089815855026245,
0.056764960289001465,
-0.034718625247478485,
-0.04354696720838547,
-0.11723403632640839,
0.06236851215362549,
-0.06391466408967972,
-0.03191472217440605,
-0.010480908676981926,
-0.05369891971349716,
-0.020240746438503265,
-0.12740206718444824,
-0.013063691556453705,
-0.032379377633333206,
-0.07905404269695282,
0.007373408414423466,
-0.03696226328611374,
0.03014683723449707,
0.13478541374206543,
0.0367945097386837,
-0.019461510702967644,
-0.017885223031044006,
0.19211119413375854,
0.011178070679306984,
0.0593121312558651,
0.11231386661529541,
-0.024098007008433342,
0.05495699495077133,
0.047387078404426575,
0.02496577799320221,
-0.04992017149925232,
0.015687832608819008,
-0.01384937483817339,
-0.12221453338861465,
-0.17369067668914795,
-0.07081650197505951,
-0.004811642225831747,
0.005023754667490721,
0.017442623153328896,
0.03613255172967911,
0.017325857654213905,
0.040461815893650055,
-0.029370365664362907,
0.02403809130191803,
-0.014411650598049164,
0.08065292239189148,
0.025086011737585068,
-0.07346522063016891,
0.09400366246700287,
-0.059613000601530075,
0.016003740951418877,
0.10968505591154099,
-0.05791837349534035,
0.18986785411834717,
0.0245125200599432,
0.05484158545732498,
0.10209354758262634,
0.020561732351779938,
0.0524161122739315,
0.09068850427865982,
-0.04639934003353119,
0.004810585640370846,
-0.06124555319547653,
-0.051375627517700195,
-0.034054726362228394,
0.0460176020860672,
0.029918719083070755,
0.02004358544945717,
-0.1198408231139183,
0.0219823457300663,
-0.0015534446574747562,
0.1397559642791748,
0.046224258840084076,
-0.12058548629283905,
-0.1189633309841156,
0.03534476086497307,
-0.04485739767551422,
-0.05973826348781586,
0.03048064187169075,
0.056572817265987396,
-0.15427762269973755,
0.0482153445482254,
-0.006038718856871128,
0.06450165063142776,
-0.08968646079301834,
0.015786275267601013,
-0.04135257378220558,
0.00022822804749011993,
0.004775078967213631,
0.06874730437994003,
-0.13663269579410553,
0.10670458525419235,
0.0202571339905262,
0.0495167076587677,
-0.07919983565807343,
0.014963757246732712,
-0.009587513282895088,
0.10490475594997406,
0.11771763116121292,
0.042843833565711975,
-0.05133123695850372,
-0.018828105181455612,
-0.04755554348230362,
0.02078832872211933,
0.061120837926864624,
-0.07824818789958954,
0.060703955590724945,
0.008164498955011368,
0.008033234626054764,
-0.023016344755887985,
0.019078440964221954,
-0.13327297568321228,
-0.12405233085155487,
0.06130585819482803,
-0.07968532294034958,
-0.10181424021720886,
-0.055620692670345306,
-0.06356405466794968,
-0.04694356024265289,
0.21071121096611023,
-0.1148592084646225,
-0.0909593254327774,
-0.09853176772594452,
-0.01785951480269432,
0.044415056705474854,
-0.06494547426700592,
0.04413401335477829,
-0.037599384784698486,
0.08934533596038818,
-0.04558804631233215,
-0.11151681840419769,
0.034580767154693604,
-0.11452043801546097,
-0.11347702145576477,
-0.044054966419935226,
0.10506316274404526,
0.11361800134181976,
0.039295874536037445,
0.011365462094545364,
0.011654154397547245,
0.00204373337328434,
-0.11841322481632233,
0.016180768609046936,
0.1302793323993683,
-0.0008535366505384445,
0.07493789494037628,
-0.061621807515621185,
0.028423219919204712,
-0.017809320241212845,
-0.0004979260265827179,
0.1315614879131317,
0.1854652464389801,
-0.06095186620950699,
0.1749371737241745,
0.19979113340377808,
-0.10457855463027954,
-0.1940639615058899,
-0.05162790045142174,
-0.000645236112177372,
0.04484229534864426,
0.05171995609998703,
-0.18462789058685303,
0.09118393063545227,
0.03522132337093353,
-0.032554566860198975,
0.015897754579782486,
-0.23348736763000488,
-0.11047820001840591,
0.08538910001516342,
0.0597117580473423,
0.18787634372711182,
-0.08003231137990952,
-0.03753051161766052,
-0.015324853360652924,
-0.03789252042770386,
0.048312608152627945,
-0.03346472978591919,
0.09201883524656296,
0.004071751609444618,
-0.02670174464583397,
0.002121630124747753,
-0.03152501583099365,
0.0944928228855133,
0.03957569971680641,
0.023320775479078293,
-0.07206037640571594,
-0.007209965959191322,
0.11036468297243118,
-0.038042813539505005,
0.09870322793722153,
0.039011187851428986,
0.07487614452838898,
-0.09140705317258835,
-0.059791307896375656,
-0.07728241384029388,
0.04243750125169754,
-0.04238578677177429,
-0.056010302156209946,
-0.06333184242248535,
0.05983696132898331,
0.03757771849632263,
0.00992016401141882,
-0.00017465651035308838,
-0.037369418889284134,
0.04448448494076729,
0.08704836666584015,
0.08752059936523438,
-0.03342889994382858,
-0.07434025406837463,
-0.051483605057001114,
-0.04843863844871521,
0.06555027514696121,
-0.08911152184009552,
0.01641128398478031,
0.027536490932106972,
0.010502977296710014,
0.08825376629829407,
0.03613544628024101,
-0.13697496056556702,
0.011577177792787552,
0.03541095182299614,
-0.12161035090684891,
-0.10990317165851593,
-0.01905941404402256,
0.03355765715241432,
-0.040378861129283905,
0.05496068671345711,
0.14334118366241455,
-0.03445842117071152,
-0.03164392709732056,
-0.0471482053399086,
0.03971708565950394,
-0.020018167793750763,
0.05159258097410202,
0.06256504356861115,
0.030174480751156807,
-0.07284479588270187,
0.07562598586082458,
0.035720061510801315,
-0.03559029474854469,
0.041754841804504395,
0.043940335512161255,
-0.09469901025295258,
-0.0783744603395462,
-0.057977672666311264,
0.08831285685300827,
-0.02470206655561924,
-0.042680319398641586,
-0.00031755492091178894,
-0.0836796760559082,
0.06795591115951538,
0.07027939707040787,
0.047939594835042953,
0.034856170415878296,
-0.08703365921974182,
0.015206260606646538,
-0.05328639596700668,
0.032591063529253006,
-0.033318281173706055,
-0.003957532346248627,
-0.05589376389980316,
0.05566495656967163,
0.06352830678224564,
0.09772462397813797,
-0.03414865583181381,
-0.0744769424200058,
-0.0835128203034401,
-0.01256981585174799,
-0.05945543944835663,
-0.03506849706172943,
-0.0740194022655487,
-0.007044334430247545,
0.0006366665475070477,
-0.0025920402258634567,
0.02139062061905861,
0.03722289204597473,
-0.043554581701755524,
-0.019498692825436592,
-0.03576362133026123,
0.03676586598157883,
-0.060456544160842896,
0.00727044977247715,
0.01674068160355091,
-0.03595763444900513,
0.09104491025209427,
0.03498198837041855,
-0.012381969951093197,
0.04816805571317673,
-0.024347800761461258,
0.03206493332982063,
-0.020411400124430656,
0.0013384826015681028,
-0.023277699947357178,
-0.10657653212547302,
-0.006563249044120312,
0.005688581615686417,
-0.023735545575618744,
0.011505672708153725,
0.05823352187871933,
-0.07256917655467987,
0.08961452543735504,
0.04443395882844925,
-0.030028901994228363,
-0.07123106718063354,
0.0422542579472065,
-0.01309451274573803,
0.031213659793138504,
0.07239292562007904,
-0.03546702116727829,
0.05236911028623581,
-0.09707233309745789,
-0.02818511798977852,
0.004140338394790888,
-0.005059696733951569,
-0.014619337394833565,
-0.052328262478113174,
-0.0033559752628207207,
0.007756493985652924,
0.1715487539768219,
-0.023106053471565247,
0.034979380667209625,
0.014510437846183777,
0.010707372799515724,
0.052073393017053604,
-0.015382200479507446,
0.0712907612323761,
-0.007055801339447498,
-0.025888534262776375,
-0.009967095218598843,
0.03782239183783531,
0.0036977771669626236,
0.003998514264822006,
0.1451101154088974,
0.04667241498827934,
0.08612062782049179,
0.07468488067388535,
0.01750621385872364,
0.016802987083792686,
-0.13111844658851624,
-0.09390904009342194,
0.006924604065716267,
0.057196274399757385,
-0.018052395433187485,
0.011343002319335938,
0.08983714133501053,
-0.08732946217060089,
0.07022037357091904,
0.04681912064552307,
-0.047898124903440475,
-0.12625852227210999,
-0.19089439511299133,
-0.02342376671731472,
-0.029309364035725594,
-0.00965447910130024,
-0.09131379425525665,
0.01579385995864868,
0.09408412873744965,
0.025580130517482758,
-0.008716952055692673,
0.09740880131721497,
-0.10382270812988281,
-0.030298195779323578,
0.043971478939056396,
-0.028906142339110374,
0.013513104058802128,
0.049204692244529724,
0.024421244859695435,
-0.0074933841824531555,
0.041957322508096695,
0.04183560237288475,
0.044811028987169266,
0.021747607737779617,
0.04893694445490837,
-0.023858463391661644,
-0.07355953752994537,
-0.033149898052215576,
-0.0035712881945073605,
0.052504487335681915,
0.13842160999774933,
0.022814277559518814,
-0.06895778328180313,
0.007110441569238901,
0.10825896263122559,
-0.03246987983584404,
-0.048042088747024536,
-0.106204092502594,
0.24595290422439575,
0.023014742881059647,
0.003315664827823639,
-0.0047773332335054874,
-0.04646933078765869,
0.0017016977071762085,
0.21030943095684052,
0.22507435083389282,
0.002742260228842497,
-0.009924318641424179,
0.008457032963633537,
-0.010610818862915039,
0.037247322499752045,
0.14782238006591797,
0.006088327616453171,
0.2526708245277405,
-0.04887104779481888,
0.041275396943092346,
-0.040206797420978546,
-0.039271604269742966,
-0.09778319299221039,
0.06940805912017822,
-0.007098343688994646,
0.00836796686053276,
-0.03273222595453262,
0.07173153012990952,
-0.04040347412228584,
-0.16811755299568176,
-0.0006431993097066879,
-0.0030390494503080845,
-0.06267176568508148,
0.0102651696652174,
-0.0039466070011258125,
0.01933112181723118,
0.0859127938747406,
-0.017627693712711334,
-0.005373972002416849,
0.12529076635837555,
0.018527919426560402,
-0.09943332523107529,
-0.06013813614845276,
0.11702096462249756,
0.01824851706624031,
0.14176644384860992,
0.011876106262207031,
0.07823990285396576,
0.08713465929031372,
0.020281072705984116,
-0.09509070962667465,
0.04468480870127678,
-0.01916905865073204,
-0.029327407479286194,
0.007135721854865551,
0.10907649248838425,
-0.009661463089287281,
0.059757452458143234,
0.02524370327591896,
-0.09139107912778854,
0.062033623456954956,
0.012948483228683472,
-0.0330696739256382,
-0.08238029479980469,
0.08754067122936249,
-0.09186665713787079,
0.1576552391052246,
0.12487302720546722,
-0.013702917844057083,
-0.045975834131240845,
-0.02945028431713581,
0.02001093327999115,
-0.0017869938164949417,
0.0563010573387146,
-0.02663404308259487,
-0.13507655262947083,
0.017693836241960526,
-0.08872808516025543,
0.02520342916250229,
-0.2525978088378906,
-0.08981260657310486,
0.02979370951652527,
-0.017708171159029007,
-0.020772799849510193,
0.05108217895030975,
0.045931268483400345,
0.02690041996538639,
-0.03521052747964859,
0.023266850039362907,
-0.03953772038221359,
0.05964678153395653,
-0.11091307550668716,
-0.09239964187145233
] |
null | null | transformers | # MultiBERTs Seed 3 Checkpoint 60k (uncased)
Seed 3 intermediate checkpoint at 60k steps of the MultiBERTs (pretrained BERT) model, pretrained on English text using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
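As a concrete, purely illustrative sketch of that feature-extraction use, the snippet below feeds a few sentences through this checkpoint and trains a simple scikit-learn classifier on the pooled features; the sentences, labels, and the choice of `LogisticRegression` are assumptions, not part of the original card:
```python
import torch
from transformers import BertTokenizer, BertModel
from sklearn.linear_model import LogisticRegression  # any standard classifier works here

# Sketch: use pooled [CLS] features from this checkpoint as classifier inputs.
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-60k')
model = BertModel.from_pretrained('multiberts-seed-3-60k')
model.eval()

sentences = ["I loved this film.", "This was a waste of time."]
labels = [1, 0]  # toy sentiment labels

with torch.no_grad():
    enc = tokenizer(sentences, padding=True, return_tensors='pt')
    features = model(**enc).pooler_output.numpy()  # one vector per sentence

clf = LogisticRegression().fit(features, labels)
print(clf.predict(features))
```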
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
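For the fine-tuning use case described in this section, a rough sketch (not an official recipe) of putting a sequence classification head on this checkpoint and taking a single training step could look like this; the two-label setup, learning rate, and example text are illustrative assumptions:
```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

# Sketch: add a classification head to this checkpoint and run one update.
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-60k')
model = BertForSequenceClassification.from_pretrained('multiberts-seed-3-60k', num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

batch = tokenizer("A single labeled training example.", return_tensors='pt')
labels = torch.tensor([1])

outputs = model(**batch, labels=labels)  # the head computes a cross-entropy loss
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
```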
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-60k')
model = BertModel.from_pretrained("multiberts-seed-3-60k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-3"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-3-60k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 3 Checkpoint 60k (uncased)
Seed 3 intermediate checkpoint at 60k steps of the MultiBERTs (pretrained BERT) model, pretrained on English text using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitations and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 3 Checkpoint 60k (uncased)\nSeed 3 intermediate checkpoint 60k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 3 Checkpoint 60k (uncased)\nSeed 3 intermediate checkpoint 60k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 3 Checkpoint 60k (uncased)\nSeed 3 intermediate checkpoint 60k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08268263936042786,
-0.001758853904902935,
-0.0021647631656378508,
0.06808999925851822,
0.08666355907917023,
0.0012924475595355034,
0.11942075192928314,
0.05029852315783501,
-0.032019734382629395,
0.02442021481692791,
0.09226895868778229,
0.034507423639297485,
0.04187913239002228,
0.06636720895767212,
0.0960259661078453,
-0.25794655084609985,
0.049643274396657944,
-0.06475825607776642,
0.05775752663612366,
0.07697756588459015,
0.10003508627414703,
-0.07100917398929596,
0.0637141615152359,
0.03402786701917648,
-0.08074657618999481,
-0.014565972611308098,
-0.017898550257086754,
-0.03431110084056854,
0.10031716525554657,
0.06780333817005157,
0.0603497177362442,
0.0017764884978532791,
0.058246150612831116,
-0.09087696671485901,
0.01646294817328453,
0.04465252906084061,
-0.0017633172683417797,
0.023800181224942207,
-0.007156293839216232,
0.015173444524407387,
0.10836256295442581,
0.035449232906103134,
0.07750708609819412,
0.035148393362760544,
-0.09601649641990662,
-0.10840203613042831,
-0.08163368701934814,
0.10003112256526947,
0.053645502775907516,
0.043742336332798004,
-0.006588418036699295,
0.077508345246315,
-0.033865492790937424,
0.07439549267292023,
0.11289496719837189,
-0.2532646954059601,
-0.009253323078155518,
0.07123786956071854,
0.044889677315950394,
0.04268411546945572,
0.012527030892670155,
0.02659398503601551,
0.004513435065746307,
0.04484563320875168,
0.02761385217308998,
-0.023573383688926697,
0.12584295868873596,
-0.044063009321689606,
-0.15223272144794464,
-0.042187996208667755,
0.12180445343255997,
-0.005298737436532974,
-0.12514865398406982,
-0.10248417407274246,
-0.029931968078017235,
0.1169113889336586,
-0.0032664481550455093,
-0.017912540584802628,
-0.0037061842158436775,
0.010429869405925274,
0.021857721731066704,
-0.09178196638822556,
-0.08545756340026855,
-0.02952297031879425,
-0.03690676391124725,
0.12670950591564178,
0.046127356588840485,
0.05042573809623718,
-0.035142384469509125,
0.08728380501270294,
-0.1163388341665268,
-0.038299836218357086,
-0.05263528600335121,
-0.08354842662811279,
-0.01925947517156601,
0.008508405648171902,
-0.02794124186038971,
-0.08464360237121582,
-0.05996965244412422,
0.12167048454284668,
0.03289906680583954,
0.03177989274263382,
-0.0005859960801899433,
0.04079921543598175,
0.07386674731969833,
0.09390708804130554,
-0.039304666221141815,
0.049857910722494125,
0.030084379017353058,
-0.02239549532532692,
0.06000640243291855,
-0.050601232796907425,
-0.100996233522892,
0.07913658022880554,
-0.0013722609728574753,
0.039273038506507874,
0.024113841354846954,
0.03554290905594826,
-0.012552505359053612,
-0.07356096059083939,
0.16601547598838806,
-0.07764716446399689,
-0.01191291306167841,
-0.016901766881346703,
0.011583993211388588,
0.048380620777606964,
0.029487334191799164,
-0.005583536345511675,
-0.04699504002928734,
-0.005546252243220806,
-0.05455183610320091,
-0.02717055007815361,
-0.05658167600631714,
-0.11800000816583633,
-0.0003196592442691326,
-0.04133901000022888,
-0.03355035185813904,
-0.13906925916671753,
-0.21770039200782776,
-0.019751425832509995,
0.06420502811670303,
-0.0030477168038487434,
-0.008815653622150421,
0.023095471784472466,
0.01575555093586445,
-0.020694805309176445,
0.010340605862438679,
-0.0432635135948658,
-0.0013279449194669724,
-0.006780313327908516,
-0.02958809956908226,
0.056467242538928986,
-0.04066508635878563,
0.024024469777941704,
-0.06922955811023712,
0.02216215990483761,
-0.21015603840351105,
0.08931005746126175,
-0.034104470163583755,
0.000689089298248291,
-0.03867357224225998,
-0.044456593692302704,
0.008023219183087349,
0.04670030623674393,
-0.008853784762322903,
0.11678756773471832,
-0.1370922327041626,
-0.050316400825977325,
0.1791205257177353,
-0.1580861210823059,
-0.0009321793913841248,
0.09813015908002853,
-0.04819222912192345,
0.05418086051940918,
0.13165664672851562,
0.09886307269334793,
0.07896403223276138,
-0.07472512871026993,
0.009866656735539436,
0.06131022423505783,
-0.06695490330457687,
0.0553923174738884,
0.0905103087425232,
-0.02575904317200184,
-0.13359102606773376,
0.029258567839860916,
-0.07742023468017578,
-0.01001633983105421,
-0.02494296245276928,
-0.020632581785321236,
0.009071724489331245,
-0.037151630967855453,
0.02690230682492256,
0.006414814852178097,
0.01746431179344654,
-0.04076596722006798,
-0.081825852394104,
0.029374778270721436,
0.07462191581726074,
-0.07118505984544754,
0.04243103414773941,
-0.07150784134864807,
0.06031070649623871,
-0.07308345288038254,
-0.005584376864135265,
-0.1676400601863861,
-0.02686883695423603,
0.04464523494243622,
-0.04608995094895363,
0.04957595095038414,
0.09049646556377411,
0.0041238232515752316,
0.12238768488168716,
-0.03902270644903183,
0.0029906046111136675,
-0.007849153131246567,
-0.011421039700508118,
-0.048382651060819626,
-0.12192635238170624,
-0.08133302628993988,
-0.06939960271120071,
0.09857466071844101,
-0.06970858573913574,
0.028817683458328247,
-0.07014606893062592,
-0.02132987789809704,
-0.00969354622066021,
-0.05925827845931053,
-0.0030484935268759727,
0.010035580955445766,
-0.029744915664196014,
-0.04733927175402641,
0.04977456480264664,
0.04873785749077797,
-0.05869285762310028,
0.07767077535390854,
-0.10844399780035019,
-0.06381301581859589,
0.0545819029211998,
0.014490031637251377,
-0.08181054890155792,
0.08691632747650146,
-0.02105584740638733,
-0.012497751973569393,
-0.05792057514190674,
-0.04358328878879547,
0.19269201159477234,
-0.0219368077814579,
0.10010585188865662,
-0.09107150137424469,
-0.0003872967790812254,
0.026727356016635895,
-0.04523129761219025,
-0.019091933965682983,
0.059892185032367706,
0.0540786050260067,
-0.1873386800289154,
0.016463302075862885,
0.055268801748752594,
0.07745201885700226,
0.11358281970024109,
0.026851188391447067,
-0.024946028366684914,
-0.04590671509504318,
-0.010670527815818787,
0.005797702353447676,
0.054936233907938004,
-0.02408783882856369,
-0.008614100515842438,
0.030581913888454437,
0.05848737061023712,
0.017334282398223877,
-0.07977712154388428,
0.03316640108823776,
0.06521593034267426,
-0.01672137901186943,
-0.03897944837808609,
-0.02368566021323204,
-0.0608796626329422,
0.06158161908388138,
0.053855109959840775,
0.03451129421591759,
0.027081381529569626,
-0.01546988170593977,
-0.13545769453048706,
0.1896369457244873,
-0.11330464482307434,
-0.25677815079689026,
-0.10671789944171906,
-0.062361009418964386,
-0.027524864301085472,
0.039997074753046036,
0.05694808065891266,
-0.03286483511328697,
-0.043818309903144836,
-0.11694197356700897,
0.060951679944992065,
-0.06594016402959824,
-0.03249688073992729,
-0.01044205017387867,
-0.052967675030231476,
-0.019675422459840775,
-0.12716233730316162,
-0.013146733865141869,
-0.032219626009464264,
-0.07660342752933502,
0.008469721302390099,
-0.03714413940906525,
0.028544746339321136,
0.13464787602424622,
0.03750000149011612,
-0.01838522218167782,
-0.017962336540222168,
0.19275647401809692,
0.010671989992260933,
0.05832946300506592,
0.11364540457725525,
-0.025609899312257767,
0.05492541193962097,
0.04488932341337204,
0.02462225779891014,
-0.04980824887752533,
0.014894342049956322,
-0.011878454126417637,
-0.12144250422716141,
-0.17476370930671692,
-0.07038780301809311,
-0.0046054041013121605,
0.006676154676824808,
0.018314020708203316,
0.036229342222213745,
0.022221703082323074,
0.039073631167411804,
-0.030680684372782707,
0.025790244340896606,
-0.01402321457862854,
0.08179870247840881,
0.02791111171245575,
-0.07456634938716888,
0.09391383081674576,
-0.05915077403187752,
0.015118410810828209,
0.10955166071653366,
-0.057754818350076675,
0.18588346242904663,
0.02675153873860836,
0.06130418926477432,
0.10215775668621063,
0.018606960773468018,
0.05189410224556923,
0.08813796192407608,
-0.046107370406389236,
0.004500353708863258,
-0.0627376139163971,
-0.05270598456263542,
-0.034637462347745895,
0.04734277352690697,
0.02976464293897152,
0.017065681517124176,
-0.11809884756803513,
0.02112175151705742,
-0.0009626032551750541,
0.138889342546463,
0.04695330932736397,
-0.12128277122974396,
-0.11994822323322296,
0.03449811786413193,
-0.04497198760509491,
-0.059199683368206024,
0.029891755431890488,
0.055148590356111526,
-0.15395042300224304,
0.047747932374477386,
-0.006297942250967026,
0.06561484187841415,
-0.09107331931591034,
0.016094084829092026,
-0.04503197222948074,
0.0011694077402353287,
0.0044828555546700954,
0.06941331923007965,
-0.13866828382015228,
0.10433749109506607,
0.020628686994314194,
0.048368196934461594,
-0.07894273102283478,
0.0152397770434618,
-0.00930093415081501,
0.10594601184129715,
0.11692431569099426,
0.042678866535425186,
-0.05438222363591194,
-0.017603561282157898,
-0.04828536882996559,
0.020101942121982574,
0.0621003583073616,
-0.0779680535197258,
0.06162635236978531,
0.007213427219539881,
0.007863091304898262,
-0.022240176796913147,
0.020021650940179825,
-0.13339084386825562,
-0.12255366146564484,
0.06181976944208145,
-0.07901100814342499,
-0.1013992428779602,
-0.05586254596710205,
-0.06372227519750595,
-0.051703326404094696,
0.21275538206100464,
-0.11527124047279358,
-0.08908644318580627,
-0.09916678816080093,
-0.01582593098282814,
0.04325827583670616,
-0.06493698060512543,
0.043002236634492874,
-0.03853927180171013,
0.09178519248962402,
-0.04759500175714493,
-0.11108410358428955,
0.03521985933184624,
-0.11486229300498962,
-0.11418810486793518,
-0.04443658888339996,
0.10627247393131256,
0.11468476802110672,
0.03955776244401932,
0.010716577060520649,
0.011465017683804035,
-0.00020845234394073486,
-0.11739207804203033,
0.01720103807747364,
0.13178007304668427,
0.00031346455216407776,
0.07281509041786194,
-0.0607924610376358,
0.028976716101169586,
-0.017560087144374847,
0.0003970582038164139,
0.13376806676387787,
0.1858976185321808,
-0.0609380379319191,
0.1750470995903015,
0.1997588574886322,
-0.10350514948368073,
-0.19260373711585999,
-0.05285528674721718,
-0.0011486541479825974,
0.04489884153008461,
0.05207834765315056,
-0.18620565533638,
0.08994694799184799,
0.03405700996518135,
-0.03233107924461365,
0.014336999505758286,
-0.23359958827495575,
-0.11005176603794098,
0.08786512911319733,
0.05786159262061119,
0.18698546290397644,
-0.08081655204296112,
-0.03831016644835472,
-0.015492763370275497,
-0.03571365028619766,
0.05118941888213158,
-0.03604366257786751,
0.09252061694860458,
0.005117844790220261,
-0.02775464579463005,
0.002086515538394451,
-0.03185572102665901,
0.09369251132011414,
0.040319234132766724,
0.02352120541036129,
-0.07161609828472137,
-0.005587780848145485,
0.11298500746488571,
-0.03873732313513756,
0.09892888367176056,
0.03855416178703308,
0.07437938451766968,
-0.09458979964256287,
-0.06002795696258545,
-0.07689575105905533,
0.04496118798851967,
-0.04218436777591705,
-0.05637212470173836,
-0.063142791390419,
0.05984313786029816,
0.03664017096161842,
0.01112400833517313,
0.0031661204993724823,
-0.038321856409311295,
0.04426853731274605,
0.08809532225131989,
0.0862661823630333,
-0.03150689974427223,
-0.0747300237417221,
-0.05262849107384682,
-0.04759342968463898,
0.06609735637903214,
-0.09155687689781189,
0.016095472499728203,
0.026697421446442604,
0.011228104121983051,
0.08986473083496094,
0.03473508358001709,
-0.13917267322540283,
0.010825961828231812,
0.03380429744720459,
-0.12314879149198532,
-0.1088942438364029,
-0.01883567124605179,
0.031765975058078766,
-0.038149964064359665,
0.056608788669109344,
0.14516893029212952,
-0.03343351185321808,
-0.03119928389787674,
-0.04699156805872917,
0.0381719209253788,
-0.019115950912237167,
0.05041045695543289,
0.06408592313528061,
0.030126607045531273,
-0.07289149612188339,
0.07460382580757141,
0.034836627542972565,
-0.03414345160126686,
0.04361169785261154,
0.045020971447229385,
-0.09498200565576553,
-0.07793359458446503,
-0.05898115038871765,
0.08952555805444717,
-0.02547868900001049,
-0.04541388154029846,
-0.0020889323204755783,
-0.08093947172164917,
0.0694856345653534,
0.07273221015930176,
0.0480644665658474,
0.03557780012488365,
-0.08725517988204956,
0.015336786396801472,
-0.05251054838299751,
0.0330624133348465,
-0.0344901904463768,
-0.004499293863773346,
-0.05495703965425491,
0.061231739819049835,
0.06471922993659973,
0.09788571298122406,
-0.0340333953499794,
-0.07562270760536194,
-0.08368562161922455,
-0.01257765106856823,
-0.061051808297634125,
-0.03527715802192688,
-0.07593639194965363,
-0.006970124784857035,
0.00020817015320062637,
-0.001883581280708313,
0.02178128808736801,
0.037230975925922394,
-0.04283621907234192,
-0.01906491629779339,
-0.035660456866025925,
0.03699978068470955,
-0.06310640275478363,
0.008115211501717567,
0.015420117415487766,
-0.03635604679584503,
0.09237554669380188,
0.03633680194616318,
-0.01210024580359459,
0.04753297194838524,
-0.029013950377702713,
0.03320642560720444,
-0.020427651703357697,
0.00006184540688991547,
-0.023747621104121208,
-0.10824768245220184,
-0.0064691281877458096,
0.004581758752465248,
-0.0243549644947052,
0.010545001365244389,
0.056710384786129,
-0.07248872518539429,
0.0897011086344719,
0.045647017657756805,
-0.030893370509147644,
-0.07161693274974823,
0.04200008884072304,
-0.015977486968040466,
0.030614003539085388,
0.07162351906299591,
-0.03397009149193764,
0.05257808044552803,
-0.09718956798315048,
-0.028144918382167816,
0.004679189529269934,
-0.005083870142698288,
-0.015135644003748894,
-0.05308714509010315,
-0.003711443394422531,
0.007001882418990135,
0.17454436421394348,
-0.023526731878519058,
0.036176618188619614,
0.013837249018251896,
0.009417985565960407,
0.051851898431777954,
-0.01571044884622097,
0.07167832553386688,
-0.007008691318333149,
-0.025265609845519066,
-0.011037983000278473,
0.03824188560247421,
0.004404628649353981,
0.004083255305886269,
0.1431799829006195,
0.04682955890893936,
0.08547942340373993,
0.0744730532169342,
0.017972346395254135,
0.016373760998249054,
-0.13421113789081573,
-0.08780030161142349,
0.007555731572210789,
0.05725117027759552,
-0.018099170178174973,
0.01571488007903099,
0.09360312670469284,
-0.08735797554254532,
0.07059492915868759,
0.04854010418057442,
-0.047264762222766876,
-0.12693452835083008,
-0.19289256632328033,
-0.023765580728650093,
-0.029591819271445274,
-0.009931655600667,
-0.0917528048157692,
0.015762005001306534,
0.09415390342473984,
0.0265770573168993,
-0.009090980514883995,
0.09538334608078003,
-0.1024341806769371,
-0.03180466592311859,
0.04296557605266571,
-0.029029740020632744,
0.013856633566319942,
0.04776893928647041,
0.023915540426969528,
-0.00605432316660881,
0.041081443428993225,
0.0425993874669075,
0.043747175484895706,
0.02643093839287758,
0.04993486404418945,
-0.024780958890914917,
-0.0741816908121109,
-0.03287458047270775,
-0.002792827319353819,
0.05229366570711136,
0.13748596608638763,
0.023442037403583527,
-0.0695212185382843,
0.007109905127435923,
0.1075960099697113,
-0.031220927834510803,
-0.04949498921632767,
-0.10746302455663681,
0.24481403827667236,
0.022656423971056938,
0.0031558501068502665,
-0.005649769678711891,
-0.046139709651470184,
0.00406096875667572,
0.2084844559431076,
0.2253853976726532,
0.003573222551494837,
-0.009295244701206684,
0.009891614317893982,
-0.010587627999484539,
0.03702130168676376,
0.14444617927074432,
0.00553106889128685,
0.2540954351425171,
-0.04841338098049164,
0.03972337394952774,
-0.04053095728158951,
-0.03962622582912445,
-0.09810362756252289,
0.06910640001296997,
-0.0055005610920488834,
0.0076396046206355095,
-0.0316692553460598,
0.07167217135429382,
-0.040803324431180954,
-0.17054389417171478,
-0.00048230215907096863,
-0.002198517322540283,
-0.06175319105386734,
0.010715941898524761,
-0.004274668637663126,
0.020568104460835457,
0.08514069020748138,
-0.017259936779737473,
-0.00607125461101532,
0.12560828030109406,
0.018713314086198807,
-0.09959769248962402,
-0.057739704847335815,
0.1168927252292633,
0.018929488956928253,
0.1399538516998291,
0.01092634815722704,
0.07922719419002533,
0.08823651075363159,
0.019800927489995956,
-0.0929129421710968,
0.04496404528617859,
-0.018588026985526085,
-0.028145479038357735,
0.006054457277059555,
0.10862927883863449,
-0.009033296257257462,
0.06309454143047333,
0.026517031714320183,
-0.09057453274726868,
0.06343294680118561,
0.014417916536331177,
-0.03471670299768448,
-0.08238118141889572,
0.08787968754768372,
-0.09166757017374039,
0.15764522552490234,
0.12467846274375916,
-0.013735410757362843,
-0.04668721556663513,
-0.03127684444189072,
0.02005399391055107,
-0.001176356803625822,
0.053478874266147614,
-0.027013404294848442,
-0.1338614821434021,
0.01964849978685379,
-0.083180733025074,
0.0261578056961298,
-0.2522159814834595,
-0.08916877955198288,
0.030290203168988228,
-0.016799742355942726,
-0.020122092217206955,
0.04905132204294205,
0.04485071077942848,
0.02730078250169754,
-0.035672955214977264,
0.02399390935897827,
-0.038536399602890015,
0.059144213795661926,
-0.11137478053569794,
-0.09199657291173935
] |
null | null | transformers | # MultiBERTs Seed 3 Checkpoint 700k (uncased)
Seed 3 intermediate checkpoint 700k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
Each MultiBERTs model is a transformer model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
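As a concrete illustration of the MLM objective described above (a sketch that is not part of the original card; the hub identifier is the one this checkpoint is published under, and the exact predictions are not guaranteed):
```python
from transformers import pipeline

# The fill-mask pipeline exercises the pretrained MLM head directly.
unmasker = pipeline("fill-mask", model="MultiBertGunjanPatrick/multiberts-seed-3-700k")
unmasker("Paris is the [MASK] of France.")
# Returns the top candidate tokens with scores, e.g. 'capital', 'city', ...
```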
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-3-700k')
model = BertModel.from_pretrained("multiberts-seed-3-700k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
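Continuing the snippet above (an illustrative follow-up, not part of the original card), the returned `output` is a standard `BertModel` output whose fields hold the extracted features:
```python
token_features = output.last_hidden_state  # per-token features, shape (batch, seq_len, 768)
sentence_features = output.pooler_output   # pooled [CLS] representation, shape (batch, 768)
```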
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
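For a concrete view of this format (a sketch; `bert-base-uncased` is used here only because any uncased BERT WordPiece tokenizer produces the same layout):
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoded = tokenizer("The cat sat.", "It was on the mat.")
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
# ['[CLS]', 'the', 'cat', 'sat', '.', '[SEP]', 'it', 'was', 'on', 'the', 'mat', '.', '[SEP]']
```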
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in
the other cases, sentence B is another random sentence from the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the combined
length of the two "sentences" is less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-3"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-3-700k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 3 Checkpoint 700k (uncased)
Seed 3 intermediate checkpoint 700k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
Each MultiBERTs model is a transformer model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in
the other cases, sentence B is another random sentence from the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the combined
length of the two "sentences" is less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 3 Checkpoint 700k (uncased)\nSeed 3 intermediate checkpoint 700k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 3 Checkpoint 700k (uncased)\nSeed 3 intermediate checkpoint 700k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 3 Checkpoint 700k (uncased)\nSeed 3 intermediate checkpoint 700k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
-0.08264574408531189,
-0.007850099354982376,
-0.0021160240285098553,
0.06845241785049438,
0.08497998863458633,
0.0019320831634104252,
0.11471828073263168,
0.05016745626926422,
-0.032982610166072845,
0.02202805131673813,
0.09403881430625916,
0.031431645154953,
0.04322215914726257,
0.06742087006568909,
0.0968376100063324,
-0.2589453458786011,
0.05093396455049515,
-0.06391319632530212,
0.060254618525505066,
0.07680147886276245,
0.10110417008399963,
-0.07158945500850677,
0.0622674785554409,
0.03463434427976608,
-0.07946330308914185,
-0.016275379806756973,
-0.01828780025243759,
-0.03600852191448212,
0.09912220388650894,
0.0693192109465599,
0.05854414403438568,
0.002850128337740898,
0.058915115892887115,
-0.08958059549331665,
0.015946030616760254,
0.04473679140210152,
... (remaining values of this record's 768-dimensional embeddings vector omitted) ...
] |
null | null | transformers | # MultiBERTs Seed 3 Checkpoint 800k (uncased)
Seed 3 intermediate checkpoint 800k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multiberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input and then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
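Both pretraining objectives are exposed by the `BertForPreTraining` class in `transformers`. The sketch below is illustrative only and not part of the original card; it loads the checkpoint by the full Hub id given in this card's metadata and assumes the saved weights include both pretraining heads (which pretraining checkpoints normally do).
```python
import torch
from transformers import BertTokenizer, BertForPreTraining

repo = "MultiBertGunjanPatrick/multiberts-seed-3-800k"  # full Hub id from this card's metadata
tokenizer = BertTokenizer.from_pretrained(repo)
model = BertForPreTraining.from_pretrained(repo)

inputs = tokenizer("The cat sat on the [MASK].", "It purred happily.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# MLM head: one score over the WordPiece vocabulary for every input position.
print(outputs.prediction_logits.shape)        # (batch, sequence_length, vocab_size)
# NSP head: two logits ("is next" vs. "not next") for the sentence pair.
print(outputs.seq_relationship_logits.shape)  # (batch, 2)
```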
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-3-800k')
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-3-800k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
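As a follow-up sketch (not part of the original card), the returned features can be inspected as shown below. The full Hub id from this card's metadata is used, and the shapes assume the standard BERT-base hidden size of 768.
```python
import torch
from transformers import BertTokenizer, BertModel

repo = "MultiBertGunjanPatrick/multiberts-seed-3-800k"  # full Hub id from this card's metadata
tokenizer = BertTokenizer.from_pretrained(repo)
model = BertModel.from_pretrained(repo)

encoded_input = tokenizer("Replace me by any text you'd like.", return_tensors='pt')
with torch.no_grad():
    output = model(**encoded_input)

# One 768-dimensional vector per input token (including [CLS] and [SEP]).
print(output.last_hidden_state.shape)
# A single 768-dimensional sentence-level vector derived from the [CLS] token.
print(output.pooler_output.shape)
```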
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
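For convenience, here is a hedged adaptation of that snippet to this checkpoint (not part of the original card; it assumes the saved weights include the masked-language-modeling head, which pretraining checkpoints normally do):
```python
from transformers import pipeline

# Illustrative adaptation of the bert-base-uncased bias probe to this checkpoint.
unmasker = pipeline("fill-mask", model="MultiBertGunjanPatrick/multiberts-seed-3-800k")
print(unmasker("The man worked as a [MASK]."))
print(unmasker("The woman worked as a [MASK]."))
```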
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (an illustrative sketch appears after this list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
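Below is a minimal, self-contained sketch of the 80/10/10 corruption scheme listed above. It is illustrative only, not the authors' pretraining code; a real implementation operates on WordPiece token ids rather than whitespace-split words.
```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15):
    """Toy illustration of the MLM corruption described in the list above."""
    corrupted, labels = [], []
    for token in tokens:
        if random.random() < mask_prob:      # 15% of tokens become prediction targets
            labels.append(token)
            roll = random.random()
            if roll < 0.8:                   # 80%: replace with [MASK]
                corrupted.append("[MASK]")
            elif roll < 0.9:                 # 10%: replace with a random token
                corrupted.append(random.choice(vocab))
            else:                            # 10%: keep the original token
                corrupted.append(token)
        else:
            labels.append(None)              # not masked, so not predicted
            corrupted.append(token)
    return corrupted, labels

words = "the quick brown fox jumps over the lazy dog".split()
print(mask_tokens(words, vocab=words))
```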
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
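Here is a hedged sketch of an equivalent optimizer and schedule configuration in PyTorch. It is illustrative only and not the authors' actual training code; the numbers are taken from the paragraph above, and `AdamW` is used to realise Adam with the quoted weight decay.
```python
import torch
from transformers import BertConfig, BertForPreTraining, get_linear_schedule_with_warmup

model = BertForPreTraining(BertConfig())  # randomly initialised BERT-base, as at the start of pretraining

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1e-4,
    betas=(0.9, 0.999),
    weight_decay=0.01,
)
# 10,000 warmup steps, then linear decay over the two million training steps.
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10_000,
    num_training_steps=2_000_000,
)
```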
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-3"], "datasets": ["bookcorpus", "wikipedia"]} | null | MultiBertGunjanPatrick/multiberts-seed-3-800k | [
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-3",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2106.16163"
] | [
"en"
] | TAGS
#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us
| # MultiBERTs Seed 3 Checkpoint 800k (uncased)
Seed 3 intermediate checkpoint 800k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This is an intermediate checkpoint.
The final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input and then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.
## Training data
The MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
</a>
| [
"# MultiBERTs Seed 3 Checkpoint 800k (uncased)\nSeed 3 intermediate checkpoint 800k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
"TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# MultiBERTs Seed 3 Checkpoint 800k (uncased)\nSeed 3 intermediate checkpoint 800k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.",
"## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\nHere is how to use this model to get the features of a given text in PyTorch:",
"### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.",
"## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).",
"## Training procedure",
"### Preprocessing\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\nThe full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size\nof 256. The sequence length was set to 512 throughout. The optimizer\nused is Adam with a learning rate of 1e-4, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"### BibTeX entry and citation info\n\n<a href=\"URL\n\t<img width=\"300px\" src=\"URL\n</a>"
] | [
71,
145,
335,
134,
25,
95,
48,
3,
222,
111,
34
] | [
"passage: TAGS\n#transformers #pytorch #bert #pretraining #exbert #multiberts #multiberts-seed-3 #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2106.16163 #license-apache-2.0 #endpoints_compatible #region-us \n# MultiBERTs Seed 3 Checkpoint 800k (uncased)\nSeed 3 intermediate checkpoint 800k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This is an intermediate checkpoint.\nThe final checkpoint can be found at multiberts-seed-3. This model is uncased: it does not make a difference\nbetween english and English.\n\nDisclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by gchhablani.",
"passage: ## Model description\nMultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it\nwas pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it\nwas pretrained with two objectives:\n- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run\n the entire masked sentence through the model and has to predict the masked words. This is different from traditional\n recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like\n GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the\n sentence.\n- Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes\n they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to\n predict if the two sentences were following each other or not.\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the MultiBERTs model as inputs.## Intended uses & limitations\nYou can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to\nbe fine-tuned on a downstream task. See the model hub to look for\nfine-tuned versions on a task that interests you.\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.### How to use\nHere is how to use this model to get the features of a given text in PyTorch:### Limitations and bias\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular\ncheckpoint, please try out this checkpoint with the snippet present in the Limitation and bias section of the bert-base-uncased checkpoint.## Training data\nThe MultiBERTs models were pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).## Training procedure"
] | [
... (this record's 768-dimensional embeddings vector omitted) ...
] |