repo | number | state | title | body | created_at | closed_at | comments |
---|---|---|---|---|---|---|---|
transformers | 3,692 | closed | How to use Huggingface pytorch bert to generate the prediction TSV file from the test set of a GLUE task? | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO), where a whole community of PyTorch and
TensorFlow enthusiasts can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
Hello folks! Can you provide a simple example of using the Hugging Face PyTorch BERT model to generate the prediction TSV file from the test set of a GLUE task (such as MRPC), based on the fine-tuned model, so that I can submit the prediction TSV file for each GLUE task to the GLUE leaderboard? Thank you very much.
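For reference, a minimal sketch of what such an export could look like; the checkpoint path, the test pairs, and the output columns below are placeholders rather than anything from this issue, and the exact submission format should be checked against the GLUE instructions:
```python
import csv
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "my-finetuned-mrpc"  # placeholder: path to a fine-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

test_pairs = [("sentence one", "sentence two")]  # placeholder: the MRPC test set
with open("MRPC.tsv", "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerow(["index", "prediction"])
    for i, (s1, s2) in enumerate(test_pairs):
        inputs = tokenizer.encode_plus(s1, s2, return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs)[0]
        writer.writerow([i, logits.argmax(dim=-1).item()])
```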
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**: | 04-08-2020 04:53:23 | 04-08-2020 04:53:23 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>This was implemented ~1 month ago so closing this issue. |
transformers | 3,691 | closed | cannot import name AddedToken | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Albert
Language I am using the model on (English, Chinese ...): English
```
27 from typing import List, Optional, Sequence, Tuple, Union
28
---> 29 from tokenizers import AddedToken, Encoding
30 from tokenizers.decoders import Decoder
31 from tokenizers.implementations import BaseTokenizer

ImportError: cannot import name 'AddedToken'
```
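A quick way to check whether the installed `tokenizers` build actually exposes this symbol (a diagnostic sketch; exact version numbers will vary by environment):
```python
# Diagnostic sketch: AddedToken only exists in newer `tokenizers` releases,
# so a version mismatch with `transformers` shows up as this ImportError.
import tokenizers
import transformers

print("transformers:", transformers.__version__)
print("tokenizers:", tokenizers.__version__)

try:
    from tokenizers import AddedToken  # noqa: F401
    print("AddedToken is available")
except ImportError:
    print("the installed tokenizers is too old for this transformers version")
```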
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name):
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1.
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
| 04-08-2020 04:14:29 | 04-08-2020 04:14:29 | Created another environment from scratch and it got resolved.<|||||>you need to install transformers this way:
```
git clone https://github.com/huggingface/transformers
cd transformers
pip install .
```<|||||>> you need to install transformers this way:
>
> ```
> git clone https://github.com/huggingface/transformers
> cd transformers
> pip install .
> ```
This worked for me, thanks!<|||||>> you need to install transformers this way:
>
> ```
> git clone https://github.com/huggingface/transformers
> cd transformers
> pip install .
> ```
I was having a similar issue on colab:
```
can't pickle AddedToken objects
```
This solution also worked for me. Thanks!<|||||>> you need to install transformers this way:
>
> ```
> git clone https://github.com/huggingface/transformers
> cd transformers
> pip install .
> ```
Thanks much, this worked for me!<|||||>> you need to install transformers this way:
>
> ```
> git clone https://github.com/huggingface/transformers
> cd transformers
> pip install .
> ```
This solution doesn't work for me! I don't know, maybe there is a conflict between `pip` and `conda`. I guess after I installed the `bert_score` package with conda, this error appeared and won't go away. |
transformers | 3,690 | closed | Why does BertSelfAttention not have an Add & Norm layer? | 04-08-2020 02:51:15 | 04-08-2020 02:51:15 | It's in [`BertSelfOutput`](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L261), which is called right after `BertSelfAttention` in [`BertAttention`](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L316). |
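For reference, a simplified sketch of the Add & Norm that lives in `BertSelfOutput` (paraphrased; see the linked source for the exact implementation):
```python
import torch.nn as nn

class BertSelfOutput(nn.Module):
    # Simplified: project the attention output, then apply the residual "Add"
    # and the LayerNorm "Norm" in a single step.
    def __init__(self, config):
        super().__init__()
        self.dense = nn.Linear(config.hidden_size, config.hidden_size)
        self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)

    def forward(self, hidden_states, input_tensor):
        hidden_states = self.dense(hidden_states)
        hidden_states = self.dropout(hidden_states)
        return self.LayerNorm(hidden_states + input_tensor)  # Add & Norm
```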
|
transformers | 3,689 | closed | Can't update the train_batch_size and eval_batch_size for the training image in a docker container | Hi,
I tried to train a couple of multi-label models with the fast-bert library, using the container files to build the Docker image, uploaded it to AWS ECR, and used the AWS helper notebook that's included in the 'sample notebook' folder in the repo. I have trained 3 models and, **regardless of the train_batch_size I set in the hyperparameters.json file, training still reports a total train batch size of 64 and an eval batch size of 128.**
**My questions here:**
- Am I not able to update the train batch size if the training is happening in a container?
- Do the train and eval batch sizes have some relationship? At a glance, it looks like eval_batch_size is double train_batch_size. I would say there shouldn't be any relationship. However, **why is there no parameter in hyperparameters.json to specify the eval_batch_size?**
- The three models I have trained all got a really good accuracy_thresh, above 0.97. **However, one of the models only outputs 2 classes as the top-probability class.** The original data has about 9455 rows and 113 classes. I have also trained it on the BERT TensorFlow version and was able to get multiple labels as the top predicted class. **What could possibly be wrong?** Note that my other 2 models have about 36 and 11 classes, and their top predicted classes all came out reasonable, meaning all 36 and 11 classes showed up as the top predicted class. In addition, I don't see the performance change whenever I change accuracy_thresh after epoch 2.
**Please provide some guidance, as this is going into deployment soon**, but I'm still struggling to figure out why.
| 04-08-2020 02:48:17 | 04-08-2020 02:48:17 | |
transformers | 3,688 | closed | Big cleanup of `glue_convert_examples_to_features` | 04-08-2020 01:02:34 | 04-08-2020 01:02:34 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3688?src=pr&el=h1) Report
> Merging [#3688](https://codecov.io/gh/huggingface/transformers/pull/3688?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/715aa5b1356b878cbab7a7415a1c1b03a7777ae2&el=desc) will **increase** coverage by `0.03%`.
> The diff coverage is `10.63%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3688?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3688 +/- ##
==========================================
+ Coverage 78.02% 78.06% +0.03%
==========================================
Files 104 104
Lines 17710 17708 -2
==========================================
+ Hits 13819 13823 +4
+ Misses 3891 3885 -6
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3688?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/3688/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `24.68% <0.00%> (ø)` | |
| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/3688/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `29.79% <10.86%> (+2.26%)` | :arrow_up: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3688/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.84% <0.00%> (-0.13%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3688?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3688?src=pr&el=footer). Last update [715aa5b...b867779](https://codecov.io/gh/huggingface/transformers/pull/3688?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>There is a combinatorial explosion between `tokenizer classes` x `glue datasets` so it's not super practical to test everything, but:
- I've tested "roberta-large", "bert-base-uncased", "xlnet-based" tokenizers x "sst2" and "mnli" datasets and results are identical ✅
- encoding a batch of sentences or sentence pairs, while padding to a specific length, is really a native feature of a tokenizer at this point, so [those lines](https://github.com/huggingface/transformers/pull/3688/files#diff-8bc8284670454c05520b097dd51ad787R137-R139) in the current PR call the canonical API to do that. If there's a discrepancy with the historical way of tokenizing at this point it's probably outside the scope of this PR.
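For context, a minimal sketch of that canonical batched-encoding call as it looked around the library version discussed here (the model name is just an example):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = ["first sentence", "a second, slightly longer sentence"]
encoded = tokenizer.batch_encode_plus(
    batch,
    max_length=128,
    pad_to_max_length=True,  # pad every example to the same length
    return_tensors="pt",
)
print(encoded["input_ids"].shape)  # (2, 128)
```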
<|||||>**Note, however**, that following this PR the performance boost associated with using a fast tokenizer coupled with using `batch_encode` seems very variable, cc: @n1t0 @mfuntowicz
On my (mac OS) local machine it takes pretty much the same time using a fast and a non-fast tokenizer (even though the fast one burns all my CPU cores).
On a Colab notebook seems like perf varies a lot between executions (https://colab.research.google.com/drive/1DXOegSz7Tyr7MeSHYBBg40kiDhu4-JPr?authuser=1#scrollTo=NlygQfeyg-5b), with the fast tokenizer not always being faster than the other one.
See [notebook](https://colab.research.google.com/drive/1DXOegSz7Tyr7MeSHYBBg40kiDhu4-JPr)
Would be interesting to dive in and do a more systematic benchmark than I did, considering that GLUE is a good benchmark for a real-world training workload. |
|
transformers | 3,687 | closed | Is it possible to use multiprocessing for pipelines? | I am trying to use multiprocessing for pipelines, but it seems that it's not working. I think it's because the pipeline already uses multiprocessing features and so you can't have multiprocessing inception. Anyone able to get it to work? | 04-07-2020 23:29:41 | 04-07-2020 23:29:41 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>I have the same problem. When you close the issue, does it mean that it is fixed? <|||||>I have also met this problem, how can I solve this? Thanks!!!!<|||||>> I have also met this problem, how can I solve this? Thanks!!!!
I switched to using `nlp.pipe`, which is the built-in function for multiprocessing, instead of doing it by hand. |
transformers | 3,686 | closed | Bug in variable name in NER | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Irrelevant
Language I am using the model on (English, Chinese ...): Irrelevant
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: NER
* [ ] my own task or dataset: (give details below)
## Error
```
File "/transformers/examples/ner/utils_ner.py", line 123, in convert_examples_to_features special_tokens_count = tokenizer.num_added_tokens()
AttributeError: 'BertTokenizer' object has no attribute 'num_added_tokens'
```
## Issue
After the update to the new tokenizers, some util files are broken. I found one in examples/ner/utils_ner.py:
line 123 needs to change from `num_added_tokens` to `num_special_tokens_to_add`.
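A self-contained illustration of that rename (the old call is kept as a comment since it fails on recent versions):
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# old call in utils_ner.py, raises AttributeError on recent versions:
# special_tokens_count = tokenizer.num_added_tokens()

# fixed call:
special_tokens_count = tokenizer.num_special_tokens_to_add()
print(special_tokens_count)  # 2 for a single BERT sequence ([CLS] ... [SEP])
```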
| 04-07-2020 22:03:46 | 04-07-2020 22:03:46 | I also see the same issue, has there been a fix for this?
AttributeError: 'BertTokenizer' object has no attribute 'num_added_tokens'<|||||>@minhtuev you can change `num_added_tokens` to `num_special_tokens_to_add`. That fixed it for me. |
transformers | 3,685 | closed | Requesting model for TFAlbertForQuestionAnswering | # 🌟 New model addition
Is there support for adding a TensorFlow version of the AlbertForQuestionAnswering model? I would be happy to contribute the work. This would also enable the `run_squad.py` example script for TensorFlow.
It also looks like [the preprocessing of SQuAD data is different](https://github.com/huggingface/transformers/blob/master/src/transformers/data/processors/squad.py#L349) for PyTorch and TensorFlow. PyTorch has an option to return `all_example_index`, while TensorFlow does not. This means that running the model evaluation script (which takes the max F1/EM over all answers) is only possible in PT. Running SQuAD evaluation in TensorFlow is important for my use case, and I would love to understand the design decisions or difficulties that went into that decision. Again, happy to contribute - any support or objections for aligning the data preprocessing techniques?
| 04-07-2020 21:30:50 | 04-07-2020 21:30:50 | Hi! We don't currently have a `run_tf_squad` script, but we would appreciate a contribution! The preprocessing of SQuAD shouldn't be different between pytorch and tensorflow. We haven't gotten to testing that as we haven't gotten to writing that script yet.
If we're to have a TF SQuAD script, we would have to align the pre-processing techniques as well!<|||||>And we would definitely welcome a PR introducing `TFAlbertForQuestionAnswering`!<|||||>Resolved in https://github.com/huggingface/transformers/commit/6d00033e97e1751a897f2317fdfd35dd853cee29 . |
transformers | 3,684 | closed | Updating the TensorFlow models to work as expected with tokenizers v3.0.0 | Models and tokenizers should work in harmony; this is why it is an API design choice to be able to send the output of `encode_plus` and `batch_encode_plus` straight to the model, in both PyTorch and TensorFlow:
```py
encoded_sequence = tokenizer.encode_plus(sequence)
model(encoded_sequence) # for TensorFlow
model(**encoded_sequence) # for PyTorch
```
With the recent changes of tokenizers-v3.0.0 and the introduction of `BatchEncoding`, the way the TensorFlow models usually identified such inputs didn't work, as it was looking for a `dict` instead of a `BatchEncoding`. This PR patches this.
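As a concrete sketch of the intended usage (the checkpoint name is only an example, and TensorFlow 2 must be installed):
```python
from transformers import BertTokenizer, TFBertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = TFBertModel.from_pretrained("bert-base-uncased")

# encode_plus returns a BatchEncoding, which the TF model should now accept directly
encoded = tokenizer.encode_plus("Hello world", return_tensors="tf")
outputs = model(encoded)
```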
This feature was previously untested; this PR addresses that by adding four different tests for each tokenizer, testing that the tokenizers return correct `BatchEncoding` objects in both PyTorch and TensorFlow which can be fed directly to the model. Both `encode_plus` and `batch_encode_plus` are tested.
Some issues were found, and were patched with this PR. | 04-07-2020 19:46:22 | 04-07-2020 19:46:22 | |
transformers | 3,683 | closed | question-answering pipeline error : too many values to unpack (expected 2) | For some `question-answering` models the pipeline encounters extra tuples from the `.model()` call, where we must have exactly 2.
> This stub results in the following error:
```python
from transformers.pipelines import pipeline
model_name = "mrm8488/bert-uncased-finetuned-qnli"
nlp = pipeline("question-answering", model=model_name, tokenizer=model_name)
QA_input = {
"question": "Why is model conversion important?",
"context": "The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.",
}
res = nlp(QA_input)
```
> error message
```bash
Traceback (most recent call last):
File "test3.py", line 10, in <module>
res = nlp(QA_input)
File "/opt/conda/lib/python3.7/site-packages/transformers/pipelines.py", line 1010, in __call__
start, end = self.model(**fw_args)
ValueError: too many values to unpack (expected 2)
```
> env
```text
- `transformers` version: 2.8.0
- Platform: Linux-4.9.184-linuxkit-x86_64-with-debian-buster-sid
- Python version: 3.7.4
- PyTorch version (GPU?): 1.4.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: nope
- Using distributed or parallel set-up in script?: nope
``` | 04-07-2020 19:04:29 | 04-07-2020 19:04:29 | hmmm, `mrm8488/bert-uncased-finetuned-qnli` is a sequence classification model, not a QA model.
You probably get warnings while loading it in a QA Pipeline.
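For comparison, the same call with a checkpoint that was actually fine-tuned for extractive QA should return a span instead of failing (a sketch; any SQuAD-style checkpoint would do):
```python
from transformers.pipelines import pipeline

qa_model = "distilbert-base-cased-distilled-squad"  # an extractive-QA checkpoint
nlp = pipeline("question-answering", model=qa_model, tokenizer=qa_model)
print(nlp({
    "question": "Why is model conversion important?",
    "context": "The option to convert models between FARM and transformers gives freedom to the user.",
}))
```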
Does this happen with other (QA) models?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,682 | closed | [T5, generation] Add decoder caching for T5 | This PR greatly speeds up the autoregressive decoding for T5 by storing past key / value states.
The summarization test: https://github.com/huggingface/transformers/blob/500aa12318ce5acd289d5edb6cb8266b3c3b162e/tests/test_modeling_t5.py#L260 now takes only 44s whereas before it took 311s -> 7.5x Speed up
This will also significantly speed up the translation and summarization pipelines when using T5.
- [x] Add key value state caching
- [x] Test for equal output on hard-coded tests
- [x] Add simple past tests including using an attention mask
- [x] update the docstring
- [x] clean up code
The caching design was already sketched out in commented-out code. It was cleaned up, made functional, and implemented very similarly to GPT-2's.
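The idea in toy form (this is deliberately not T5's actual interface, just an illustration of why caching avoids recomputing the whole prefix at every step):
```python
import torch

d = 8
w_k, w_v = torch.nn.Linear(d, d), torch.nn.Linear(d, d)

cached_k, cached_v = [], []
for step in range(5):
    new_hidden = torch.randn(1, 1, d)  # hidden state of the newly generated token
    cached_k.append(w_k(new_hidden))   # compute K/V once for the new position...
    cached_v.append(w_v(new_hidden))
    k = torch.cat(cached_k, dim=1)     # ...and reuse everything computed earlier
    v = torch.cat(cached_v, dim=1)
    # attention for the current step would attend over `k` and `v` here
```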
### IMPORTANT:
This PR has a breaking change, in that it increases the default output length of T5Model and T5ForConditionalGeneration from 4 to 5 (including the `past_key_value_states`).
### Future PR:
- [ ] Do the same for TF if this PR is accepted.
Would be nice if you could take a look @craffel @thomwolf @LysandreJik @sshleifer | 04-07-2020 18:32:37 | 04-07-2020 18:32:37 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3682?src=pr&el=h1) Report
> Merging [#3682](https://codecov.io/gh/huggingface/transformers/pull/3682?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a594ee9c84dde933a3d0b4e07ff2994a1960574c&el=desc) will **increase** coverage by `0.07%`.
> The diff coverage is `89.76%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3682?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3682 +/- ##
==========================================
+ Coverage 78.02% 78.09% +0.07%
==========================================
Files 104 104
Lines 17710 17786 +76
==========================================
+ Hits 13818 13890 +72
- Misses 3892 3896 +4
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3682?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3682/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.21% <89.68%> (+1.72%)` | :arrow_up: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3682/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.03% <100.00%> (+0.18%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3682?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3682?src=pr&el=footer). Last update [a594ee9...67ae81f](https://codecov.io/gh/huggingface/transformers/pull/3682?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,681 | closed | updating to new transformer | 04-07-2020 18:06:44 | 04-07-2020 18:06:44 | ||
transformers | 3,680 | closed | How to use GPT2DoubleHeadsModel? | Hello,
I have a question about the example shown in the GPT2DoubleHeadsModel documentation page:
https://huggingface.co/transformers/model_doc/gpt2.html#gpt2doubleheadsmodel
In the example, the input to the GPT2DoubleHeadsModel is simply a set of choices. But what if the multiple-choice question that I want to process also includes a question text? So, for example:
Bob likes candy; what does Bob like?
a. Bag
b. Burger
c. Candy
d. Pencil
In the example above, the question text would be "Bob likes candy; what does Bob like?"
and the choices would be Bag, Burger, Candy, and Pencil. How should I pre-process my multiple-choice questions to be used with the GPT2DoubleHeadsModel?
For example, given that the token that will be used for the classification is "<|endoftext|>" (this is the default eos token for the GPT2 models), would the following be fine?
```python
import torch
from transformers import GPT2Tokenizer, GPT2DoubleHeadsModel
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2DoubleHeadsModel.from_pretrained('gpt2')
choices = [ "Bob likes candy ; what does Bob like ? Bag <|endoftext|>",
"Bob likes candy ; what does Bob like ? Burger <|endoftext|>",
"Bob likes candy ; what does Bob like ? Candy <|endoftext|>",
"Bob likes candy ; what does Bob like ? Pencil <|endoftext|>"]
encoded_choices = [tokenizer.encode(s) for s in choices]
eos_token_location = [tokens.index(tokenizer.eos_token_id) for tokens in encoded_choices]
input_ids = torch.tensor(encoded_choices).unsqueeze(0)
mc_token_ids = torch.tensor([eos_token_location])
outputs = model(input_ids, mc_token_ids=mc_token_ids)
lm_prediction_scores, mc_prediction_scores = outputs[:2]
```
Thank you, | 04-07-2020 17:46:39 | 04-07-2020 17:46:39 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>I see that the issue is closed; what is the answer?<|||||>We are trying to move such questions more and more to the forum because they get more traction there, and the library's issues should primarily be used for "real" issues.
It would be awesome if you guys could post the question on
https://discuss.huggingface.co/<|||||>I have the same question. |
transformers | 3,679 | closed | Update doc for {Summarization,Translation}Pipeline and other tweaks | 04-07-2020 17:45:04 | 04-07-2020 17:45:04 | @sshleifer It's not 100% needed to have model cards but definitely encouraged as they unlock new features e.g. here, discoverability of the models in the model hub, etc.
Yeah, adding an item to the checklist (I guess in `templates/adding_a_new_model`) would be nice, do you want to do it? |
|
transformers | 3,678 | closed | run_generation.py with empty input | Hi,
I would like to generate text following some context, but also from scratch, as seen in *Write with Transformer*. Both of the following:
* `python run_generation_callable.py --model_type=gpt2 --model_name_or_path=gpt2` and feeding an empty prompt to the `Model prompt >>> `
* or changing the line
`prompt_text = args.prompt if args.prompt else input("Model prompt >>> ")`
in the source to
`prompt_text = args.prompt`
and using
`python run_generation_callable.py --model_type=gpt2 --model_name_or_path=gpt2 --prompt ""`
result in a `RuntimeError: cannot reshape tensor of 0 elements into shape [-1, 0] because the unspecified dimension size -1 can be any value and is ambiguous` (tell me if you need the complete stack trace or more detail for the reproduction).
Thanks in advance for any tips, workarounds or directions on achieving this! | 04-07-2020 16:33:47 | 04-07-2020 16:33:47 | Hi @r0levrai,
sorry for responding so late. Thanks for spotting the error. Once the linked PR is merged generation with an empty prompt should be fine!
Also note that there is a generation pipeline now which you could use as follows (once the PR is merged):
```python
from transformers import pipeline
generator = pipeline("text-generation")
generator("") # empty prompt
```<|||||>This is good news, thanks! |
transformers | 3,677 | closed | Does anyone have the XLNet (and ALBERT) NER performance on CONLL-2003 | # ❓ Questions & Help
Most transformer models in the library can be fine-tuned for NER tasks.
In https://huggingface.co/transformers/v2.2.0/examples.html#named-entity-recognition, the performances of the roberta, bert, and distilbert have been reported.
However, I did not find performances achieved by other models like XLNet.
By any chance, has anyone experimented with other models and can report the performance of models like XLNet and ALBERT on CoNLL-2003?
| 04-07-2020 15:57:47 | 04-07-2020 15:57:47 | With ALBERT v1:
https://github.com/huggingface/transformers/pull/1683#issuecomment-556001607
I got better results using the recent integrated ELECTRA model :)
<|||||>@stefan-it j/w but do you know why bert-lg fine-tuned is listed as achieving 92.8 f1 on conll03 in [this paper?](https://paperswithcode.com/sota/named-entity-recognition-ner-on-conll-2003) Noticing it's over a pt higher f1 than Transformers's version. <|||||>@stefan-it
Your albert result is very close to what I got.
By any chance, did you experiment with the XLNet?
My problem is:
In https://github.com/stevezheng23/xlnet_extension_tf, the author reported the ner performance as 0.9267. But I can only obtain a performance of 0.7626 with the same batch size and learning rate but longer training steps using the this library. I would like to confirm if the problem is my implementation but there is no baseline on XLNet.<|||||>@stefan-it Thank you for your contribution to the Electra model finetuned on CoNLL03. I see you shared the weights in this repository. Could you please share the license for these weights? I could not find a model card for it.<|||||>Hi @stefan-it !
I know that this issue is not about electra, but I have the same question regarding electra too :sweat_smile:
I ran NER CoNLL-2003 training with electra small like this:
`python examples/ner/run_ner.py --model_type electra --model_name_or_path google/electra-small-discriminator --do_train --do_eval --do_predict --data_dir /home/corpora/ner/conll2003 --labels /home/corpora/ner/conll2003/labels.txt --num_train_epochs 6 --per_gpu_train_batch_size 256 --per_gpu_eval_batch_size 256 --max_seq_length 128 --output_dir /home/models/electra/conll2003 --evaluate_during_training --save_steps 1000 --logging_steps 1000 --overwrite_output_dir`
But got only 83.20% in F1. I know you ran electra small on the actual electra repository, but can you describe what you did in hugging face?<|||||>Hi @petulla , the problem with the BERT paper is, that they've used document context for each token during evaluation. See e.g. this [discussion](https://github.com/allenai/allennlp/pull/2067#issuecomment-443961816) in the AllenNLP repo :)<|||||>@guillaume-be I normally use MIT for all trained models (so should be fine for the fine-tuned ELECTRA model as well) :)<|||||>@pvcastro Try to use a smaller batch size, for example with the configuration:
```json
{
"data_dir": "./data_en",
"labels": "./data_en/labels.txt",
"model_name_or_path": "google/electra-small-discriminator",
"output_dir": "electra-small-en-1",
"max_seq_length": 128,
"num_train_epochs": 5,
"per_gpu_train_batch_size": 16,
"save_steps": 878,
"seed": 1,
"do_train": true,
"do_eval": true,
"do_predict": true,
"--fp16": true
}
```
You should be able to reach ~88.35% on the test set and 92.13% on development set.
I just used the latest `master` version of Transformers + saved the JSON-based configuration as `config-electra-small.json`, then you can run training via `python3 run_ner.py config-electra-small.json` :)<|||||>Hi @stefan-it , thanks for the input! I ran this config and got a pretty similar result. I had no idea that a larger batch size had this impact. I get 85% for bs 128. Is this for all transformer models, or for electra only? Or for sequence labeling tasks, perhaps? Do you know why this happens? <|||||>For Transformer-based model on sequence labeling tasks batch sizes of 8, 16 or 32 are a good choice for hyper-parameter search. So e.g. the [BERT paper](https://arxiv.org/abs/1810.04805) mentions [16. 32] for their experiments (see appendix A.3).
And there's an interesting paper from Reimers and Gurevych about hyper-parameters for lstm-based networks for sequence labeling, but in my opinion their recommendations for batch sizes are also valid for Transformer-based models: ["Optimal Hyperparameters for Deep LSTM-Networks for Sequence Labeling Tasks"](https://arxiv.org/abs/1707.06799), see section 7.10 :)<|||||>Thanks @stefan-it , I'll dig into these references! <|||||>Hi @stefan-it, it is quite strange that I got F1 0 for conll-03 dataset with **electra-large-discriminator**, while 0.88 and 0.91 with **small** and **base** models. The other settings are the same for the three models. Have you encountered this?<|||||>Hi @lkluo , I can't remember fine-tuning instabilities with the ELECTRA models... but could you paste the fine-tuning configuration that you've used 🤔
<|||||>> Hi @lkluo , I can't remember fine-tuning instabilities with the ELECTRA models... but could you paste the fine-tuning configuration that you've used 🤔
Thanks @stefan-it. I think I may have figured it out after I checked the loss, which converges slowly and whose value remains at 9.x after 5 epochs. I then lowered the learning rate from the default **5e-5** to 10 times smaller, i.e. **5e-6**, and then I can get a 0.92 score.
I also fine-tuned a **BERT-large** model using the default learning rate, and I am able to get a reasonable F1 score. Is there anything special about the **ELECTRA** large settings? Does batch size matter? It is limited to 12 due to GPU memory in my case. I saw somewhere that people suggest a larger batch size, a smaller learning rate and a longer training duration to reproduce good results. Could you share your configuration of **ELECTRA-LARGE**? Thanks a lot!
p.s., my configuration:
```json
{
"data_dir": "",
"labels": "",
"model_name_or_path": "google/electra-large-discriminator",
"output_dir": "",
"max_seq_length": 128,
"num_train_epochs": 5,
"per_device_train_batch_size": 12,
"save_steps": 750,
"seed": 1,
"do_train": true,
"do_eval": true,
"do_predict": true
}
```
|
transformers | 3,676 | closed | gpt2-medium fine-tuned model.generate joins words and sentences together without space or newline | Hi,
I have successfully fine-tuned and used a gpt2 model to generate text. My training corpus consists of short sentences (3-5 words) and longer ones (10-15 words), all separated by a newline character, sometimes ending with [ . ! ? ] and sometimes not.
```python
outputs = model.generate(
    input_ids=input_ids, max_length=max_length, temperature=temperature,
    repetition_penalty=repetition_penalty,
    bos_token_id=tokenizer.bos_token_id,
    top_k=top_k,
    top_p=top_p
)
ret = tokenizer.decode(outputs[0], skip_special_tokens=True)
```
Then I fine-tuned a gpt2-medium model. The training corpus was slightly different, but structured the same as described above.
I had to use --fp16 and --block_size=512 to fit in the GPU memory limits.
The result:
Using the fine-tuned gpt2-medium model, I am experiencing a couple of issues:
1. I get frequent issues with lines or words stuck, without any new line or space:
example:
**word1Word2Word3**
or:
**line 1 with some words!Another line with some words™️Next line...**
2. I get a 'warning':
Setting `pad_token_id` to 50256 (first `eos_token_id`) to generate sequence
I've tried playing with the decode parameters with no luck:
`ret = tokenizer.decode(outputs[0], skip_special_tokens=False, clean_up_tokenization_spaces=False)`
Help appreciated,
thanks in advance,
Albert
| 04-07-2020 15:14:23 | 04-07-2020 15:14:23 | I can provide a google colab notebook, the fine-tuned model, the training data (or whatever needed) to show the issue. I hope it's a tokenizer issue rather than a training fault on my side, since re-training would cost a lot of cash (the training data is quite big - ~22 million lines)
<|||||>I'm suspecting `fp16` to be the reason. Not sure whether this is supported for `generation` yet. @sshleifer - do you know more about this maybe?<|||||>A colab notebook would be great though. Or even better would be if you could upload your model to the community models :-)
This would make it very easy for us to find the bug:
https://huggingface.co/transformers/model_sharing.html<|||||>Yeah some sort of sharing to diagnose. I don't think fp16 is the problem. What does `outputs[0]` look like? <|||||>hi,
thanks for your reply,
[https://huggingface.co/albertbn/gpt2-medium-finetuned-ads-fp16-blocksz512](url)
the above is the model,
you can re-create the error using the following:
```
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel
tokenizer = GPT2Tokenizer.from_pretrained(path)
model = GPT2LMHeadModel.from_pretrained(path)
if torch.cuda.is_available():
model.to('cuda')
input_context = '''Find a plumber nearby!'''
input_ids = torch.tensor(tokenizer.encode(input_context)).unsqueeze(0)
if torch.cuda.is_available():
input_ids = input_ids.cuda()
max_length=150; temperature=.175; repetition_penalty=1.3; top_k=70; top_p=0.67
outputs = model.generate(
input_ids=input_ids, max_length=max_length, temperature=temperature, repetition_penalty=repetition_penalty,
bos_token_id=tokenizer.bos_token_id,
top_k=top_k,
top_p=top_p
)
ret = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(ret)
# Setting `pad_token_id` to 50256 (first `eos_token_id`) to generate sequence
# Find a plumber nearby!
# Plumbing Services in Wellington NZ.
# 24/7 Emergency Plumbers Near You, Call Now For Fast Service or Repair of Your Plumbing!Need to Fix Leaking Pipes?NZ's #1 Gasfitter™️Call Us Today for Expert Advice & The Best Service!Get the Right Gasfitting Solution for your Home. Get It Installed Now - Free Quote Here !{KeyWord:Gas Fitting Installation}Quick And Efficient Installers
```
you can see the issue (lines stuck without \n) in the last line, starting with: 24/7 Emergency Plumbers...
thank you in advance,
Albert<|||||>> Yeah some sort of sharing to diagnose. I don't think fp16 is the problem. What does `outputs[0]` look like?
outputs[0] for the example I've posted looks like this:
```
tensor([16742, 257, 458, 4494, 6716, 0, 198, 3646, 28149, 6168,
287, 30597, 26905, 13, 198, 1731, 14, 22, 18154, 1345,
17024, 20173, 921, 11, 4889, 2735, 1114, 12549, 4809, 393,
28912, 286, 3406, 1345, 28149, 0, 23037, 284, 13268, 1004,
868, 350, 18636, 30, 48261, 37371, 338, 1303, 16, 14345,
69, 1967, 8151, 37929, 14134, 4021, 6288, 329, 25516, 42708,
1222, 383, 6705, 4809, 0, 3855, 262, 6498, 14345, 32232,
28186, 329, 534, 5995, 13, 3497, 632, 2262, 4262, 2735,
532, 3232, 19879, 3423, 5145, 90, 9218, 26449, 25, 39699,
376, 2535, 32588, 92, 21063, 843, 412, 5632, 15545, 364],
device='cuda:0')
```
there is no white space separating 0, 23037 (23037 is the only index in the output: Plumbing**!Need** )
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Regarding the second question
> 2. I get a 'warning':
> Setting `pad_token_id` to 50256 (first `eos_token_id`) to generate sequence
It's explained [here](https://jaketae.github.io/study/gpt2/#setup): "For open-end generation, HuggingFace will set the padding token ID to be equal to the end-of-sentence token ID". Code is here: https://github.com/huggingface/transformers/blob/b880508440f43f80e35a78ccd2a32f3bde91cb23/src/transformers/generation_utils.py#L410-L414 |
transformers | 3,675 | closed | Wrong tokenizer configuration in sentiment-analysis pipeline | # 🐛 Bug
## Information
When following the Pipelines Notebook 03-pipelines.ipynb, the Sentiment Analysis task gives the wrong result ("NEGATIVE") for the example 'Such a nice weather outside !'.
```
nlp_sentence_classif = pipeline('sentiment-analysis')
nlp_sentence_classif('Such a nice weather outside !')
[{'label': 'NEGATIVE', 'score': 0.97545063}]
```
Probable reason: the pipelines.py configuration uses an uncased model but a cased tokenizer. The tokenizer should probably be 'distilbert-base-uncased'.
```
"sentiment-analysis": {
"impl": TextClassificationPipeline,
"tf": TFAutoModelForSequenceClassification if is_tf_available() else None,
"pt": AutoModelForSequenceClassification if is_torch_available() else None,
"default": {
"model": {
"pt": "distilbert-base-uncased-finetuned-sst-2-english",
"tf": "distilbert-base-uncased-finetuned-sst-2-english",
},
"config": "distilbert-base-uncased-finetuned-sst-2-english",
"tokenizer": "distilbert-base-cased",
},
},
```
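A workaround sketch until the default mapping is fixed: pass the matching tokenizer explicitly (names taken from the configuration above):
```python
from transformers import pipeline

nlp = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    tokenizer="distilbert-base-uncased",
)
print(nlp("Such a nice weather outside !"))
```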
Model I am using (Bert, XLNet ...): distilbert-base-uncased-finetuned-sst-2-english (preconfigured sentiment-analysis pipeline)
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: 03-pipelines.ipynb
* [ ] my own modified scripts: (give details below)
## Expected behavior
Example sentence should be labeled as POSITIVE.
## Environment info
- `transformers` version: 2.8.0
- Platform: Linux Mint
- Python version: 3.7
- PyTorch version (GPU?): 1.4.0 (no)
- Tensorflow version (GPU?): 2.1.0 (no)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
| 04-07-2020 14:41:36 | 04-07-2020 14:41:36 | I'm having the same issue<|||||>> I'm having the same issue
I got this working by using the following code:
```python
from transformers import pipeline
import transformers

# Allocate a pipeline for sentiment-analysis, then swap in the matching uncased tokenizer
nlp = pipeline("sentiment-analysis")
nlp.tokenizer = transformers.DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
```
Thanks for pointing me in the right direction LysandreJik!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,674 | closed | [Examples, Benchmark] Improve benchmark utils | This PR improves the `benchmarks.py` file a bit:
- "results, memory" are renamed to "time, memory"
- all print statements can optionally be saved in a log file
- the CSV file output format is improved
- better naming in general | 04-07-2020 12:58:28 | 04-07-2020 12:58:28 | |
transformers | 3,673 | closed | TypeError while loading the model built from scratch using transformer | # 🐛 Bug
> `TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType`
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-33-cd040b700e71> in <module>()
3 from transformers import BertTokenizer, AdamW, BertForNextSentencePrediction
4
----> 5 tokenizer = BertTokenizer.from_pretrained('/content/drive/My Drive/Colab Notebooks/data/test/')
3 frames
/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py in from_pretrained(cls, *inputs, **kwargs)
391
392 """
--> 393 return cls._from_pretrained(*inputs, **kwargs)
394
395 @classmethod
/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py in _from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
542 # Instantiate tokenizer.
543 try:
--> 544 tokenizer = cls(*init_inputs, **init_kwargs)
545 except OSError:
546 raise OSError(
/usr/local/lib/python3.6/dist-packages/transformers/tokenization_bert.py in __init__(self, vocab_file, do_lower_case, do_basic_tokenize, never_split, unk_token, sep_token, pad_token, cls_token, mask_token, tokenize_chinese_chars, **kwargs)
186 self.max_len_sentences_pair = self.max_len - 3 # take into account special tokens
187
--> 188 if not os.path.isfile(vocab_file):
189 raise ValueError(
190 "Can't find a vocabulary file at path '{}'. To load the vocabulary from a Google pretrained "
/usr/lib/python3.6/genericpath.py in isfile(path)
28 """Test whether a path is a regular file"""
29 try:
---> 30 st = os.stat(path)
31 except OSError:
32 return False
TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType
## Information
I am trying to fine-tune a model that I built from scratch using transformers. When I try to load the tokenizer from the model that was just made, it gives a TypeError.
Model I am using (Bert, XLNet ...): Model is built from scratch using https://huggingface.co/blog/how-to-train
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
```
import torch
import transformers
from transformers import BertTokenizer, AdamW, BertForNextSentencePrediction
tokenizer = BertTokenizer.from_pretrained('/content/drive/My Drive/Colab Notebooks/data/model/')
```
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: Google Colab
- Python version: 3.x
- PyTorch version (GPU?):'1.4.0'
- Tensorflow version (GPU?):'2.8.0'
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 04-07-2020 10:03:57 | 04-07-2020 10:03:57 | looks like it's not able to find vocabulary file. Make sure there is a vocab.txt file for bert. Otherwise, you can simply load it by `tokenizer = BertTokenizer(vocab_file="path to vocab", and configs)`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,672 | closed | How to train BART text summarization with your own data? | # ❓ Questions & Help
## Details
This is actually a two-part question. I have noticed that in [1](https://github.com/huggingface/transformers/blob/master/examples/summarization/bart/run_train.sh) instructions have been given to train with the CNN/DM data. How would we train it with our own data? Should the file format be .story?
And secondly, how exactly do we handle the Python path in Google Colab?

I have tried in both these ways and failed.
Link to this question in SO: [2] (https://stackoverflow.com/questions/61058171/no-module-named-transformer-base/61070453#61070453) | 04-07-2020 09:09:09 | 04-07-2020 09:09:09 | You can try to move transformer_base file to the same location of run_bart_sum.py.<|||||>Hi,
1. As written in `README.md`
> "To use your own data, copy that files format. Each article to be summarized is on its own line."
I think you should put your files in the cnn_dm folder, renamed `train.source`, `train.target`, `test.source`, `test.target`, `val.source`, `val.target`, where each file has respectively one source text or one target text per line (see the short sketch after the export commands below).
2. You are not using the script `run_train.sh`, as suggested in the `README.md`. In the `run_train.sh` there are a series of export commands that you missed.
The last one should fix your issue.
Hope it helps.
```bash
export OUTPUT_DIR_NAME=bart_sum
export CURRENT_DIR=${PWD}
export OUTPUT_DIR=${CURRENT_DIR}/${OUTPUT_DIR_NAME}
# Make output directory if it doesn't exist
mkdir -p $OUTPUT_DIR
# Add parent directory to python path to access transformer_base.py
export PYTHONPATH="../../":"${PYTHONPATH}"
```
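For item 1 above, a tiny runnable sketch of the expected parallel layout (file names follow the README; the contents are placeholders):
```python
import os

os.makedirs("cnn_dm", exist_ok=True)
# line N of *.source holds an article, line N of *.target holds its summary
with open("cnn_dm/train.source", "w") as src, open("cnn_dm/train.target", "w") as tgt:
    src.write("Full text of the first article to summarize...\n")
    tgt.write("Short reference summary of the first article\n")
```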
<|||||>Closing, @teelinsan 's answer is correct. |
transformers | 3,671 | closed | Loading pre-trained ELECTRA checkpoint to HuggingFace | # ❓ Questions & Help
Hello everyone!
I have been struggling with the HuggingFace interface for loading an ELECTRA model via the transformers.TFElectraModel class. The TF version of ElectraModel didn't manage to restore the checkpoint from the official Google Research implementation (they save only .ckpt files), failing with this error:
```
NotImplementedError: Weights may only be loaded based on topology into Models when loading TensorFlow-formatted weights (got by_name=True to load_weights).
```
However, the normal ElectraModel.from_pretrained() procedure managed to load my model, writing this to the stdout:
```
Skipping discriminator_predictions/dense/bias ['discriminator_predictions', 'dense', 'bias'] 'ElectraModel' object has no attribute 'discriminator_predictions'
Skipping discriminator_predictions/dense/bias/adam_m ['discriminator_predictions', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'discriminator_predictions'
Skipping discriminator_predictions/dense/bias/adam_v ['discriminator_predictions', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'discriminator_predictions'
Skipping discriminator_predictions/dense/kernel ['discriminator_predictions', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'discriminator_predictions'
Skipping discriminator_predictions/dense/kernel/adam_m ['discriminator_predictions', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'discriminator_predictions'
Skipping discriminator_predictions/dense/kernel/adam_v ['discriminator_predictions', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'discriminator_predictions'
Skipping discriminator_predictions/dense_1/bias ['discriminator_predictions', 'dense_prediction', 'bias'] 'ElectraModel' object has no attribute 'discriminator_predictions'
Skipping discriminator_predictions/dense_1/bias/adam_m ['discriminator_predictions', 'dense_prediction', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'discriminator_predictions'
Skipping discriminator_predictions/dense_1/bias/adam_v ['discriminator_predictions', 'dense_prediction', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'discriminator_predictions'
Skipping discriminator_predictions/dense_1/kernel ['discriminator_predictions', 'dense_prediction', 'kernel'] 'ElectraModel' object has no attribute 'discriminator_predictions'
Skipping discriminator_predictions/dense_1/kernel/adam_m ['discriminator_predictions', 'dense_prediction', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'discriminator_predictions'
Skipping discriminator_predictions/dense_1/kernel/adam_v ['discriminator_predictions', 'dense_prediction', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'discriminator_predictions'
Skipping electra/embeddings/LayerNorm/beta ['electra', 'embeddings', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/embeddings/LayerNorm/beta/adam_m ['electra', 'embeddings', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/embeddings/LayerNorm/beta/adam_v ['electra', 'embeddings', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/embeddings/LayerNorm/gamma ['electra', 'embeddings', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/embeddings/LayerNorm/gamma/adam_m ['electra', 'embeddings', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/embeddings/LayerNorm/gamma/adam_v ['electra', 'embeddings', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/embeddings/position_embeddings ['electra', 'embeddings', 'position_embeddings'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/embeddings/position_embeddings/adam_m ['electra', 'embeddings', 'position_embeddings', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/embeddings/position_embeddings/adam_v ['electra', 'embeddings', 'position_embeddings', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/embeddings/token_type_embeddings ['electra', 'embeddings', 'token_type_embeddings'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/embeddings/token_type_embeddings/adam_m ['electra', 'embeddings', 'token_type_embeddings', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/embeddings/token_type_embeddings/adam_v ['electra', 'embeddings', 'token_type_embeddings', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/embeddings/word_embeddings ['electra', 'embeddings', 'word_embeddings'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/embeddings/word_embeddings/adam_m ['electra', 'embeddings', 'word_embeddings', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/embeddings/word_embeddings/adam_v ['electra', 'embeddings', 'word_embeddings', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_0/attention/output/LayerNorm/beta ['electra', 'encoder', 'layer_0', 'attention', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_0/attention/output/LayerNorm/beta/adam_m ['electra', 'encoder', 'layer_0', 'attention', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_0/attention/output/LayerNorm/beta/adam_v ['electra', 'encoder', 'layer_0', 'attention', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_0/attention/output/LayerNorm/gamma ['electra', 'encoder', 'layer_0', 'attention', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_0/attention/output/LayerNorm/gamma/adam_m ['electra', 'encoder', 'layer_0', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_0/attention/output/LayerNorm/gamma/adam_v ['electra', 'encoder', 'layer_0', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_0/attention/output/dense/bias ['electra', 'encoder', 'layer_0', 'attention', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_0/attention/output/dense/bias/adam_m ['electra', 'encoder', 'layer_0', 'attention', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_0/attention/output/dense/bias/adam_v ['electra', 'encoder', 'layer_0', 'attention', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_0/attention/output/dense/kernel ['electra', 'encoder', 'layer_0', 'attention', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_0/attention/output/dense/kernel/adam_m ['electra', 'encoder', 'layer_0', 'attention', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_0/attention/output/dense/kernel/adam_v ['electra', 'encoder', 'layer_0', 'attention', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_0/attention/self/key/bias ['electra', 'encoder', 'layer_0', 'attention', 'self', 'key', 'bias'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_0/attention/self/key/bias/adam_m ['electra', 'encoder', 'layer_0', 'attention', 'self', 'key', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_0/attention/self/key/bias/adam_v ['electra', 'encoder', 'layer_0', 'attention', 'self', 'key', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_0/attention/self/key/kernel ['electra', 'encoder', 'layer_0', 'attention', 'self', 'key', 'kernel'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_0/attention/self/key/kernel/adam_m ['electra', 'encoder', 'layer_0', 'attention', 'self', 'key', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_0/attention/self/key/kernel/adam_v ['electra', 'encoder', 'layer_0', 'attention', 'self', 'key', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_0/attention/self/query/bias ['electra', 'encoder', 'layer_0', 'attention', 'self', 'query', 'bias'] 'ElectraModel' object has no attribute 'electra'
[… identical "Skipping … 'ElectraModel' object has no attribute 'electra'" lines follow for every remaining layer_0 weight and for all attention, intermediate, and output weights (including their adam_m/adam_v optimizer slots) of layers 1 through 4, 10, and 11 …]
Skipping electra/encoder/layer_5/attention/output/LayerNorm/beta ['electra', 'encoder', 'layer_5', 'attention', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/attention/output/LayerNorm/beta/adam_m ['electra', 'encoder', 'layer_5', 'attention', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/attention/output/LayerNorm/beta/adam_v ['electra', 'encoder', 'layer_5', 'attention', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/attention/output/LayerNorm/gamma ['electra', 'encoder', 'layer_5', 'attention', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/attention/output/LayerNorm/gamma/adam_m ['electra', 'encoder', 'layer_5', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/attention/output/LayerNorm/gamma/adam_v ['electra', 'encoder', 'layer_5', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/attention/output/dense/bias ['electra', 'encoder', 'layer_5', 'attention', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/attention/output/dense/bias/adam_m ['electra', 'encoder', 'layer_5', 'attention', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/attention/output/dense/bias/adam_v ['electra', 'encoder', 'layer_5', 'attention', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/attention/output/dense/kernel ['electra', 'encoder', 'layer_5', 'attention', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/attention/output/dense/kernel/adam_m ['electra', 'encoder', 'layer_5', 'attention', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/attention/output/dense/kernel/adam_v ['electra', 'encoder', 'layer_5', 'attention', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/attention/self/key/bias ['electra', 'encoder', 'layer_5', 'attention', 'self', 'key', 'bias'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/attention/self/key/bias/adam_m ['electra', 'encoder', 'layer_5', 'attention', 'self', 'key', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/attention/self/key/bias/adam_v ['electra', 'encoder', 'layer_5', 'attention', 'self', 'key', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/attention/self/key/kernel ['electra', 'encoder', 'layer_5', 'attention', 'self', 'key', 'kernel'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/attention/self/key/kernel/adam_m ['electra', 'encoder', 'layer_5', 'attention', 'self', 'key', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/attention/self/key/kernel/adam_v ['electra', 'encoder', 'layer_5', 'attention', 'self', 'key', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/attention/self/query/bias ['electra', 'encoder', 'layer_5', 'attention', 'self', 'query', 'bias'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/attention/self/query/bias/adam_m ['electra', 'encoder', 'layer_5', 'attention', 'self', 'query', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/attention/self/query/bias/adam_v ['electra', 'encoder', 'layer_5', 'attention', 'self', 'query', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/attention/self/query/kernel ['electra', 'encoder', 'layer_5', 'attention', 'self', 'query', 'kernel'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/attention/self/query/kernel/adam_m ['electra', 'encoder', 'layer_5', 'attention', 'self', 'query', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/attention/self/query/kernel/adam_v ['electra', 'encoder', 'layer_5', 'attention', 'self', 'query', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/attention/self/value/bias ['electra', 'encoder', 'layer_5', 'attention', 'self', 'value', 'bias'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/attention/self/value/bias/adam_m ['electra', 'encoder', 'layer_5', 'attention', 'self', 'value', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/attention/self/value/bias/adam_v ['electra', 'encoder', 'layer_5', 'attention', 'self', 'value', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/attention/self/value/kernel ['electra', 'encoder', 'layer_5', 'attention', 'self', 'value', 'kernel'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/attention/self/value/kernel/adam_m ['electra', 'encoder', 'layer_5', 'attention', 'self', 'value', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/attention/self/value/kernel/adam_v ['electra', 'encoder', 'layer_5', 'attention', 'self', 'value', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/intermediate/dense/bias ['electra', 'encoder', 'layer_5', 'intermediate', 'dense', 'bias'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/intermediate/dense/bias/adam_m ['electra', 'encoder', 'layer_5', 'intermediate', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/intermediate/dense/bias/adam_v ['electra', 'encoder', 'layer_5', 'intermediate', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/intermediate/dense/kernel ['electra', 'encoder', 'layer_5', 'intermediate', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/intermediate/dense/kernel/adam_m ['electra', 'encoder', 'layer_5', 'intermediate', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/intermediate/dense/kernel/adam_v ['electra', 'encoder', 'layer_5', 'intermediate', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/output/LayerNorm/beta ['electra', 'encoder', 'layer_5', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/output/LayerNorm/beta/adam_m ['electra', 'encoder', 'layer_5', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/output/LayerNorm/beta/adam_v ['electra', 'encoder', 'layer_5', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/output/LayerNorm/gamma ['electra', 'encoder', 'layer_5', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/output/LayerNorm/gamma/adam_m ['electra', 'encoder', 'layer_5', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/output/LayerNorm/gamma/adam_v ['electra', 'encoder', 'layer_5', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/output/dense/bias ['electra', 'encoder', 'layer_5', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/output/dense/bias/adam_m ['electra', 'encoder', 'layer_5', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/output/dense/bias/adam_v ['electra', 'encoder', 'layer_5', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/output/dense/kernel ['electra', 'encoder', 'layer_5', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/output/dense/kernel/adam_m ['electra', 'encoder', 'layer_5', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_5/output/dense/kernel/adam_v ['electra', 'encoder', 'layer_5', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/attention/output/LayerNorm/beta ['electra', 'encoder', 'layer_6', 'attention', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/attention/output/LayerNorm/beta/adam_m ['electra', 'encoder', 'layer_6', 'attention', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/attention/output/LayerNorm/beta/adam_v ['electra', 'encoder', 'layer_6', 'attention', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/attention/output/LayerNorm/gamma ['electra', 'encoder', 'layer_6', 'attention', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/attention/output/LayerNorm/gamma/adam_m ['electra', 'encoder', 'layer_6', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/attention/output/LayerNorm/gamma/adam_v ['electra', 'encoder', 'layer_6', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/attention/output/dense/bias ['electra', 'encoder', 'layer_6', 'attention', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/attention/output/dense/bias/adam_m ['electra', 'encoder', 'layer_6', 'attention', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/attention/output/dense/bias/adam_v ['electra', 'encoder', 'layer_6', 'attention', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/attention/output/dense/kernel ['electra', 'encoder', 'layer_6', 'attention', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/attention/output/dense/kernel/adam_m ['electra', 'encoder', 'layer_6', 'attention', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/attention/output/dense/kernel/adam_v ['electra', 'encoder', 'layer_6', 'attention', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/attention/self/key/bias ['electra', 'encoder', 'layer_6', 'attention', 'self', 'key', 'bias'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/attention/self/key/bias/adam_m ['electra', 'encoder', 'layer_6', 'attention', 'self', 'key', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/attention/self/key/bias/adam_v ['electra', 'encoder', 'layer_6', 'attention', 'self', 'key', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/attention/self/key/kernel ['electra', 'encoder', 'layer_6', 'attention', 'self', 'key', 'kernel'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/attention/self/key/kernel/adam_m ['electra', 'encoder', 'layer_6', 'attention', 'self', 'key', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/attention/self/key/kernel/adam_v ['electra', 'encoder', 'layer_6', 'attention', 'self', 'key', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/attention/self/query/bias ['electra', 'encoder', 'layer_6', 'attention', 'self', 'query', 'bias'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/attention/self/query/bias/adam_m ['electra', 'encoder', 'layer_6', 'attention', 'self', 'query', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/attention/self/query/bias/adam_v ['electra', 'encoder', 'layer_6', 'attention', 'self', 'query', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/attention/self/query/kernel ['electra', 'encoder', 'layer_6', 'attention', 'self', 'query', 'kernel'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/attention/self/query/kernel/adam_m ['electra', 'encoder', 'layer_6', 'attention', 'self', 'query', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/attention/self/query/kernel/adam_v ['electra', 'encoder', 'layer_6', 'attention', 'self', 'query', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/attention/self/value/bias ['electra', 'encoder', 'layer_6', 'attention', 'self', 'value', 'bias'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/attention/self/value/bias/adam_m ['electra', 'encoder', 'layer_6', 'attention', 'self', 'value', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/attention/self/value/bias/adam_v ['electra', 'encoder', 'layer_6', 'attention', 'self', 'value', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/attention/self/value/kernel ['electra', 'encoder', 'layer_6', 'attention', 'self', 'value', 'kernel'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/attention/self/value/kernel/adam_m ['electra', 'encoder', 'layer_6', 'attention', 'self', 'value', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/attention/self/value/kernel/adam_v ['electra', 'encoder', 'layer_6', 'attention', 'self', 'value', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/intermediate/dense/bias ['electra', 'encoder', 'layer_6', 'intermediate', 'dense', 'bias'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/intermediate/dense/bias/adam_m ['electra', 'encoder', 'layer_6', 'intermediate', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/intermediate/dense/bias/adam_v ['electra', 'encoder', 'layer_6', 'intermediate', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/intermediate/dense/kernel ['electra', 'encoder', 'layer_6', 'intermediate', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/intermediate/dense/kernel/adam_m ['electra', 'encoder', 'layer_6', 'intermediate', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/intermediate/dense/kernel/adam_v ['electra', 'encoder', 'layer_6', 'intermediate', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/output/LayerNorm/beta ['electra', 'encoder', 'layer_6', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/output/LayerNorm/beta/adam_m ['electra', 'encoder', 'layer_6', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/output/LayerNorm/beta/adam_v ['electra', 'encoder', 'layer_6', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/output/LayerNorm/gamma ['electra', 'encoder', 'layer_6', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/output/LayerNorm/gamma/adam_m ['electra', 'encoder', 'layer_6', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/output/LayerNorm/gamma/adam_v ['electra', 'encoder', 'layer_6', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/output/dense/bias ['electra', 'encoder', 'layer_6', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/output/dense/bias/adam_m ['electra', 'encoder', 'layer_6', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/output/dense/bias/adam_v ['electra', 'encoder', 'layer_6', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/output/dense/kernel ['electra', 'encoder', 'layer_6', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/output/dense/kernel/adam_m ['electra', 'encoder', 'layer_6', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_6/output/dense/kernel/adam_v ['electra', 'encoder', 'layer_6', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/attention/output/LayerNorm/beta ['electra', 'encoder', 'layer_7', 'attention', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/attention/output/LayerNorm/beta/adam_m ['electra', 'encoder', 'layer_7', 'attention', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/attention/output/LayerNorm/beta/adam_v ['electra', 'encoder', 'layer_7', 'attention', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/attention/output/LayerNorm/gamma ['electra', 'encoder', 'layer_7', 'attention', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/attention/output/LayerNorm/gamma/adam_m ['electra', 'encoder', 'layer_7', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/attention/output/LayerNorm/gamma/adam_v ['electra', 'encoder', 'layer_7', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/attention/output/dense/bias ['electra', 'encoder', 'layer_7', 'attention', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/attention/output/dense/bias/adam_m ['electra', 'encoder', 'layer_7', 'attention', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/attention/output/dense/bias/adam_v ['electra', 'encoder', 'layer_7', 'attention', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/attention/output/dense/kernel ['electra', 'encoder', 'layer_7', 'attention', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/attention/output/dense/kernel/adam_m ['electra', 'encoder', 'layer_7', 'attention', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/attention/output/dense/kernel/adam_v ['electra', 'encoder', 'layer_7', 'attention', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/attention/self/key/bias ['electra', 'encoder', 'layer_7', 'attention', 'self', 'key', 'bias'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/attention/self/key/bias/adam_m ['electra', 'encoder', 'layer_7', 'attention', 'self', 'key', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/attention/self/key/bias/adam_v ['electra', 'encoder', 'layer_7', 'attention', 'self', 'key', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/attention/self/key/kernel ['electra', 'encoder', 'layer_7', 'attention', 'self', 'key', 'kernel'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/attention/self/key/kernel/adam_m ['electra', 'encoder', 'layer_7', 'attention', 'self', 'key', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/attention/self/key/kernel/adam_v ['electra', 'encoder', 'layer_7', 'attention', 'self', 'key', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/attention/self/query/bias ['electra', 'encoder', 'layer_7', 'attention', 'self', 'query', 'bias'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/attention/self/query/bias/adam_m ['electra', 'encoder', 'layer_7', 'attention', 'self', 'query', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/attention/self/query/bias/adam_v ['electra', 'encoder', 'layer_7', 'attention', 'self', 'query', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/attention/self/query/kernel ['electra', 'encoder', 'layer_7', 'attention', 'self', 'query', 'kernel'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/attention/self/query/kernel/adam_m ['electra', 'encoder', 'layer_7', 'attention', 'self', 'query', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/attention/self/query/kernel/adam_v ['electra', 'encoder', 'layer_7', 'attention', 'self', 'query', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/attention/self/value/bias ['electra', 'encoder', 'layer_7', 'attention', 'self', 'value', 'bias'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/attention/self/value/bias/adam_m ['electra', 'encoder', 'layer_7', 'attention', 'self', 'value', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/attention/self/value/bias/adam_v ['electra', 'encoder', 'layer_7', 'attention', 'self', 'value', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/attention/self/value/kernel ['electra', 'encoder', 'layer_7', 'attention', 'self', 'value', 'kernel'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/attention/self/value/kernel/adam_m ['electra', 'encoder', 'layer_7', 'attention', 'self', 'value', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/attention/self/value/kernel/adam_v ['electra', 'encoder', 'layer_7', 'attention', 'self', 'value', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/intermediate/dense/bias ['electra', 'encoder', 'layer_7', 'intermediate', 'dense', 'bias'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/intermediate/dense/bias/adam_m ['electra', 'encoder', 'layer_7', 'intermediate', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/intermediate/dense/bias/adam_v ['electra', 'encoder', 'layer_7', 'intermediate', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/intermediate/dense/kernel ['electra', 'encoder', 'layer_7', 'intermediate', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/intermediate/dense/kernel/adam_m ['electra', 'encoder', 'layer_7', 'intermediate', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/intermediate/dense/kernel/adam_v ['electra', 'encoder', 'layer_7', 'intermediate', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/output/LayerNorm/beta ['electra', 'encoder', 'layer_7', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/output/LayerNorm/beta/adam_m ['electra', 'encoder', 'layer_7', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/output/LayerNorm/beta/adam_v ['electra', 'encoder', 'layer_7', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/output/LayerNorm/gamma ['electra', 'encoder', 'layer_7', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/output/LayerNorm/gamma/adam_m ['electra', 'encoder', 'layer_7', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/output/LayerNorm/gamma/adam_v ['electra', 'encoder', 'layer_7', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/output/dense/bias ['electra', 'encoder', 'layer_7', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/output/dense/bias/adam_m ['electra', 'encoder', 'layer_7', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/output/dense/bias/adam_v ['electra', 'encoder', 'layer_7', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/output/dense/kernel ['electra', 'encoder', 'layer_7', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/output/dense/kernel/adam_m ['electra', 'encoder', 'layer_7', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_7/output/dense/kernel/adam_v ['electra', 'encoder', 'layer_7', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/attention/output/LayerNorm/beta ['electra', 'encoder', 'layer_8', 'attention', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/attention/output/LayerNorm/beta/adam_m ['electra', 'encoder', 'layer_8', 'attention', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/attention/output/LayerNorm/beta/adam_v ['electra', 'encoder', 'layer_8', 'attention', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/attention/output/LayerNorm/gamma ['electra', 'encoder', 'layer_8', 'attention', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/attention/output/LayerNorm/gamma/adam_m ['electra', 'encoder', 'layer_8', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/attention/output/LayerNorm/gamma/adam_v ['electra', 'encoder', 'layer_8', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/attention/output/dense/bias ['electra', 'encoder', 'layer_8', 'attention', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/attention/output/dense/bias/adam_m ['electra', 'encoder', 'layer_8', 'attention', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/attention/output/dense/bias/adam_v ['electra', 'encoder', 'layer_8', 'attention', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/attention/output/dense/kernel ['electra', 'encoder', 'layer_8', 'attention', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/attention/output/dense/kernel/adam_m ['electra', 'encoder', 'layer_8', 'attention', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/attention/output/dense/kernel/adam_v ['electra', 'encoder', 'layer_8', 'attention', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/attention/self/key/bias ['electra', 'encoder', 'layer_8', 'attention', 'self', 'key', 'bias'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/attention/self/key/bias/adam_m ['electra', 'encoder', 'layer_8', 'attention', 'self', 'key', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/attention/self/key/bias/adam_v ['electra', 'encoder', 'layer_8', 'attention', 'self', 'key', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/attention/self/key/kernel ['electra', 'encoder', 'layer_8', 'attention', 'self', 'key', 'kernel'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/attention/self/key/kernel/adam_m ['electra', 'encoder', 'layer_8', 'attention', 'self', 'key', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/attention/self/key/kernel/adam_v ['electra', 'encoder', 'layer_8', 'attention', 'self', 'key', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/attention/self/query/bias ['electra', 'encoder', 'layer_8', 'attention', 'self', 'query', 'bias'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/attention/self/query/bias/adam_m ['electra', 'encoder', 'layer_8', 'attention', 'self', 'query', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/attention/self/query/bias/adam_v ['electra', 'encoder', 'layer_8', 'attention', 'self', 'query', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/attention/self/query/kernel ['electra', 'encoder', 'layer_8', 'attention', 'self', 'query', 'kernel'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/attention/self/query/kernel/adam_m ['electra', 'encoder', 'layer_8', 'attention', 'self', 'query', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/attention/self/query/kernel/adam_v ['electra', 'encoder', 'layer_8', 'attention', 'self', 'query', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/attention/self/value/bias ['electra', 'encoder', 'layer_8', 'attention', 'self', 'value', 'bias'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/attention/self/value/bias/adam_m ['electra', 'encoder', 'layer_8', 'attention', 'self', 'value', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/attention/self/value/bias/adam_v ['electra', 'encoder', 'layer_8', 'attention', 'self', 'value', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/attention/self/value/kernel ['electra', 'encoder', 'layer_8', 'attention', 'self', 'value', 'kernel'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/attention/self/value/kernel/adam_m ['electra', 'encoder', 'layer_8', 'attention', 'self', 'value', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/attention/self/value/kernel/adam_v ['electra', 'encoder', 'layer_8', 'attention', 'self', 'value', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/intermediate/dense/bias ['electra', 'encoder', 'layer_8', 'intermediate', 'dense', 'bias'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/intermediate/dense/bias/adam_m ['electra', 'encoder', 'layer_8', 'intermediate', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/intermediate/dense/bias/adam_v ['electra', 'encoder', 'layer_8', 'intermediate', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/intermediate/dense/kernel ['electra', 'encoder', 'layer_8', 'intermediate', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/intermediate/dense/kernel/adam_m ['electra', 'encoder', 'layer_8', 'intermediate', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/intermediate/dense/kernel/adam_v ['electra', 'encoder', 'layer_8', 'intermediate', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/output/LayerNorm/beta ['electra', 'encoder', 'layer_8', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/output/LayerNorm/beta/adam_m ['electra', 'encoder', 'layer_8', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/output/LayerNorm/beta/adam_v ['electra', 'encoder', 'layer_8', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/output/LayerNorm/gamma ['electra', 'encoder', 'layer_8', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/output/LayerNorm/gamma/adam_m ['electra', 'encoder', 'layer_8', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/output/LayerNorm/gamma/adam_v ['electra', 'encoder', 'layer_8', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/output/dense/bias ['electra', 'encoder', 'layer_8', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/output/dense/bias/adam_m ['electra', 'encoder', 'layer_8', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/output/dense/bias/adam_v ['electra', 'encoder', 'layer_8', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/output/dense/kernel ['electra', 'encoder', 'layer_8', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/output/dense/kernel/adam_m ['electra', 'encoder', 'layer_8', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_8/output/dense/kernel/adam_v ['electra', 'encoder', 'layer_8', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/attention/output/LayerNorm/beta ['electra', 'encoder', 'layer_9', 'attention', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/attention/output/LayerNorm/beta/adam_m ['electra', 'encoder', 'layer_9', 'attention', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/attention/output/LayerNorm/beta/adam_v ['electra', 'encoder', 'layer_9', 'attention', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/attention/output/LayerNorm/gamma ['electra', 'encoder', 'layer_9', 'attention', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/attention/output/LayerNorm/gamma/adam_m ['electra', 'encoder', 'layer_9', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/attention/output/LayerNorm/gamma/adam_v ['electra', 'encoder', 'layer_9', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/attention/output/dense/bias ['electra', 'encoder', 'layer_9', 'attention', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/attention/output/dense/bias/adam_m ['electra', 'encoder', 'layer_9', 'attention', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/attention/output/dense/bias/adam_v ['electra', 'encoder', 'layer_9', 'attention', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/attention/output/dense/kernel ['electra', 'encoder', 'layer_9', 'attention', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/attention/output/dense/kernel/adam_m ['electra', 'encoder', 'layer_9', 'attention', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/attention/output/dense/kernel/adam_v ['electra', 'encoder', 'layer_9', 'attention', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/attention/self/key/bias ['electra', 'encoder', 'layer_9', 'attention', 'self', 'key', 'bias'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/attention/self/key/bias/adam_m ['electra', 'encoder', 'layer_9', 'attention', 'self', 'key', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/attention/self/key/bias/adam_v ['electra', 'encoder', 'layer_9', 'attention', 'self', 'key', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/attention/self/key/kernel ['electra', 'encoder', 'layer_9', 'attention', 'self', 'key', 'kernel'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/attention/self/key/kernel/adam_m ['electra', 'encoder', 'layer_9', 'attention', 'self', 'key', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/attention/self/key/kernel/adam_v ['electra', 'encoder', 'layer_9', 'attention', 'self', 'key', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/attention/self/query/bias ['electra', 'encoder', 'layer_9', 'attention', 'self', 'query', 'bias'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/attention/self/query/bias/adam_m ['electra', 'encoder', 'layer_9', 'attention', 'self', 'query', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/attention/self/query/bias/adam_v ['electra', 'encoder', 'layer_9', 'attention', 'self', 'query', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/attention/self/query/kernel ['electra', 'encoder', 'layer_9', 'attention', 'self', 'query', 'kernel'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/attention/self/query/kernel/adam_m ['electra', 'encoder', 'layer_9', 'attention', 'self', 'query', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/attention/self/query/kernel/adam_v ['electra', 'encoder', 'layer_9', 'attention', 'self', 'query', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/attention/self/value/bias ['electra', 'encoder', 'layer_9', 'attention', 'self', 'value', 'bias'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/attention/self/value/bias/adam_m ['electra', 'encoder', 'layer_9', 'attention', 'self', 'value', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/attention/self/value/bias/adam_v ['electra', 'encoder', 'layer_9', 'attention', 'self', 'value', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/attention/self/value/kernel ['electra', 'encoder', 'layer_9', 'attention', 'self', 'value', 'kernel'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/attention/self/value/kernel/adam_m ['electra', 'encoder', 'layer_9', 'attention', 'self', 'value', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/attention/self/value/kernel/adam_v ['electra', 'encoder', 'layer_9', 'attention', 'self', 'value', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/intermediate/dense/bias ['electra', 'encoder', 'layer_9', 'intermediate', 'dense', 'bias'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/intermediate/dense/bias/adam_m ['electra', 'encoder', 'layer_9', 'intermediate', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/intermediate/dense/bias/adam_v ['electra', 'encoder', 'layer_9', 'intermediate', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/intermediate/dense/kernel ['electra', 'encoder', 'layer_9', 'intermediate', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/intermediate/dense/kernel/adam_m ['electra', 'encoder', 'layer_9', 'intermediate', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/intermediate/dense/kernel/adam_v ['electra', 'encoder', 'layer_9', 'intermediate', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/output/LayerNorm/beta ['electra', 'encoder', 'layer_9', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/output/LayerNorm/beta/adam_m ['electra', 'encoder', 'layer_9', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/output/LayerNorm/beta/adam_v ['electra', 'encoder', 'layer_9', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/output/LayerNorm/gamma ['electra', 'encoder', 'layer_9', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/output/LayerNorm/gamma/adam_m ['electra', 'encoder', 'layer_9', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/output/LayerNorm/gamma/adam_v ['electra', 'encoder', 'layer_9', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/output/dense/bias ['electra', 'encoder', 'layer_9', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/output/dense/bias/adam_m ['electra', 'encoder', 'layer_9', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/output/dense/bias/adam_v ['electra', 'encoder', 'layer_9', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/output/dense/kernel ['electra', 'encoder', 'layer_9', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/output/dense/kernel/adam_m ['electra', 'encoder', 'layer_9', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'electra'
Skipping electra/encoder/layer_9/output/dense/kernel/adam_v ['electra', 'encoder', 'layer_9', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'electra'
Skipping generator/embeddings_project/bias ['generator', 'embeddings_project', 'bias'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/embeddings_project/bias/adam_m ['generator', 'embeddings_project', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/embeddings_project/bias/adam_v ['generator', 'embeddings_project', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/embeddings_project/kernel ['generator', 'embeddings_project', 'kernel'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/embeddings_project/kernel/adam_m ['generator', 'embeddings_project', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/embeddings_project/kernel/adam_v ['generator', 'embeddings_project', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_0/attention/output/LayerNorm/beta ['generator', 'encoder', 'layer_0', 'attention', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_0/attention/output/LayerNorm/beta/adam_m ['generator', 'encoder', 'layer_0', 'attention', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_0/attention/output/LayerNorm/beta/adam_v ['generator', 'encoder', 'layer_0', 'attention', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_0/attention/output/LayerNorm/gamma ['generator', 'encoder', 'layer_0', 'attention', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_0/attention/output/LayerNorm/gamma/adam_m ['generator', 'encoder', 'layer_0', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_0/attention/output/LayerNorm/gamma/adam_v ['generator', 'encoder', 'layer_0', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_0/attention/output/dense/bias ['generator', 'encoder', 'layer_0', 'attention', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_0/attention/output/dense/bias/adam_m ['generator', 'encoder', 'layer_0', 'attention', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_0/attention/output/dense/bias/adam_v ['generator', 'encoder', 'layer_0', 'attention', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_0/attention/output/dense/kernel ['generator', 'encoder', 'layer_0', 'attention', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_0/attention/output/dense/kernel/adam_m ['generator', 'encoder', 'layer_0', 'attention', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_0/attention/output/dense/kernel/adam_v ['generator', 'encoder', 'layer_0', 'attention', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_0/attention/self/key/bias ['generator', 'encoder', 'layer_0', 'attention', 'self', 'key', 'bias'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_0/attention/self/key/bias/adam_m ['generator', 'encoder', 'layer_0', 'attention', 'self', 'key', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_0/attention/self/key/bias/adam_v ['generator', 'encoder', 'layer_0', 'attention', 'self', 'key', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_0/attention/self/key/kernel ['generator', 'encoder', 'layer_0', 'attention', 'self', 'key', 'kernel'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_0/attention/self/key/kernel/adam_m ['generator', 'encoder', 'layer_0', 'attention', 'self', 'key', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_0/attention/self/key/kernel/adam_v ['generator', 'encoder', 'layer_0', 'attention', 'self', 'key', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_0/attention/self/query/bias ['generator', 'encoder', 'layer_0', 'attention', 'self', 'query', 'bias'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_0/attention/self/query/bias/adam_m ['generator', 'encoder', 'layer_0', 'attention', 'self', 'query', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_0/attention/self/query/bias/adam_v ['generator', 'encoder', 'layer_0', 'attention', 'self', 'query', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_0/attention/self/query/kernel ['generator', 'encoder', 'layer_0', 'attention', 'self', 'query', 'kernel'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_0/attention/self/query/kernel/adam_m ['generator', 'encoder', 'layer_0', 'attention', 'self', 'query', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_0/attention/self/query/kernel/adam_v ['generator', 'encoder', 'layer_0', 'attention', 'self', 'query', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_0/attention/self/value/bias ['generator', 'encoder', 'layer_0', 'attention', 'self', 'value', 'bias'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_0/attention/self/value/bias/adam_m ['generator', 'encoder', 'layer_0', 'attention', 'self', 'value', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_0/attention/self/value/bias/adam_v ['generator', 'encoder', 'layer_0', 'attention', 'self', 'value', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_0/attention/self/value/kernel ['generator', 'encoder', 'layer_0', 'attention', 'self', 'value', 'kernel'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_0/attention/self/value/kernel/adam_m ['generator', 'encoder', 'layer_0', 'attention', 'self', 'value', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_0/attention/self/value/kernel/adam_v ['generator', 'encoder', 'layer_0', 'attention', 'self', 'value', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_0/intermediate/dense/bias ['generator', 'encoder', 'layer_0', 'intermediate', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_0/intermediate/dense/bias/adam_m ['generator', 'encoder', 'layer_0', 'intermediate', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_0/intermediate/dense/bias/adam_v ['generator', 'encoder', 'layer_0', 'intermediate', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_0/intermediate/dense/kernel ['generator', 'encoder', 'layer_0', 'intermediate', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_0/intermediate/dense/kernel/adam_m ['generator', 'encoder', 'layer_0', 'intermediate', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_0/intermediate/dense/kernel/adam_v ['generator', 'encoder', 'layer_0', 'intermediate', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_0/output/LayerNorm/beta ['generator', 'encoder', 'layer_0', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'generator'
[... hundreds of analogous "Skipping generator/encoder/layer_N/..." lines omitted: the conversion prints the same 'ElectraModel' object has no attribute 'generator' message for each generator encoder-layer weight (attention self query/key/value, attention output dense and LayerNorm, intermediate dense, output dense and LayerNorm, each as kernel/bias or gamma/beta) and for its adam_m/adam_v optimizer slots ...]
Skipping generator/encoder/layer_5/attention/self/value/kernel ['generator', 'encoder', 'layer_5', 'attention', 'self', 'value', 'kernel'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_5/attention/self/value/kernel/adam_m ['generator', 'encoder', 'layer_5', 'attention', 'self', 'value', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_5/attention/self/value/kernel/adam_v ['generator', 'encoder', 'layer_5', 'attention', 'self', 'value', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_5/intermediate/dense/bias ['generator', 'encoder', 'layer_5', 'intermediate', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_5/intermediate/dense/bias/adam_m ['generator', 'encoder', 'layer_5', 'intermediate', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_5/intermediate/dense/bias/adam_v ['generator', 'encoder', 'layer_5', 'intermediate', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_5/intermediate/dense/kernel ['generator', 'encoder', 'layer_5', 'intermediate', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_5/intermediate/dense/kernel/adam_m ['generator', 'encoder', 'layer_5', 'intermediate', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_5/intermediate/dense/kernel/adam_v ['generator', 'encoder', 'layer_5', 'intermediate', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_5/output/LayerNorm/beta ['generator', 'encoder', 'layer_5', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_5/output/LayerNorm/beta/adam_m ['generator', 'encoder', 'layer_5', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_5/output/LayerNorm/beta/adam_v ['generator', 'encoder', 'layer_5', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_5/output/LayerNorm/gamma ['generator', 'encoder', 'layer_5', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_5/output/LayerNorm/gamma/adam_m ['generator', 'encoder', 'layer_5', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_5/output/LayerNorm/gamma/adam_v ['generator', 'encoder', 'layer_5', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_5/output/dense/bias ['generator', 'encoder', 'layer_5', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_5/output/dense/bias/adam_m ['generator', 'encoder', 'layer_5', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_5/output/dense/bias/adam_v ['generator', 'encoder', 'layer_5', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_5/output/dense/kernel ['generator', 'encoder', 'layer_5', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_5/output/dense/kernel/adam_m ['generator', 'encoder', 'layer_5', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_5/output/dense/kernel/adam_v ['generator', 'encoder', 'layer_5', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/attention/output/LayerNorm/beta ['generator', 'encoder', 'layer_6', 'attention', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/attention/output/LayerNorm/beta/adam_m ['generator', 'encoder', 'layer_6', 'attention', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/attention/output/LayerNorm/beta/adam_v ['generator', 'encoder', 'layer_6', 'attention', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/attention/output/LayerNorm/gamma ['generator', 'encoder', 'layer_6', 'attention', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/attention/output/LayerNorm/gamma/adam_m ['generator', 'encoder', 'layer_6', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/attention/output/LayerNorm/gamma/adam_v ['generator', 'encoder', 'layer_6', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/attention/output/dense/bias ['generator', 'encoder', 'layer_6', 'attention', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/attention/output/dense/bias/adam_m ['generator', 'encoder', 'layer_6', 'attention', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/attention/output/dense/bias/adam_v ['generator', 'encoder', 'layer_6', 'attention', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/attention/output/dense/kernel ['generator', 'encoder', 'layer_6', 'attention', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/attention/output/dense/kernel/adam_m ['generator', 'encoder', 'layer_6', 'attention', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/attention/output/dense/kernel/adam_v ['generator', 'encoder', 'layer_6', 'attention', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/attention/self/key/bias ['generator', 'encoder', 'layer_6', 'attention', 'self', 'key', 'bias'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/attention/self/key/bias/adam_m ['generator', 'encoder', 'layer_6', 'attention', 'self', 'key', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/attention/self/key/bias/adam_v ['generator', 'encoder', 'layer_6', 'attention', 'self', 'key', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/attention/self/key/kernel ['generator', 'encoder', 'layer_6', 'attention', 'self', 'key', 'kernel'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/attention/self/key/kernel/adam_m ['generator', 'encoder', 'layer_6', 'attention', 'self', 'key', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/attention/self/key/kernel/adam_v ['generator', 'encoder', 'layer_6', 'attention', 'self', 'key', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/attention/self/query/bias ['generator', 'encoder', 'layer_6', 'attention', 'self', 'query', 'bias'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/attention/self/query/bias/adam_m ['generator', 'encoder', 'layer_6', 'attention', 'self', 'query', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/attention/self/query/bias/adam_v ['generator', 'encoder', 'layer_6', 'attention', 'self', 'query', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/attention/self/query/kernel ['generator', 'encoder', 'layer_6', 'attention', 'self', 'query', 'kernel'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/attention/self/query/kernel/adam_m ['generator', 'encoder', 'layer_6', 'attention', 'self', 'query', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/attention/self/query/kernel/adam_v ['generator', 'encoder', 'layer_6', 'attention', 'self', 'query', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/attention/self/value/bias ['generator', 'encoder', 'layer_6', 'attention', 'self', 'value', 'bias'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/attention/self/value/bias/adam_m ['generator', 'encoder', 'layer_6', 'attention', 'self', 'value', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/attention/self/value/bias/adam_v ['generator', 'encoder', 'layer_6', 'attention', 'self', 'value', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/attention/self/value/kernel ['generator', 'encoder', 'layer_6', 'attention', 'self', 'value', 'kernel'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/attention/self/value/kernel/adam_m ['generator', 'encoder', 'layer_6', 'attention', 'self', 'value', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/attention/self/value/kernel/adam_v ['generator', 'encoder', 'layer_6', 'attention', 'self', 'value', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/intermediate/dense/bias ['generator', 'encoder', 'layer_6', 'intermediate', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/intermediate/dense/bias/adam_m ['generator', 'encoder', 'layer_6', 'intermediate', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/intermediate/dense/bias/adam_v ['generator', 'encoder', 'layer_6', 'intermediate', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/intermediate/dense/kernel ['generator', 'encoder', 'layer_6', 'intermediate', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/intermediate/dense/kernel/adam_m ['generator', 'encoder', 'layer_6', 'intermediate', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/intermediate/dense/kernel/adam_v ['generator', 'encoder', 'layer_6', 'intermediate', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/output/LayerNorm/beta ['generator', 'encoder', 'layer_6', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/output/LayerNorm/beta/adam_m ['generator', 'encoder', 'layer_6', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/output/LayerNorm/beta/adam_v ['generator', 'encoder', 'layer_6', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/output/LayerNorm/gamma ['generator', 'encoder', 'layer_6', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/output/LayerNorm/gamma/adam_m ['generator', 'encoder', 'layer_6', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/output/LayerNorm/gamma/adam_v ['generator', 'encoder', 'layer_6', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/output/dense/bias ['generator', 'encoder', 'layer_6', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/output/dense/bias/adam_m ['generator', 'encoder', 'layer_6', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/output/dense/bias/adam_v ['generator', 'encoder', 'layer_6', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/output/dense/kernel ['generator', 'encoder', 'layer_6', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/output/dense/kernel/adam_m ['generator', 'encoder', 'layer_6', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_6/output/dense/kernel/adam_v ['generator', 'encoder', 'layer_6', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/attention/output/LayerNorm/beta ['generator', 'encoder', 'layer_7', 'attention', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/attention/output/LayerNorm/beta/adam_m ['generator', 'encoder', 'layer_7', 'attention', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/attention/output/LayerNorm/beta/adam_v ['generator', 'encoder', 'layer_7', 'attention', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/attention/output/LayerNorm/gamma ['generator', 'encoder', 'layer_7', 'attention', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/attention/output/LayerNorm/gamma/adam_m ['generator', 'encoder', 'layer_7', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/attention/output/LayerNorm/gamma/adam_v ['generator', 'encoder', 'layer_7', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/attention/output/dense/bias ['generator', 'encoder', 'layer_7', 'attention', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/attention/output/dense/bias/adam_m ['generator', 'encoder', 'layer_7', 'attention', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/attention/output/dense/bias/adam_v ['generator', 'encoder', 'layer_7', 'attention', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/attention/output/dense/kernel ['generator', 'encoder', 'layer_7', 'attention', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/attention/output/dense/kernel/adam_m ['generator', 'encoder', 'layer_7', 'attention', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/attention/output/dense/kernel/adam_v ['generator', 'encoder', 'layer_7', 'attention', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/attention/self/key/bias ['generator', 'encoder', 'layer_7', 'attention', 'self', 'key', 'bias'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/attention/self/key/bias/adam_m ['generator', 'encoder', 'layer_7', 'attention', 'self', 'key', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/attention/self/key/bias/adam_v ['generator', 'encoder', 'layer_7', 'attention', 'self', 'key', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/attention/self/key/kernel ['generator', 'encoder', 'layer_7', 'attention', 'self', 'key', 'kernel'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/attention/self/key/kernel/adam_m ['generator', 'encoder', 'layer_7', 'attention', 'self', 'key', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/attention/self/key/kernel/adam_v ['generator', 'encoder', 'layer_7', 'attention', 'self', 'key', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/attention/self/query/bias ['generator', 'encoder', 'layer_7', 'attention', 'self', 'query', 'bias'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/attention/self/query/bias/adam_m ['generator', 'encoder', 'layer_7', 'attention', 'self', 'query', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/attention/self/query/bias/adam_v ['generator', 'encoder', 'layer_7', 'attention', 'self', 'query', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/attention/self/query/kernel ['generator', 'encoder', 'layer_7', 'attention', 'self', 'query', 'kernel'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/attention/self/query/kernel/adam_m ['generator', 'encoder', 'layer_7', 'attention', 'self', 'query', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/attention/self/query/kernel/adam_v ['generator', 'encoder', 'layer_7', 'attention', 'self', 'query', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/attention/self/value/bias ['generator', 'encoder', 'layer_7', 'attention', 'self', 'value', 'bias'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/attention/self/value/bias/adam_m ['generator', 'encoder', 'layer_7', 'attention', 'self', 'value', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/attention/self/value/bias/adam_v ['generator', 'encoder', 'layer_7', 'attention', 'self', 'value', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/attention/self/value/kernel ['generator', 'encoder', 'layer_7', 'attention', 'self', 'value', 'kernel'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/attention/self/value/kernel/adam_m ['generator', 'encoder', 'layer_7', 'attention', 'self', 'value', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/attention/self/value/kernel/adam_v ['generator', 'encoder', 'layer_7', 'attention', 'self', 'value', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/intermediate/dense/bias ['generator', 'encoder', 'layer_7', 'intermediate', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/intermediate/dense/bias/adam_m ['generator', 'encoder', 'layer_7', 'intermediate', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/intermediate/dense/bias/adam_v ['generator', 'encoder', 'layer_7', 'intermediate', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/intermediate/dense/kernel ['generator', 'encoder', 'layer_7', 'intermediate', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/intermediate/dense/kernel/adam_m ['generator', 'encoder', 'layer_7', 'intermediate', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/intermediate/dense/kernel/adam_v ['generator', 'encoder', 'layer_7', 'intermediate', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/output/LayerNorm/beta ['generator', 'encoder', 'layer_7', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/output/LayerNorm/beta/adam_m ['generator', 'encoder', 'layer_7', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/output/LayerNorm/beta/adam_v ['generator', 'encoder', 'layer_7', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/output/LayerNorm/gamma ['generator', 'encoder', 'layer_7', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/output/LayerNorm/gamma/adam_m ['generator', 'encoder', 'layer_7', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/output/LayerNorm/gamma/adam_v ['generator', 'encoder', 'layer_7', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/output/dense/bias ['generator', 'encoder', 'layer_7', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/output/dense/bias/adam_m ['generator', 'encoder', 'layer_7', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/output/dense/bias/adam_v ['generator', 'encoder', 'layer_7', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/output/dense/kernel ['generator', 'encoder', 'layer_7', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/output/dense/kernel/adam_m ['generator', 'encoder', 'layer_7', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_7/output/dense/kernel/adam_v ['generator', 'encoder', 'layer_7', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/attention/output/LayerNorm/beta ['generator', 'encoder', 'layer_8', 'attention', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/attention/output/LayerNorm/beta/adam_m ['generator', 'encoder', 'layer_8', 'attention', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/attention/output/LayerNorm/beta/adam_v ['generator', 'encoder', 'layer_8', 'attention', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/attention/output/LayerNorm/gamma ['generator', 'encoder', 'layer_8', 'attention', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/attention/output/LayerNorm/gamma/adam_m ['generator', 'encoder', 'layer_8', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/attention/output/LayerNorm/gamma/adam_v ['generator', 'encoder', 'layer_8', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/attention/output/dense/bias ['generator', 'encoder', 'layer_8', 'attention', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/attention/output/dense/bias/adam_m ['generator', 'encoder', 'layer_8', 'attention', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/attention/output/dense/bias/adam_v ['generator', 'encoder', 'layer_8', 'attention', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/attention/output/dense/kernel ['generator', 'encoder', 'layer_8', 'attention', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/attention/output/dense/kernel/adam_m ['generator', 'encoder', 'layer_8', 'attention', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/attention/output/dense/kernel/adam_v ['generator', 'encoder', 'layer_8', 'attention', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/attention/self/key/bias ['generator', 'encoder', 'layer_8', 'attention', 'self', 'key', 'bias'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/attention/self/key/bias/adam_m ['generator', 'encoder', 'layer_8', 'attention', 'self', 'key', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/attention/self/key/bias/adam_v ['generator', 'encoder', 'layer_8', 'attention', 'self', 'key', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/attention/self/key/kernel ['generator', 'encoder', 'layer_8', 'attention', 'self', 'key', 'kernel'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/attention/self/key/kernel/adam_m ['generator', 'encoder', 'layer_8', 'attention', 'self', 'key', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/attention/self/key/kernel/adam_v ['generator', 'encoder', 'layer_8', 'attention', 'self', 'key', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/attention/self/query/bias ['generator', 'encoder', 'layer_8', 'attention', 'self', 'query', 'bias'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/attention/self/query/bias/adam_m ['generator', 'encoder', 'layer_8', 'attention', 'self', 'query', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/attention/self/query/bias/adam_v ['generator', 'encoder', 'layer_8', 'attention', 'self', 'query', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/attention/self/query/kernel ['generator', 'encoder', 'layer_8', 'attention', 'self', 'query', 'kernel'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/attention/self/query/kernel/adam_m ['generator', 'encoder', 'layer_8', 'attention', 'self', 'query', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/attention/self/query/kernel/adam_v ['generator', 'encoder', 'layer_8', 'attention', 'self', 'query', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/attention/self/value/bias ['generator', 'encoder', 'layer_8', 'attention', 'self', 'value', 'bias'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/attention/self/value/bias/adam_m ['generator', 'encoder', 'layer_8', 'attention', 'self', 'value', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/attention/self/value/bias/adam_v ['generator', 'encoder', 'layer_8', 'attention', 'self', 'value', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/attention/self/value/kernel ['generator', 'encoder', 'layer_8', 'attention', 'self', 'value', 'kernel'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/attention/self/value/kernel/adam_m ['generator', 'encoder', 'layer_8', 'attention', 'self', 'value', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/attention/self/value/kernel/adam_v ['generator', 'encoder', 'layer_8', 'attention', 'self', 'value', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/intermediate/dense/bias ['generator', 'encoder', 'layer_8', 'intermediate', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/intermediate/dense/bias/adam_m ['generator', 'encoder', 'layer_8', 'intermediate', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/intermediate/dense/bias/adam_v ['generator', 'encoder', 'layer_8', 'intermediate', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/intermediate/dense/kernel ['generator', 'encoder', 'layer_8', 'intermediate', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/intermediate/dense/kernel/adam_m ['generator', 'encoder', 'layer_8', 'intermediate', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/intermediate/dense/kernel/adam_v ['generator', 'encoder', 'layer_8', 'intermediate', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/output/LayerNorm/beta ['generator', 'encoder', 'layer_8', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/output/LayerNorm/beta/adam_m ['generator', 'encoder', 'layer_8', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/output/LayerNorm/beta/adam_v ['generator', 'encoder', 'layer_8', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/output/LayerNorm/gamma ['generator', 'encoder', 'layer_8', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/output/LayerNorm/gamma/adam_m ['generator', 'encoder', 'layer_8', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/output/LayerNorm/gamma/adam_v ['generator', 'encoder', 'layer_8', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/output/dense/bias ['generator', 'encoder', 'layer_8', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/output/dense/bias/adam_m ['generator', 'encoder', 'layer_8', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/output/dense/bias/adam_v ['generator', 'encoder', 'layer_8', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/output/dense/kernel ['generator', 'encoder', 'layer_8', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/output/dense/kernel/adam_m ['generator', 'encoder', 'layer_8', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_8/output/dense/kernel/adam_v ['generator', 'encoder', 'layer_8', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/attention/output/LayerNorm/beta ['generator', 'encoder', 'layer_9', 'attention', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/attention/output/LayerNorm/beta/adam_m ['generator', 'encoder', 'layer_9', 'attention', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/attention/output/LayerNorm/beta/adam_v ['generator', 'encoder', 'layer_9', 'attention', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/attention/output/LayerNorm/gamma ['generator', 'encoder', 'layer_9', 'attention', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/attention/output/LayerNorm/gamma/adam_m ['generator', 'encoder', 'layer_9', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/attention/output/LayerNorm/gamma/adam_v ['generator', 'encoder', 'layer_9', 'attention', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/attention/output/dense/bias ['generator', 'encoder', 'layer_9', 'attention', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/attention/output/dense/bias/adam_m ['generator', 'encoder', 'layer_9', 'attention', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/attention/output/dense/bias/adam_v ['generator', 'encoder', 'layer_9', 'attention', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/attention/output/dense/kernel ['generator', 'encoder', 'layer_9', 'attention', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/attention/output/dense/kernel/adam_m ['generator', 'encoder', 'layer_9', 'attention', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/attention/output/dense/kernel/adam_v ['generator', 'encoder', 'layer_9', 'attention', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/attention/self/key/bias ['generator', 'encoder', 'layer_9', 'attention', 'self', 'key', 'bias'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/attention/self/key/bias/adam_m ['generator', 'encoder', 'layer_9', 'attention', 'self', 'key', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/attention/self/key/bias/adam_v ['generator', 'encoder', 'layer_9', 'attention', 'self', 'key', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/attention/self/key/kernel ['generator', 'encoder', 'layer_9', 'attention', 'self', 'key', 'kernel'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/attention/self/key/kernel/adam_m ['generator', 'encoder', 'layer_9', 'attention', 'self', 'key', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/attention/self/key/kernel/adam_v ['generator', 'encoder', 'layer_9', 'attention', 'self', 'key', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/attention/self/query/bias ['generator', 'encoder', 'layer_9', 'attention', 'self', 'query', 'bias'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/attention/self/query/bias/adam_m ['generator', 'encoder', 'layer_9', 'attention', 'self', 'query', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/attention/self/query/bias/adam_v ['generator', 'encoder', 'layer_9', 'attention', 'self', 'query', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/attention/self/query/kernel ['generator', 'encoder', 'layer_9', 'attention', 'self', 'query', 'kernel'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/attention/self/query/kernel/adam_m ['generator', 'encoder', 'layer_9', 'attention', 'self', 'query', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/attention/self/query/kernel/adam_v ['generator', 'encoder', 'layer_9', 'attention', 'self', 'query', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/attention/self/value/bias ['generator', 'encoder', 'layer_9', 'attention', 'self', 'value', 'bias'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/attention/self/value/bias/adam_m ['generator', 'encoder', 'layer_9', 'attention', 'self', 'value', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/attention/self/value/bias/adam_v ['generator', 'encoder', 'layer_9', 'attention', 'self', 'value', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/attention/self/value/kernel ['generator', 'encoder', 'layer_9', 'attention', 'self', 'value', 'kernel'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/attention/self/value/kernel/adam_m ['generator', 'encoder', 'layer_9', 'attention', 'self', 'value', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/attention/self/value/kernel/adam_v ['generator', 'encoder', 'layer_9', 'attention', 'self', 'value', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/intermediate/dense/bias ['generator', 'encoder', 'layer_9', 'intermediate', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/intermediate/dense/bias/adam_m ['generator', 'encoder', 'layer_9', 'intermediate', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/intermediate/dense/bias/adam_v ['generator', 'encoder', 'layer_9', 'intermediate', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/intermediate/dense/kernel ['generator', 'encoder', 'layer_9', 'intermediate', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/intermediate/dense/kernel/adam_m ['generator', 'encoder', 'layer_9', 'intermediate', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/intermediate/dense/kernel/adam_v ['generator', 'encoder', 'layer_9', 'intermediate', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/output/LayerNorm/beta ['generator', 'encoder', 'layer_9', 'output', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/output/LayerNorm/beta/adam_m ['generator', 'encoder', 'layer_9', 'output', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/output/LayerNorm/beta/adam_v ['generator', 'encoder', 'layer_9', 'output', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/output/LayerNorm/gamma ['generator', 'encoder', 'layer_9', 'output', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/output/LayerNorm/gamma/adam_m ['generator', 'encoder', 'layer_9', 'output', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/output/LayerNorm/gamma/adam_v ['generator', 'encoder', 'layer_9', 'output', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/output/dense/bias ['generator', 'encoder', 'layer_9', 'output', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/output/dense/bias/adam_m ['generator', 'encoder', 'layer_9', 'output', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/output/dense/bias/adam_v ['generator', 'encoder', 'layer_9', 'output', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/output/dense/kernel ['generator', 'encoder', 'layer_9', 'output', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/output/dense/kernel/adam_m ['generator', 'encoder', 'layer_9', 'output', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator'
Skipping generator/encoder/layer_9/output/dense/kernel/adam_v ['generator', 'encoder', 'layer_9', 'output', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator'
Skipping generator_predictions/LayerNorm/beta ['generator_predictions', 'LayerNorm', 'beta'] 'ElectraModel' object has no attribute 'generator_predictions'
Skipping generator_predictions/LayerNorm/beta/adam_m ['generator_predictions', 'LayerNorm', 'beta', 'adam_m'] 'ElectraModel' object has no attribute 'generator_predictions'
Skipping generator_predictions/LayerNorm/beta/adam_v ['generator_predictions', 'LayerNorm', 'beta', 'adam_v'] 'ElectraModel' object has no attribute 'generator_predictions'
Skipping generator_predictions/LayerNorm/gamma ['generator_predictions', 'LayerNorm', 'gamma'] 'ElectraModel' object has no attribute 'generator_predictions'
Skipping generator_predictions/LayerNorm/gamma/adam_m ['generator_predictions', 'LayerNorm', 'gamma', 'adam_m'] 'ElectraModel' object has no attribute 'generator_predictions'
Skipping generator_predictions/LayerNorm/gamma/adam_v ['generator_predictions', 'LayerNorm', 'gamma', 'adam_v'] 'ElectraModel' object has no attribute 'generator_predictions'
Skipping generator_predictions/dense/bias ['generator_predictions', 'dense', 'bias'] 'ElectraModel' object has no attribute 'generator_predictions'
Skipping generator_predictions/dense/bias/adam_m ['generator_predictions', 'dense', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator_predictions'
Skipping generator_predictions/dense/bias/adam_v ['generator_predictions', 'dense', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator_predictions'
Skipping generator_predictions/dense/kernel ['generator_predictions', 'dense', 'kernel'] 'ElectraModel' object has no attribute 'generator_predictions'
Skipping generator_predictions/dense/kernel/adam_m ['generator_predictions', 'dense', 'kernel', 'adam_m'] 'ElectraModel' object has no attribute 'generator_predictions'
Skipping generator_predictions/dense/kernel/adam_v ['generator_predictions', 'dense', 'kernel', 'adam_v'] 'ElectraModel' object has no attribute 'generator_predictions'
Skipping generator_predictions/output_bias ['generator_lm_head', 'bias'] 'ElectraModel' object has no attribute 'generator_lm_head'
Skipping generator_predictions/output_bias/adam_m ['generator_lm_head', 'bias', 'adam_m'] 'ElectraModel' object has no attribute 'generator_lm_head'
Skipping generator_predictions/output_bias/adam_v ['generator_lm_head', 'bias', 'adam_v'] 'ElectraModel' object has no attribute 'generator_lm_head'
```
So, my question is: is this the expected behaviour of the Electra model loader, or am I doing something wrong? Thanks!
 | 04-07-2020 09:08:52 | 04-07-2020 09:08:52 | Hi! I don't really understand how you obtained this output: which script did you use, and with which arguments? The procedure to convert an ELECTRA checkpoint from the official implementation to ours is the following (feel free to skip the first steps if you already have your checkpoint):
```bash
# Get a checkpoint
wget https://storage.googleapis.com/electra-data/electra_small.zip
# Unzip it
unzip electra_small.zip
# Get an appropriate configuration file for your model (see below)
vim electra_small/config.json
# Run the script
python $TRANSFORMERS/src/transformers/convert_electra_original_tf_checkpoint_to_pytorch.py \
--tf_checkpoint_path=./electra_small/electra_small \
--config_file=./electra_small/config.json \
--pytorch_dump_path=pytorch_model.bin \
--discriminator_or_generator=discriminator
```
From this you should get the following output:
```bash
Initialize PyTorch weight ['discriminator_predictions', 'dense', 'bias'] discriminator_predictions/dense/bias
Initialize PyTorch weight ['discriminator_predictions', 'dense', 'kernel'] discriminator_predictions/dense/kernel
Initialize PyTorch weight ['discriminator_predictions', 'dense_prediction', 'bias'] discriminator_predictions/dense_1/bias
Initialize PyTorch weight ['discriminator_predictions', 'dense_prediction', 'kernel'] discriminator_predictions/dense_1/kernel
Initialize PyTorch weight ['electra', 'embeddings', 'LayerNorm', 'beta'] electra/embeddings/LayerNorm/beta
Initialize PyTorch weight ['electra', 'embeddings', 'LayerNorm', 'gamma'] electra/embeddings/LayerNorm/gamma
[...]
Skipping generator_predictions/dense/bias ['generator_predictions', 'dense', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator_predictions'
Skipping generator_predictions/dense/kernel ['generator_predictions', 'dense', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator_predictions'
Skipping generator_predictions/output_bias ['generator_lm_head', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator_lm_head'
INFO:transformers.modeling_electra:Skipping generator_predictions/temperature
INFO:transformers.modeling_electra:Skipping global_step
Save PyTorch model to pytorch_model.bin
```
Which tells you that it ignored the generator layers, but saved the discriminator layers :).
The tricky part here is to craft a configuration file specific to the model. I want to obtain the small discriminator from this checkpoint, so the configuration file is the following:
```json
{
"attention_probs_dropout_prob": 0.1,
"hidden_size": 256,
"intermediate_size": 1024,
"num_attention_heads": 4,
"num_hidden_layers": 12,
"embedding_size": 128,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"initializer_range": 0.02,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"type_vocab_size": 2,
"vocab_size": 30522
}
```
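If you'd rather not write the JSON by hand, here is a minimal sketch of producing the same file programmatically (fields not listed keep the `ElectraConfig` defaults, which I assume are fine for this checkpoint; the output path matches the one used in the conversion command above):

```python
from transformers import ElectraConfig

# Recreate the small-discriminator configuration shown above and dump it to JSON
config = ElectraConfig(
    embedding_size=128,
    hidden_size=256,
    intermediate_size=1024,
    num_attention_heads=4,
    num_hidden_layers=12,
    vocab_size=30522,
)
config.to_json_file("./electra_small/config.json")
```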
You can either write it yourself or instantiate it from a `transformers.ElectraConfig` and save it as a JSON file, as in the sketch above.<|||||>@LysandreJik I used your [script](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_electra_original_tf_checkpoint_to_pytorch.py) to convert a model trained with the [original repo](https://github.com/google-research/electra) (on my own data) and it worked. I wonder whether there is an equivalent technique to convert this trained model to the ELECTRA TF2 model implemented in HuggingFace?<|||||>@nguyenvulebinh glad the script worked! The script only outputs a PyTorch model, but it's very simple to convert that model to TF2. Once you have the converted model, you can then load it in TensorFlow by specifying the `from_pt` option:
```py
from transformers import TFElectraForPreTraining
model = TFElectraForPreTraining.from_pretrained("directory", from_pt=True)
```
You can then save that model in `.h5` format so that it gets natively loaded by TensorFlow in the future:
```py
model.save_pretrained("directory-tf")
# Can now load directly from TensorFlow without the `from_pt` option:
model = TFElectraForPreTraining.from_pretrained("directory-tf")
```<|||||>@LysandreJik It's really cool! Thank you! I did it 😍<|||||>@LysandreJik
Hi,
I have a question on pre-training Electra using the PyTorch base model.
If I want to continue pretraining the Electra model (HuggingFace implementation) on a domain-specific corpus, which model should I use to initialize - the generator or discriminator?
Thanks!<|||||>When using the ELECTRA method, what you're really interested in is the discriminator.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
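For reference, a minimal sketch of what "use the discriminator" looks like in code; the hub identifier is an assumption here, so substitute your own converted checkpoint path if you have one:

```python
from transformers import ElectraModel

# Load the discriminator encoder as the starting point for further training / fine-tuning
model = ElectraModel.from_pretrained("google/electra-small-discriminator")
```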
|
transformers | 3,670 | closed | Has anyone used the run_language_modeling.py to train a gpt2 in a different language? is it possible? | # ❓ Questions & Help
## Details
**A link to original question on Stack Overflow**: | 04-07-2020 07:11:01 | 04-07-2020 07:11:01 | Hi! Maybe you can have a look at issue #1560.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,669 | closed | [examples] Generate argparsers from type hints on dataclasses | 04-07-2020 05:25:29 | 04-07-2020 05:25:29 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3669?src=pr&el=h1) Report
> Merging [#3669](https://codecov.io/gh/huggingface/transformers/pull/3669?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0a9d09b42a9c7c1ccc00da48486a1188078e8594&el=desc) will **increase** coverage by `0.03%`.
> The diff coverage is `83.54%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3669?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3669 +/- ##
==========================================
+ Coverage 78.03% 78.07% +0.03%
==========================================
Files 104 106 +2
Lines 17708 17787 +79
==========================================
+ Hits 13819 13887 +68
- Misses 3889 3900 +11
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3669?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/hf\_argparser.py](https://codecov.io/gh/huggingface/transformers/pull/3669/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9oZl9hcmdwYXJzZXIucHk=) | `74.00% <74.00%> (ø)` | |
| [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/3669/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `98.98% <100.00%> (+0.02%)` | :arrow_up: |
| [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/3669/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `100.00% <100.00%> (ø)` | |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3669/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.23% <0.00%> (+0.12%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3669/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.28% <0.00%> (+0.16%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3669?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3669?src=pr&el=footer). Last update [0a9d09b...b63747d](https://codecov.io/gh/huggingface/transformers/pull/3669?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Phew! Ok, I went through multiple rewrites today and I think it is pretty good now.
**TL;DR:**
- I only made changes to the `run_glue.py` example script to keep the diff smaller.
- I have to pass `DataClassType`s (_types_, not instances) to `HfArgumentParser` because if they have required properties/arguments, we wouldn't be able to instantiate them before "filling" them
- The class is designed to play well with the native `argparse`. In particular, you can get back any not-known args and parse them using a different argparse.ArgumentParser, to make adoption easier in complex scripts.
- **read the unit tests for (a subset of) the supported arguments and how the properties translate into arguments.** |
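To make the list above concrete, here is a minimal usage sketch (the method name `parse_args_into_dataclasses` reflects this PR and may differ slightly in the merged version):

```python
from dataclasses import dataclass, field
from typing import Optional

from transformers import HfArgumentParser, TrainingArguments


@dataclass
class ModelArguments:
    # No default value, so --model_name_or_path becomes a required CLI argument.
    model_name_or_path: str = field(metadata={"help": "Path or identifier of the pretrained model"})
    # Optional argument with a default.
    cache_dir: Optional[str] = field(default=None, metadata={"help": "Where to cache downloaded models"})


parser = HfArgumentParser((ModelArguments, TrainingArguments))
model_args, training_args = parser.parse_args_into_dataclasses()
```

Any flags the dataclasses do not know about can still be collected and handed to a plain `argparse.ArgumentParser`.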
|
transformers | 3,668 | closed | ❓ In BART, why forcing the first token to BOS ? | # ❓ Questions & Help
In the generation method, `prepare_scores_for_generation` is called:
https://github.com/huggingface/transformers/blob/0a9d09b42a9c7c1ccc00da48486a1188078e8594/src/transformers/modeling_utils.py#L1208
And in this method, if it's the first decoding step, BOS token is forced :
https://github.com/huggingface/transformers/blob/0a9d09b42a9c7c1ccc00da48486a1188078e8594/src/transformers/modeling_bart.py#L924-L926
---
I don't understand why this is necessary, because the decoder input ids already contain BOS anyway:
https://github.com/huggingface/transformers/blob/0a9d09b42a9c7c1ccc00da48486a1188078e8594/src/transformers/modeling_utils.py#L866-L868 | 04-07-2020 04:50:11 | 04-07-2020 04:50:11 | Hi @Colanim,
That's indeed a very good question! The only reason why we add these hacks here is because that's the way Fairseq implemented it and you get better results on summarization using Bart this way. We measured the differences in performance when leaving out those "force token" hacks and it was quite significant.
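For intuition, the "force token" hack boils down to masking every logit except the forced one at that step — a rough conceptual sketch, not the exact library code:

```python
import torch

def force_token(scores: torch.Tensor, token_id: int) -> torch.Tensor:
    # Make every token except `token_id` impossible so that it is generated at this step.
    forced = scores.clone()
    forced[:, :token_id] = float("-inf")
    forced[:, token_id + 1 :] = float("-inf")
    return forced
```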
Please read through these PRs to better understand why we made this decision:
https://github.com/huggingface/transformers/pull/3225
and
https://github.com/huggingface/transformers/pull/3140<|||||>> I don't understand why it's necessary, because anyway the decoder input ids already contain BOS :
Regarding this point, is the situation different? I went through the PRs and the code, but it seems that the default `decoder_start_token_id` is still the EOS_token.
Ultimately, the question I want to ask is
**If I want to use BART for fine-tuning on another summarization task, do I set the `decoder_start_token_id` to EOS_token or BOS_token?** |
transformers | 3,667 | closed | Any Ideas on how to generate in bulk with CTRL? | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
TensorFlow enthusiasts can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
Is it possible to generate multiple articles with a list of prompts using CTRL?
Any ideas will be greatly appreciated
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
| 04-07-2020 04:45:19 | 04-07-2020 04:45:19 | CTRL is a very large model so generating in bulk would require a lot of RAM.
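If throughput rather than true batching is what you need, the simplest workaround is a sequential loop over the prompts — a rough sketch (the prompts below are placeholders, prefixed with CTRL control codes):

```python
from transformers import CTRLLMHeadModel, CTRLTokenizer

tokenizer = CTRLTokenizer.from_pretrained("ctrl")
model = CTRLLMHeadModel.from_pretrained("ctrl")

prompts = ["Links My neighbor's dog", "Reviews This laptop"]
articles = []
for prompt in prompts:
    input_ids = tokenizer.encode(prompt, return_tensors="pt")
    output = model.generate(input_ids, max_length=100, repetition_penalty=1.2)
    articles.append(tokenizer.decode(output[0]))
```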
Also generating with padded batches is not really supported yet, see: https://github.com/huggingface/transformers/issues/3021<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,666 | closed | Created README.md for model card ChemBERTa | The README.md is added to explain an overview of SMILES, as this is the only model card trained on a non-language dataset. The documentation also explains potential use-cases for utilizing RoBERTa trained on masked language modelling for SMILES, and links to a repository with the original notebooks for evaluations, running predictions and some applications of the models for curiosity. | 04-07-2020 04:25:11 | 04-07-2020 04:25:11 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3666?src=pr&el=h1) Report
> Merging [#3666](https://codecov.io/gh/huggingface/transformers/pull/3666?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0a9d09b42a9c7c1ccc00da48486a1188078e8594&el=desc) will **increase** coverage by `0.01%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3666?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3666 +/- ##
==========================================
+ Coverage 78.03% 78.04% +0.01%
==========================================
Files 104 104
Lines 17708 17708
==========================================
+ Hits 13819 13821 +2
+ Misses 3889 3887 -2
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3666?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3666/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.23% <0.00%> (+0.12%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3666/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.28% <0.00%> (+0.16%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3666?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3666?src=pr&el=footer). Last update [0a9d09b...7df3a3b](https://codecov.io/gh/huggingface/transformers/pull/3666?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>This is really cool, @seyonechithrananda. Can you add a
```
---
tags:
- chemistry
---
```
metadata block to the top of the file? Also cc'ing @mrm8488 who might be interested<|||||>Thank you @julien-c. I uploaded two models from ChEMBL25/26 for drug structure learning (SMILES) using same technique. In fact, they have been used for COVID-19 drug discovery<|||||>@mrm8488 Are you targeting ligand-protein modelling techniques with transformers? <|||||>@julien-c Made the changes. Let me know what you think!<|||||>> @mrm8488 Are you targeting ligand-protein modelling techniques with transformers?
As told you via Twitter I am getting started into it. Getting chemical knowledge :) |
transformers | 3,665 | closed | Fix mlm | The way the texts is being split up into blocks right now
```
tokenized_text = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(text))
for i in range(0, len(tokenized_text) - block_size + 1, block_size): # Truncate in block of block_size
    self.examples.append(tokenizer.build_inputs_with_special_tokens(tokenized_text[i : i + block_size]))
```
Results in double [CLS] tokens at the beginning, since special tokens are being added both at `tokenizer.convert_tokens_to_ids` and at `tokenizer.build_inputs_with_special_tokens`. Somehow, double [SEP] tokens are not occurring.
The following eliminates the double [CLS] token.
```
tokenized_text = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(text)[1:-2])
for i in range(0, len(tokenized_text) - block_size + 1, block_size): # Truncate in block of block_size
    self.examples.append(tokenizer.build_inputs_with_special_tokens(tokenized_text[i : i + block_size]))
``` | 04-07-2020 01:02:58 | 04-07-2020 01:02:58 | Couldn't we achieve the same result by specifying `add_special_tokens=False` instead of that? This isn't robust to different models, as GPT-2 (which doesn't have special tokens) would get some tokens removed.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Closing this as `run_language_modeling.py` is now based on the trainer. Thanks for your contribution!! |
transformers | 3,664 | closed | Unable to serialize/save TF2.0 RobertaSequenceClassification model to saved model format | # 🐛 Bug
I am getting an error while trying to serialize/save TF2.0 RobertaSequenceClassification Keras model to saved model format. I do not see this issue with Bert or Albert model architecture. Please see below for my test script that can be used to reproduce this issue.
## Information
Model I am using (Bert, XLNet ...): Roberta
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
```python
import tensorflow as tf
from transformers import *
tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
model = TFRobertaForSequenceClassification.from_pretrained('roberta-base')
##########Uncomment the following 2 lines for testing with BERT ############
#tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
#model = TFBertForSequenceClassification.from_pretrained('bert-base-uncased')
input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True))[None, :]
outputs = model(input_ids)
logits = outputs[0]
tf_saved_model_path= "/tmp/saved_model/"
tf.keras.models.save_model(model, tf_saved_model_path, overwrite=True, include_optimizer=False, save_format='tf')
```
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
I need to export/serialize a TF Keras model to TF saved model format
## To reproduce
Steps to reproduce the behavior:
1. Run the script pasted above to reproduce the issue with Roberta
2. Uncomment the 2 lines as mentioned in the script for using Bert (no error seen with Bert)
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
***Stack Trace for Roberta***
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-5-87e63ee0b3ac> in <module>
15
16 tf_saved_model_path= "/tmp/saved_model/"
---> 17 tf.keras.models.save_model(model, tf_saved_model_path, overwrite=True, include_optimizer=False, save_format='tf')
~/huggingface/transformers/env/lib/python3.6/site-packages/tensorflow/python/keras/saving/save.py in save_model(model, filepath, overwrite, include_optimizer, save_format, signatures, options)
136 else:
137 saved_model_save.save(model, filepath, overwrite, include_optimizer,
--> 138 signatures, options)
139
140
~/huggingface/transformers/env/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/save.py in save(model, filepath, overwrite, include_optimizer, signatures, options)
76 # we use the default replica context here.
77 with distribution_strategy_context._get_default_replica_context(): # pylint: disable=protected-access
---> 78 save_lib.save(model, filepath, signatures, options)
79
80 if not include_optimizer:
~/huggingface/transformers/env/lib/python3.6/site-packages/tensorflow/python/saved_model/save.py in save(obj, export_dir, signatures, options)
949
950 _, exported_graph, object_saver, asset_info = _build_meta_graph(
--> 951 obj, export_dir, signatures, options, meta_graph_def)
952 saved_model.saved_model_schema_version = constants.SAVED_MODEL_SCHEMA_VERSION
953
~/huggingface/transformers/env/lib/python3.6/site-packages/tensorflow/python/saved_model/save.py in _build_meta_graph(obj, export_dir, signatures, options, meta_graph_def)
1035
1036 object_graph_proto = _serialize_object_graph(saveable_view,
-> 1037 asset_info.asset_index)
1038 meta_graph_def.object_graph_def.CopyFrom(object_graph_proto)
1039
~/huggingface/transformers/env/lib/python3.6/site-packages/tensorflow/python/saved_model/save.py in _serialize_object_graph(saveable_view, asset_file_def_index)
695 for obj, obj_proto in zip(saveable_view.nodes, proto.nodes):
696 _write_object_proto(obj, obj_proto, asset_file_def_index,
--> 697 saveable_view.function_name_map)
698 return proto
699
~/huggingface/transformers/env/lib/python3.6/site-packages/tensorflow/python/saved_model/save.py in _write_object_proto(obj, proto, asset_file_def_index, function_name_map)
735 version=versions_pb2.VersionDef(
736 producer=1, min_consumer=1, bad_consumers=[]),
--> 737 metadata=obj._tracking_metadata)
738 # pylint:enable=protected-access
739 proto.user_object.CopyFrom(registered_type_proto)
~/huggingface/transformers/env/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py in _tracking_metadata(self)
2727 @property
2728 def _tracking_metadata(self):
-> 2729 return self._trackable_saved_model_saver.tracking_metadata
2730
2731 def _list_extra_dependencies_for_serialization(self, serialization_cache):
~/huggingface/transformers/env/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/base_serialization.py in tracking_metadata(self)
52 # TODO(kathywu): check that serialized JSON can be loaded (e.g., if an
53 # object is in the python property)
---> 54 return json_utils.Encoder().encode(self.python_properties)
55
56 def list_extra_dependencies_for_serialization(self, serialization_cache):
~/huggingface/transformers/env/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/json_utils.py in encode(self, obj)
42
43 def encode(self, obj):
---> 44 return super(Encoder, self).encode(_encode_tuple(obj))
45
46
/usr/local/opt/pyenv/versions/3.6.7/lib/python3.6/json/encoder.py in encode(self, o)
197 # exceptions aren't as detailed. The list call should be roughly
198 # equivalent to the PySequence_Fast that ''.join() would do.
--> 199 chunks = self.iterencode(o, _one_shot=True)
200 if not isinstance(chunks, (list, tuple)):
201 chunks = list(chunks)
/usr/local/opt/pyenv/versions/3.6.7/lib/python3.6/json/encoder.py in iterencode(self, o, _one_shot)
255 self.key_separator, self.item_separator, self.sort_keys,
256 self.skipkeys, _one_shot)
--> 257 return _iterencode(o, 0)
258
259 def _make_iterencode(markers, _default, _encoder, _indent, _floatstr,
~/huggingface/transformers/env/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/json_utils.py in default(self, obj)
39 items = obj.as_list() if obj.rank is not None else None
40 return {'class_name': 'TensorShape', 'items': items}
---> 41 return serialization.get_json_type(obj)
42
43 def encode(self, obj):
~/huggingface/transformers/env/lib/python3.6/site-packages/tensorflow/python/util/serialization.py in get_json_type(obj)
74 return obj.__wrapped__
75
---> 76 raise TypeError('Not JSON Serializable:', obj)
TypeError: ('Not JSON Serializable:', RobertaConfig {
"_num_labels": 2,
"architectures": [
"RobertaForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bad_words_ids": null,
"bos_token_id": 0,
"decoder_start_token_id": null,
"do_sample": false,
"early_stopping": false,
"eos_token_id": 2,
"finetuning_task": null,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"initializer_range": 0.02,
"intermediate_size": 3072,
"is_decoder": false,
"is_encoder_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"layer_norm_eps": 1e-05,
"length_penalty": 1.0,
"max_length": 20,
"max_position_embeddings": 514,
"min_length": 0,
"model_type": "roberta",
"no_repeat_ngram_size": 0,
"num_attention_heads": 12,
"num_beams": 1,
"num_hidden_layers": 12,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"output_past": true,
"pad_token_id": 1,
"prefix": null,
"pruned_heads": {},
"repetition_penalty": 1.0,
"task_specific_params": null,
"temperature": 1.0,
"top_k": 50,
"top_p": 1.0,
"torchscript": false,
"type_vocab_size": 1,
"use_bfloat16": false,
"vocab_size": 50265
}
)
## Expected behavior
There should be no error when saving/serializing the TF Keras Model for Roberta. I do not see any error with Bert or Albert.
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.7.0
- Platform: Darwin-19.2.0-x86_64-i386-64bit
- Python version: 3.6.7
- PyTorch version (GPU?): 1.4.0 (False)
- Tensorflow version (GPU?): 2.2.0-rc1 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
I also see the same issue with TF 2.1.0.
| 04-07-2020 00:18:36 | 04-07-2020 00:18:36 | do you solve it ?<|||||>> do you solve it ?
Not yet. Let me know if you are able to find the fix.<|||||>yes, I have the same issue. I inspected the tensorboard graph and there is no config operation or anything of that sort. I also tried to save it with a manually defined signature, which didn't work either.
Workaround for now is to use the `save_pretrained`. Is there a way to convert pretrained to TF2.0 saved_model?<|||||>We also see a similar issue in transformers 2.9.1.
Also curious if people have a workaround or solution to use with TF model serving?<|||||>FYI: The cause for this issue is documented in #4709. The current workaround/fix is to remove `config` from the call to this function:
https://github.com/huggingface/transformers/blob/d6a677b14bcfd56b22fafeb212a27c6068886e07/src/transformers/modeling_tf_roberta.py#L331
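A rough illustration of why that helps (a hypothetical layer, not the actual library code):

```python
import tensorflow as tf

class ClassificationHead(tf.keras.layers.Layer):
    def __init__(self, config, **kwargs):
        # Passing `config` positionally binds it to the first parameter of
        # tf.keras.layers.Layer.__init__, which is `trainable`. The config object is
        # not JSON serializable, so saving in the SavedModel format fails later on.
        super().__init__(config, **kwargs)
        # Workaround: keep `config` out of the base-class call.
        # super().__init__(**kwargs)
        self.config = config
```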
This prevents `trainable` from being set to `config` in the initialization function of `tf.keras.layers.Layer`. Then the model will be serialized correctly instead of failing to serialize the `trainable` value later.<|||||>Hello!
The saving in the SavedModel format is not implemented yet, but it is planned :) I will reply here once there is something to share. Sorry for the inconvenience.<|||||>I think this issue can be closed now due to PR #4884.
The sample code in the issue runs successfully in `master`.
<img width="1010" alt="Screenshot 2020-06-10 at 12 09 12" src="https://user-images.githubusercontent.com/5602332/84255742-5363d000-ab13-11ea-822c-72da88399995.png">
<|||||>Great, thanks for solving the issue @harkous <|||||>@harkous Thanks for the work.
I still have the author's issue even with your code, and I upgraded to the latest version of transformers with `pip install transformers --upgrade`. Is it still working for you?<|||||>You have to install transformers from the master branch. The fix has not been released yet.<|||||>Hello !
It seems that I have a similar issue with a model based on Camembert when trying to save my model with:
`model.save("model", save_format='tf')`
It gives me:
```
TypeError: ('Not JSON Serializable:', CamembertConfig {
"architectures": [
"CamembertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 5,
"eos_token_id": 6,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-05,
"max_position_embeddings": 514,
"model_type": "camembert",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"output_past": true,
"pad_token_id": 1,
"type_vocab_size": 1,
"vocab_size": 32005
}
)
```
At first with transformers 2.11.0 but also after upgrading to 3.3 (with TF 2.3)
I can give a code snippet to reproduce if necessary and Custom Model construction can be found here : https://github.com/MAIF/melusine/blob/master/melusine/models/neural_architectures.py#L312
**Complete Stack Trace**
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-32-ae508742561b> in <module>
----> 1 model.model.save("test4",save_format='tf')
~/.conda/envs/emails_maif_vie/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py in save(self, filepath, overwrite, include_optimizer, save_format, signatures, options)
1977 """
1978 save.save_model(self, filepath, overwrite, include_optimizer, save_format,
-> 1979 signatures, options)
1980
1981 def save_weights(self,
~/.conda/envs/emails_maif_vie/lib/python3.6/site-packages/tensorflow/python/keras/saving/save.py in save_model(model, filepath, overwrite, include_optimizer, save_format, signatures, options)
132 else:
133 saved_model_save.save(model, filepath, overwrite, include_optimizer,
--> 134 signatures, options)
135
136
~/.conda/envs/emails_maif_vie/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/save.py in save(model, filepath, overwrite, include_optimizer, signatures, options)
78 # we use the default replica context here.
79 with distribution_strategy_context._get_default_replica_context(): # pylint: disable=protected-access
---> 80 save_lib.save(model, filepath, signatures, options)
81
82 if not include_optimizer:
~/.conda/envs/emails_maif_vie/lib/python3.6/site-packages/tensorflow/python/saved_model/save.py in save(obj, export_dir, signatures, options)
974
975 _, exported_graph, object_saver, asset_info = _build_meta_graph(
--> 976 obj, export_dir, signatures, options, meta_graph_def)
977 saved_model.saved_model_schema_version = constants.SAVED_MODEL_SCHEMA_VERSION
978
~/.conda/envs/emails_maif_vie/lib/python3.6/site-packages/tensorflow/python/saved_model/save.py in _build_meta_graph(obj, export_dir, signatures, options, meta_graph_def)
1074
1075 object_graph_proto = _serialize_object_graph(saveable_view,
-> 1076 asset_info.asset_index)
1077 meta_graph_def.object_graph_def.CopyFrom(object_graph_proto)
1078
~/.conda/envs/emails_maif_vie/lib/python3.6/site-packages/tensorflow/python/saved_model/save.py in _serialize_object_graph(saveable_view, asset_file_def_index)
719 for obj, obj_proto in zip(saveable_view.nodes, proto.nodes):
720 _write_object_proto(obj, obj_proto, asset_file_def_index,
--> 721 saveable_view.function_name_map)
722 return proto
723
~/.conda/envs/emails_maif_vie/lib/python3.6/site-packages/tensorflow/python/saved_model/save.py in _write_object_proto(obj, proto, asset_file_def_index, function_name_map)
759 version=versions_pb2.VersionDef(
760 producer=1, min_consumer=1, bad_consumers=[]),
--> 761 metadata=obj._tracking_metadata)
762 # pylint:enable=protected-access
763 proto.user_object.CopyFrom(registered_type_proto)
~/.conda/envs/emails_maif_vie/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py in _tracking_metadata(self)
3009 @property
3010 def _tracking_metadata(self):
-> 3011 return self._trackable_saved_model_saver.tracking_metadata
3012
3013 def _list_extra_dependencies_for_serialization(self, serialization_cache):
~/.conda/envs/emails_maif_vie/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/base_serialization.py in tracking_metadata(self)
52 # TODO(kathywu): check that serialized JSON can be loaded (e.g., if an
53 # object is in the python property)
---> 54 return json_utils.Encoder().encode(self.python_properties)
55
56 def list_extra_dependencies_for_serialization(self, serialization_cache):
~/.conda/envs/emails_maif_vie/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/json_utils.py in encode(self, obj)
42
43 def encode(self, obj):
---> 44 return super(Encoder, self).encode(_encode_tuple(obj))
45
46
~/.conda/envs/emails_maif_vie/lib/python3.6/json/encoder.py in encode(self, o)
197 # exceptions aren't as detailed. The list call should be roughly
198 # equivalent to the PySequence_Fast that ''.join() would do.
--> 199 chunks = self.iterencode(o, _one_shot=True)
200 if not isinstance(chunks, (list, tuple)):
201 chunks = list(chunks)
~/.conda/envs/emails_maif_vie/lib/python3.6/json/encoder.py in iterencode(self, o, _one_shot)
255 self.key_separator, self.item_separator, self.sort_keys,
256 self.skipkeys, _one_shot)
--> 257 return _iterencode(o, 0)
258
259 def _make_iterencode(markers, _default, _encoder, _indent, _floatstr,
~/.conda/envs/emails_maif_vie/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/json_utils.py in default(self, obj)
39 items = obj.as_list() if obj.rank is not None else None
40 return {'class_name': 'TensorShape', 'items': items}
---> 41 return serialization.get_json_type(obj)
42
43 def encode(self, obj):
~/.conda/envs/emails_maif_vie/lib/python3.6/site-packages/tensorflow/python/util/serialization.py in get_json_type(obj)
70 return obj.__wrapped__
71
---> 72 raise TypeError('Not JSON Serializable:', obj)
TypeError: ('Not JSON Serializable:', CamembertConfig {
"architectures": [
"CamembertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 5,
"eos_token_id": 6,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-05,
"max_position_embeddings": 514,
"model_type": "camembert",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"output_past": true,
"pad_token_id": 1,
"type_vocab_size": 1,
"vocab_size": 32005
}
)
```<|||||>Please open another issue with a code snippet to make us able to reproduce your problem. |
transformers | 3,663 | closed | Speedup torch summarization tests | Speedup torch summarization tests by using small models that are faster to download and instantiate. | 04-06-2020 21:32:54 | 04-06-2020 21:32:54 | Non slow test speed before change:

<|||||>There is a tiny TF T5 model now as well via:
`model = TFAutoModelWithLMHead.from_pretrained("patrickvonplaten/t5-tiny-random")`<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3663?src=pr&el=h1) Report
> Merging [#3663](https://codecov.io/gh/huggingface/transformers/pull/3663?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0a9d09b42a9c7c1ccc00da48486a1188078e8594&el=desc) will **decrease** coverage by `0.01%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3663?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3663 +/- ##
==========================================
- Coverage 78.03% 78.02% -0.02%
==========================================
Files 104 104
Lines 17708 17708
==========================================
- Hits 13819 13817 -2
- Misses 3889 3891 +2
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3663?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3663/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.96% <0.00%> (-0.17%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3663/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.97% <0.00%> (-0.13%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3663?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3663?src=pr&el=footer). Last update [0a9d09b...4751188](https://codecov.io/gh/huggingface/transformers/pull/3663?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>LGTM, but FYI for context, I think the reason we were using the real models was that we intended to do integration testing: in `_test_mono_column_pipeline` we only test equality of keys but we could have tested equality (or closeness) of values.<|||||>Makes sense @julien-c . I'd be happy to add some `@slow` integration tests and try to fulfill the original intent |
transformers | 3,662 | closed | Create model card for NLP4H/ms_bert | 04-06-2020 19:20:06 | 04-06-2020 19:20:06 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3662?src=pr&el=h1) Report
> Merging [#3662](https://codecov.io/gh/huggingface/transformers/pull/3662?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/261c4ff4e297e919ba993e1214a805e988bc9e79&el=desc) will **decrease** coverage by `0.04%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3662?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3662 +/- ##
==========================================
- Coverage 78.29% 78.25% -0.05%
==========================================
Files 104 104
Lines 17628 17628
==========================================
- Hits 13802 13794 -8
- Misses 3826 3834 +8
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3662?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3662/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.65% <0.00%> (-0.84%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3662/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.12% <0.00%> (-0.17%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3662/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.10% <0.00%> (-0.13%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3662?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3662?src=pr&el=footer). Last update [261c4ff...fccd0c7](https://codecov.io/gh/huggingface/transformers/pull/3662?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>@MichalMalyska This is super interesting, thanks for sharing
[**Model page**](https://huggingface.co/NLP4H/ms_bert) |
|
transformers | 3,661 | closed | fixed TransfoXLLMHeadModel documentation | 04-06-2020 17:59:49 | 04-06-2020 17:59:49 | ||
transformers | 3,660 | closed | Exception: process 0 terminated with signal SIGKILL | # ❓ Questions & Help
I was using this notebook: https://www.kaggle.com/theoviel/bert-pytorch-huggingface-with-tpu-multiprocessing
to fine-tune Hugging Face's XLM-RoBERTa base model on Jigsaw Multilingual (an ongoing Kaggle competition).
This is my first time with torch_xla and TPU multiprocessing!
The code I am running is exactly this one: https://pastebin.com/fS94MKYc, on a Kaggle kernel which provides a TPU v3-8.
But even with batch_size = 8, my Jupyter notebook crashes with this error message: **Your notebook tried to allocate more memory than is available. It has restarted.**
Meanwhile, I can see other people using the same model with batch_size = 64.
The full error message looks like this:
```
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
<timed exec> in <module>
/opt/conda/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py in spawn(fn, args, nprocs, join, daemon, start_method)
180 join=join,
181 daemon=daemon,
--> 182 start_method=start_method)
/opt/conda/lib/python3.6/site-packages/torch/multiprocessing/spawn.py in start_processes(fn, args, nprocs, join, daemon, start_method)
156
157 # Loop on join until it returns True or raises an exception.
--> 158 while not context.join():
159 pass
160
/opt/conda/lib/python3.6/site-packages/torch/multiprocessing/spawn.py in join(self, timeout)
106 raise Exception(
107 "process %d terminated with signal %s" %
--> 108 (error_index, name)
109 )
110 else:
Exception: process 0 terminated with signal SIGKILL
```
The same problem also occurs when I try Hugging Face's multilingual BERT base model.
so i am not understanding exactly where in my code i need to make change so that it can work? it seems like the problem is not with the batch size but something else that i am unable to catch.please help,thanks in advance | 04-06-2020 16:48:43 | 04-06-2020 16:48:43 | Did you find the solution to the problem?<|||||>@jhashekhar it seems like pytorch xla has some memory issue itself, xla team is working on it, tf tpu is much better at this moment so i am not using pytorch tpu anymore,probably later this year torch xla team will solve all the performance issues they are having at this moment,until then i recommend tf tpu or if you need gpu then pytorch gpu.just my recommendation<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Had this come up when parallel training on GPUS with multiprocessing - thoughts on a solution?<|||||>Take a look at this issue: https://github.com/pytorch/xla/issues/1870#issuecomment-612217012
It's pretty long, but they helped me solve this problem last year. I got the model working, but ended up using TF.<|||||>> pytorch/xla#1870 (comment)
do you have a summary of what we need to do for solving this for pytorch?<|||||>@brando90 please use bf16 and follow this simple tutorial of mine for better understanding : https://www.kaggle.com/mobassir/faster-pytorch-tpu-baseline-for-cld-cv-0-9
Don't forget to reduce the batch size, image size, etc. so that everything fits in XLA memory.
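For reference, enabling bfloat16 with torch_xla is just an environment variable, set before the TPU processes are spawned:

```python
import os
os.environ["XLA_USE_BF16"] = "1"  # keep tensors in bfloat16 on the TPU to cut memory use
```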
I think it will help you to solve your oom error,thanks<|||||>> Had this come up when parallel training on GPUS with multiprocessing - thoughts on a solution?
I got this problem. Did you solve it? |
transformers | 3,659 | closed | Add model card | 04-06-2020 16:31:03 | 04-06-2020 16:31:03 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3659?src=pr&el=h1) Report
> Merging [#3659](https://codecov.io/gh/huggingface/transformers/pull/3659?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/39a34cc375ed79d18888464289b83713fc20f7d4&el=desc) will **not change** coverage by `%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3659?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3659 +/- ##
=======================================
Coverage 78.29% 78.29%
=======================================
Files 104 104
Lines 17628 17628
=======================================
Hits 13801 13801
Misses 3827 3827
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3659?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3659/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.28% <0.00%> (ø)` | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3659?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3659?src=pr&el=footer). Last update [39a34cc...8d1de79](https://codecov.io/gh/huggingface/transformers/pull/3659?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 3,658 | closed | Add model card | 04-06-2020 16:24:02 | 04-06-2020 16:24:02 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3658?src=pr&el=h1) Report
> Merging [#3658](https://codecov.io/gh/huggingface/transformers/pull/3658?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/39a34cc375ed79d18888464289b83713fc20f7d4&el=desc) will **decrease** coverage by `0.05%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3658?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3658 +/- ##
==========================================
- Coverage 78.29% 78.23% -0.06%
==========================================
Files 104 104
Lines 17628 17628
==========================================
- Hits 13801 13791 -10
- Misses 3827 3837 +10
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3658?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3658/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.65% <0.00%> (-0.84%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3658/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.63% <0.00%> (-0.66%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3658?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3658?src=pr&el=footer). Last update [39a34cc...f390cb4](https://codecov.io/gh/huggingface/transformers/pull/3658?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 3,657 | closed | Weird summarization results - the summary is longer than the input | # 🐛 Bug
## Information
The summarization task is returning unexpected results. For an input of
> "We have a telephony partner who is very interested in this program and may be able to help identify pilot customers."
The result is
> [{'summary_text': '"We have a telephony partner who is very interested in this program and may be able to help identify pilot customers," the company says. "We are looking at a number of different ways to get people talking to each other," it adds. "It\'s a very exciting time for us," says the company\'s chief operating officer.'}]
Model I am using (Bert, XLNet ...): Summarization pipeline
Language I am using the model on (English, Chinese ...): Eng
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [V ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [V ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Execute below script
```python
!pip install -q transformers --upgrade
from transformers import pipeline
summarizer = pipeline(task="summarization")
data = "We have a telephony partner who is very interested in this program and may be able to help identify pilot customers."
print(summarizer(data))
```
## Expected behavior
Would expect the summary to 1) not add contextual information that doesn't exist, and 2) to not be longer than the input.
Arguably the input is short but still...
## Environment info
Colab
| 04-06-2020 16:23:05 | 04-06-2020 16:23:05 | You can pass `summarizer(data, min_length=10, max_length=20)` to get a summary whose length is between 10 and 20 tokens. By default, summaries will be between 56 and 142 tokens. <|||||>Thanks @sshleifer, interestingly now by having a max_length the summary is just arbitrarily cut, which is not great either. Is there a way to constrain the summary length and actually preserve the sense?
> [{'summary_text': '"We have a telephony partner who is very interested in this program and may be'}]<|||||>The logic of the program is "generate the most likely summary" of between `min_length` and `max_length`. So it's not programmed to cut the summary in a rules based way.
With that in mind, I've also seen poor results summarizing documents that are very different than the finetuning distribution (news articles of ~1024 tokens).
You *might* get better results with `summarizer = pipeline(task="summarization", model='bart-large-xsum')` .<|||||>> The logic of the program is "generate the most likely summary" of between min_length and max_length. So it's not programmed to cut the summary in a rules based way.
Thanks for confirming - seems to be the right approach :)!
> You might get better results with summarizer = pipeline(task="summarization", model='bart-large-xsum') .
Ok, will give it a try then!
> With that in mind, I've also seen poor results summarizing documents that are very different than the finetuning distribution (news articles of ~1024 tokens).
So you want to keep it open as a bug or should we close?
As a side request, it would be awesome to have metrics associated with each models that are part of transformers to help users choose the right one for their job (cc: @julien-c ).
<|||||>Hi @sshleifer Can we increase token length beyond 1024 for generating a summary.
I got the following message while generating a summary of the 20000-word document.
`Your max_length is set to 1300, but you input_length is only 1024. You might consider decreasing max_length manually, e.g. summarizer('...', max_length=50)`
<|||||>Unfortunately, Bart can only process 1024 tokens at once, so your best best would be to split your doc into chunks, summarize each one, and concatenate the summaries.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
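For readers landing here later, a rough sketch of that chunking approach (`long_document` is a placeholder for your own text, and the whitespace split is only approximate):

```python
from transformers import pipeline

summarizer = pipeline("summarization")
words = long_document.split()
chunks = [" ".join(words[i : i + 600]) for i in range(0, len(words), 600)]  # ~600 words stays under 1024 tokens
summary = " ".join(
    summarizer(chunk, min_length=20, max_length=100)[0]["summary_text"] for chunk in chunks
)
```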
|
transformers | 3,656 | closed | Add model card | 04-06-2020 16:11:37 | 04-06-2020 16:11:37 | ||
transformers | 3,655 | closed | Add model card | 04-06-2020 16:08:19 | 04-06-2020 16:08:19 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3655?src=pr&el=h1) Report
> Merging [#3655](https://codecov.io/gh/huggingface/transformers/pull/3655?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/39a34cc375ed79d18888464289b83713fc20f7d4&el=desc) will **increase** coverage by `0.00%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3655?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3655 +/- ##
=======================================
Coverage 78.29% 78.29%
=======================================
Files 104 104
Lines 17628 17628
=======================================
+ Hits 13801 13802 +1
+ Misses 3827 3826 -1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3655?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3655/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.28% <0.00%> (ø)` | |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3655/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.23% <0.00%> (+0.12%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3655?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3655?src=pr&el=footer). Last update [39a34cc...0ee8e8c](https://codecov.io/gh/huggingface/transformers/pull/3655?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 3,654 | closed | Create model card | 04-06-2020 15:57:19 | 04-06-2020 15:57:19 | ||
transformers | 3,653 | closed | Create model card | 04-06-2020 15:44:22 | 04-06-2020 15:44:22 | ||
transformers | 3,652 | closed | Create README.md for ktrapeznikov/biobert_v1.1_pubmed_squad_v2 | 04-06-2020 14:00:49 | 04-06-2020 14:00:49 | [**Model page**](https://huggingface.co/ktrapeznikov/biobert_v1.1_pubmed_squad_v2) |
|
transformers | 3,651 | closed | ❓Adding new tokens to pre-trained tokenizer | ## Details
Hi, I am working with DistilBERT multilingual model for sequence classification tasks where I need to add some additional languages apart from mentioned [here](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages). And for that, I am struggling to find the correct way to update tokenizer. From the documentation, I inferred that first, i have to get all new tokens in a list, call `tokenizer.add_tokens()` and then again i have to pass those new sentences to tokenizer and get them tokenized. So the real question: is there any method which i use to update tokenizer and tokenize sentence at the same time (when tokenizer sees unknown token it adds the token to the dictionary). Thanks in advance. | 04-06-2020 14:00:11 | 04-06-2020 14:00:11 | docs are pretty nice imho;<|||||>If I had found what I looking for in the documentation then why would I open an issue and waste someone else's time? I know other approaches to this problem but one seemed to be more time saving so just checking for implementation available. But now it seems that people are more concerned about critical bugs only in the issue section. Closing the issue for good.<|||||>There is no way to dynamically add unknown tokens to the vocabulary. The simplest way to do it would be to encode the sequence, detect unknowns, and then add these to the vocabulary, which seems to be what you did!
Please be aware that you will have to resize the model's embedding matrix according to the tokens you've added.<|||||>Or you can map all such tokens (or a group of them) to an OOV-style token as well.<|||||>So the only way to update the tokenizer is to get all the unknowns first and then resize the model's embedding matrix. Thanks @LysandreJik and @AdityaSoni19031997
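For anyone finding this thread later, a minimal sketch of that workflow (the example sentence and the whitespace split are placeholders):

```python
from transformers import DistilBertForSequenceClassification, DistilBertTokenizer

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-multilingual-cased")
model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-multilingual-cased")

sentence = "a sentence in a language the tokenizer has not seen"
new_tokens = [w for w in sentence.split() if tokenizer.unk_token in tokenizer.tokenize(w)]
if tokenizer.add_tokens(new_tokens) > 0:
    model.resize_token_embeddings(len(tokenizer))  # grow the embedding matrix to match the new vocab
```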
transformers | 3,650 | closed | How can I judge whether a word is in the dictionary? | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
TensorFlow enthusiasts can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
Will different words get the same ids from the tokenizer?
I ask because I just ran into this situation, and it looks like this:
candidate is ['charge', 'greet', 'treat', 'reward']
and candidate_ids is [10813, 1, 13581, 1]
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**: | 04-06-2020 13:07:45 | 04-06-2020 13:07:45 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
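A note for later readers: repeated ids like the `1`s above usually mean the tokenizer mapped those words to its unknown token, which you can check directly (a small sketch with a placeholder model name):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
for word in ["charge", "greet", "treat", "reward"]:
    token_id = tokenizer.convert_tokens_to_ids(word)
    status = "in vocab" if token_id != tokenizer.unk_token_id else f"maps to {tokenizer.unk_token}"
    print(word, token_id, status)
```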
|
transformers | 3,649 | closed | Add model card for BERTeus | This PR includes the model card for the BERTeus model which has been recently uploaded to the huggingface repository. | 04-06-2020 11:22:03 | 04-06-2020 11:22:03 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3649?src=pr&el=h1) Report
> Merging [#3649](https://codecov.io/gh/huggingface/transformers/pull/3649?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2ee410560e45ae3c619dc1e0b0fc4d257c48e18a&el=desc) will **increase** coverage by `0.00%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3649?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3649 +/- ##
=======================================
Coverage 78.28% 78.29%
=======================================
Files 104 104
Lines 17628 17628
=======================================
+ Hits 13800 13801 +1
+ Misses 3828 3827 -1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3649?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3649/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.28% <0.00%> (+0.16%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3649?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3649?src=pr&el=footer). Last update [2ee4105...5b63e6a](https://codecov.io/gh/huggingface/transformers/pull/3649?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Awesome, thanks for sharing – also cc @joeddav
[**Model page**](https://huggingface.co/ixa-ehu/berteus-base-cased) |
transformers | 3,648 | closed | Chatbot QnA feature for given text corpus | Is there a way we can have a chatbot specifically which can answer questions on a given text corpus?
Is there any transformer model which we can train for this and how? | 04-06-2020 10:26:41 | 04-06-2020 10:26:41 | You might want to check this, It has a very nice example. [https://huggingface.co/transformers/model_doc/bert.html#bertforquestionanswering](https://huggingface.co/transformers/model_doc/bert.html#bertforquestionanswering) <|||||>> You might want to check this, It has a very nice example. https://huggingface.co/transformers/model_doc/bert.html#bertforquestionanswering
Well, I've known about this solution for quite some time; it gives outputs of start and end logits.
But I'd like to know if there is any implementation where I can just feed in a corpus of text (say an article or essay), ask questions, and get some relevant output.
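For reference, the question-answering pipeline already wraps those start/end logits into a text span, so it can be pointed at an article or essay directly — a sketch, with `essay_text` standing in for your own corpus:

```python
from transformers import pipeline

qa = pipeline("question-answering")
result = qa(question="What is the main argument of the essay?", context=essay_text)
print(result["answer"], result["score"])
```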
transformers | 3,647 | closed | Bertabs metrics lower than paper | I tested abstractive summarization pre-trained model using the source under transformers/examples/summarization/bertabs/...
My dataset are CNN & Daily mail, which are 30 thousands of docs.
However, the result of rouge score is as follows
****** ROUGE SCORES ******
** ROUGE 1
F1 >> 0.275
Precision >> 0.299
Recall >> 0.260
** ROUGE 2
F1 >> 0.161
Precision >> 0.184
Recall >> 0.149
** ROUGE L
F1 >> 0.305
Precision >> 0.326
Recall >> 0.290
Why is the result different from that of the paper, Text Summarization with Pretrained Encoders?
| 04-06-2020 10:12:47 | 04-06-2020 10:12:47 | bump. i have the same question. paper says 5 BeamSize and better accuracy for bert-base. <|||||>same here<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,646 | closed | Allow token regression in ForTokenClassification models | # 🚀 Feature request
Current `ForTokenClassification` class implementation for all models does not support regression. I propose to adapt the current implementation for all the `ForTokenClassification` models in order to enable out-of-the-box token regression when `num_labels == 1`, similarly to what is currently available in the `ForSentenceClassification` models.
Concretely, this would mean converting this (taken from `AlbertForTokenClassification` as an example, line 873 in `modeling_albert.py`):
```python
if labels is not None:
loss_fct = CrossEntropyLoss()
# Only keep active parts of the loss
if attention_mask is not None:
active_loss = attention_mask.view(-1) == 1
active_logits = logits.view(-1, self.num_labels)[active_loss]
active_labels = labels.view(-1)[active_loss]
loss = loss_fct(active_logits, active_labels)
else:
loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
outputs = (loss,) + outputs
return outputs # (loss), logits, (hidden_states), (attentions)
```
into something like this:
```python
if labels is not None:
if self.num_labels == 1:
# We are doing regression
loss_fct = MSELoss()
logits_view = logits.view(-1)
else:
# We are doing classification
loss_fct = CrossEntropyLoss()
logits_view = logits.view(-1, self.num_labels)
# Only keep active parts of the loss
if attention_mask is not None:
active_loss = attention_mask.view(-1) == 1
active_logits = logits_view[active_loss]
active_labels = labels.view(-1)[active_loss]
loss = loss_fct(active_logits, active_labels)
else:
loss = loss_fct(logits_view, labels.view(-1))
outputs = (loss,) + outputs
return outputs # (loss), logits, (hidden_states), (attentions)
```
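For clarity, this is how the proposed regression mode could be used from the outside, assuming the change above is applied (random float targets stand in for real eye-tracking values):

```python
import torch
from transformers import AlbertConfig, AlbertForTokenClassification, AlbertTokenizer

config = AlbertConfig.from_pretrained("albert-base-v2", num_labels=1)
model = AlbertForTokenClassification.from_pretrained("albert-base-v2", config=config)
tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")

input_ids = tokenizer.encode("a short example sentence", return_tensors="pt")
labels = torch.rand(input_ids.shape)  # one continuous target per token
loss, logits = model(input_ids, labels=labels)[:2]
```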
## Motivation
I am currently working with token-level regression using multiple transformer models to predict eye-tracking metrics that are commonly considered as proxies for cognitive processing in psycholinguistics (e.g. word reading times, fixation counts, etc.). Given that most of those are continuous metrics, the ability to use `transformers` for token regression would make my work much faster. Moreover, I believe that this functionality can benefit other researchers working with token-level continuous metrics.
## Your contribution
If this feature is regarded as interesting by maintainers, I can submit a PR with the suggested changes applied to all currently supported models.
| 04-06-2020 09:59:22 | 04-06-2020 09:59:22 | I think that's a reasonable feature – Thoughts @LysandreJik @thomwolf?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,645 | closed | ❓ How to run pipeline (summarization) in FP16 mode ? | # ❓ Questions & Help
I couldn't find on the documentation any parameter that allow running a pipeline in FP16 mode. **Did I miss it or it's not a feature yet ?** | 04-06-2020 09:03:59 | 04-06-2020 09:03:59 | @sshleifer - do you have more info on that? <|||||>You can't without editing the code, unfortunately.
<|||||>Running a pipeline in FP16 mode would be really useful for optimizing the GPU RAM usage. Can this be turned into a feature request?
**Edit:** I just found out that the following works:
```
pipeline.model.half()
``` |
transformers | 3,644 | closed | How can I track the performance of my GPT-2 model during finetuning? | Hi,
I am new to using the Hugging Face Transformers library.
I am using [Google Colab](https://colab.research.google.com/github/interactive-fiction-class/interactive-fiction-class.github.io/blob/master/homeworks/language-model/hw4_transformer.ipynb) to fine tune GPT-2 model. Google Colab only displays the output of last 5000 lines. So, I could not be able to figure out the performance of previous checkpoints, whose output vanishes.
I would like to track the performance of my training model during the whole period of time. Is it possible to track it by using **tensorboard**? or is there any other way exist? I have noticed a variable "logging_steps" in [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py) file, but I donot have an idea how can I use it to track performance of training model in Google Colab?
| 04-06-2020 08:25:35 | 04-06-2020 08:25:35 | [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py) allow you to track you training performance (validation perplexity, training loss and learning rate) using tensorboad. You need to have tensorboad installed. And then you can just run `tensorboard --logdir=runs` to follow your training.<|||||>Thanks a lot. It works.
I have used these commands in Google Colab
```
%load_ext tensorboard
%tensorboard --logdir=runs
``` |
transformers | 3,643 | closed | BioMed Roberta-Base (AllenAI) | This PR includes the model card for Biomed-roberta base, which @kyleclo recently uploaded to allenai's huggingface model repository. | 04-06-2020 00:57:23 | 04-06-2020 00:57:23 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3643?src=pr&el=h1) Report
> Merging [#3643](https://codecov.io/gh/huggingface/transformers/pull/3643?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1789c7daf1b8013006b0aef6cb1b8f80573031c5&el=desc) will **increase** coverage by `0.94%`.
> The diff coverage is `n/a`.
```diff
@@ Coverage Diff @@
## master #3643 +/- ##
==========================================
+ Coverage 77.32% 78.26% +0.94%
==========================================
Files 104 104
Lines 17628 17628
==========================================
+ Hits 13630 13796 +166
+ Misses 3998 3832 -166
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3643?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3643/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.54% <0.00%> (+1.34%)` | :arrow_up: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3643/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.77% <0.00%> (+2.29%)` | :arrow_up: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3643/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.24% <0.00%> (+2.63%)` | :arrow_up: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3643/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.71% <0.00%> (+10.00%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3643/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0.00%> (+81.20%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3643?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3643?src=pr&el=footer). Last update [1789c7d...1d980d5](https://codecov.io/gh/huggingface/transformers/pull/3643?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Slightly tweaked and merged! [**model page**](https://huggingface.co/allenai/biomed_roberta_base) |
transformers | 3,642 | closed | Fix roberta checkpoint conversion script | After #2521 and #2958, this script stopped working. We need to set the bias on the new `decoder` Linear directly.
cc @LysandreJik | 04-06-2020 00:51:48 | 04-06-2020 00:51:48 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3642?src=pr&el=h1) Report
> Merging [#3642](https://codecov.io/gh/huggingface/transformers/pull/3642?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1789c7daf1b8013006b0aef6cb1b8f80573031c5&el=desc) will **increase** coverage by `0.97%`.
> The diff coverage is `n/a`.
```diff
@@ Coverage Diff @@
## master #3642 +/- ##
==========================================
+ Coverage 77.32% 78.29% +0.97%
==========================================
Files 104 104
Lines 17628 17628
==========================================
+ Hits 13630 13801 +171
+ Misses 3998 3827 -171
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3642?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3642/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.45% <0.00%> (+0.81%)` | :arrow_up: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3642/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.54% <0.00%> (+1.34%)` | :arrow_up: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3642/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.77% <0.00%> (+2.29%)` | :arrow_up: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3642/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.24% <0.00%> (+2.63%)` | :arrow_up: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3642/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.71% <0.00%> (+10.00%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3642/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0.00%> (+81.20%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3642?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3642?src=pr&el=footer). Last update [1789c7d...bd60e83](https://codecov.io/gh/huggingface/transformers/pull/3642?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Hi @myleott, thanks for looking into this! Indeed the conversion script is failing due to that bias. The checkpoints on the S3 do not need to be re-uploaded, it was only the conversion of the new checkpoints that needed to be updated.
I manually checked that we have the same results between the torch hub models and those hosted on our S3 + we have [integration tests that test just that ](https://github.com/huggingface/transformers/blob/master/tests/test_modeling_roberta.py#L322) :)
Thanks for your fix! |
transformers | 3,641 | closed | Can't evaluate official TensorFlow NER model | # 🐛 Bug: Can't evaluate official TensorFlow NER model
## Information
Model I am using (Bert, XLNet ...): I am using bert-base-multilingual-cased
Language I am using the model on (English, Chinese ...): German
The problem arises when using:
* [X] the official example scripts: I was using the official script for the NER model training.
at this link https://github.com/huggingface/transformers/tree/master/examples/ner
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [X] an official GLUE/SQUaD task: I was training an NER model
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Follow the steps to download the data and run the model for TensorFlow
2. Training the model for 3 epochs
3. On evaluation, the model will fail
4. Then I tried to explicitly call the evaluation of the model. Using this:
python3 run_tf_ner.py --data_dir ~/data --model_type bert --labels ~/data/labels.txt --model_name_or_path $BERT_MODEL --output_dir $OUTPUT_DIR --max_seq_length $MAX_LENGTH --num_train_epochs $NUM_EPOCHS --per_device_train_batch_size $BATCH_SIZE --save_steps $SAVE_STEPS --seed $SEED --do_eval
Here is what I see when calling the evaluation step:
```I0405 20:15:11.758301 140712645343040 modeling_tf_utils.py:388] loading weights file germeval-model/tf_model.h5
2020-04-05 20:15:12.024952: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-04-05 20:15:12.031397: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 2200000000 Hz
2020-04-05 20:15:12.032399: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x3ae8970 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-04-05 20:15:12.032438: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
I0405 20:15:15.697083 140712645343040 run_tf_ner.py:418] Loading features from cached file /home/taras/data/cached_dev_bert-base-multilingual-cased_128.tf_record
Traceback (most recent call last):
File "run_tf_ner.py", line 641, in <module>
app.run(main)
File "/usr/local/lib/python3.7/dist-packages/absl/app.py", line 299, in run
_run_main(main, args)
File "/usr/local/lib/python3.7/dist-packages/absl/app.py", line 250, in _run_main
sys.exit(main(argv))
File "run_tf_ner.py", line 576, in main
args, strategy, model, tokenizer, labels, pad_token_label_id, mode="dev"
File "run_tf_ner.py", line 314, in evaluate
eval_iterator = progress_bar(eval_dataset, total=num_eval_steps, parent=master, display=args["n_device"] > 1)
File "/home/taras/.local/lib/python3.7/site-packages/fastprogress/fastprogress.py", line 226, in __init__
super().__init__(gen, total, display, leave, parent, master)
File "/home/taras/.local/lib/python3.7/site-packages/fastprogress/fastprogress.py", line 24, in __init__
parent.add_child(self)
File "/home/taras/.local/lib/python3.7/site-packages/fastprogress/fastprogress.py", line 264, in add_child
self.child.prefix = f'Epoch {self.main_bar.last_v+1}/{self.main_bar.total} :'
TypeError: unsupported operand type(s) for +: 'NoneType' and 'int'
```
## Expected behavior
Train an NER model and be able to evaluate and predict using the trained weights.
## Environment info
- `transformers` version: 2.7.0
- Platform: Linux-4.19.0-8-cloud-amd64-x86_64-with-debian-10.3
- Python version: 3.7.3
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.2.0-rc2 (False)
- Using GPU in script?: Yes, Tesla K80
- Using distributed or parallel set-up in script?: No
| 04-05-2020 20:34:56 | 04-05-2020 20:34:56 | Another comment, I have looked at the cache that the model outputs, and it has a bunch of question marks for all of the cache files(train,dev,test). That makes sense as it is cached, but still I am thinking that I might be doing something wrong. If someone knows where I might have made a mistake, please let me know.<|||||>This has been fixed in this PR https://github.com/fastai/fastprogress/pull/59
You will have to update fastprogress to the latest build on master for it to work.
This worked for me:
```
pip uninstall fastprogress
pip install git+https://github.com/fastai/fastprogress.git
```
edit: the correct install instruction this time<|||||>Thank you! I will check if it got fixed for me and close the issue.<|||||>@apcode Thanks it worked wonderful! Closing the issue. |
transformers | 3,640 | closed | Wrong Mask LM prediction with BertForMaskedLM | # 📚 Migration
## Information
<!-- Important information -->
Model I am using (Bert, XLNet ...):
Bert, Electra
Language I am using the model on (English, Chinese ...):
English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## Details
### When I transferred to transformers==2.7.0, I found that the LM model failed to predict the correct masked tokens.
I test the **transformer** model on the old LM example of **pytorch_pretrained_bert**:
"Who was Jim Henson ? Jim Henson was a puppeteer"
My test code goes like the following:
```python
# coding: utf-8
import torch
from transformers import BertTokenizer, BertModel, BertForMaskedLM, AutoModel, AutoTokenizer, AutoModelWithLMHead, ElectraModel, ElectraForMaskedLM
MODEL_PATH = 'Resources/bert-base-uncased/uncased_L-12_H-768_A-12/'
VOCAB = MODEL_PATH
print('== tokenizing ===')
tokenizer = BertTokenizer.from_pretrained(VOCAB)
# Tokenized input
text = "Who was Jim Henson ? Jim Henson was a puppeteer"
tokenized_text = tokenizer.tokenize(text)
masked_index = 6
tokenized_text[masked_index] = '[MASK]'
# Convert token to vocabulary indices
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)
segments_ids = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
input_mask = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
# ======== Convert inputs to PyTorch tensors
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
# ======== predict tokens ========
print('== LM predicting ===')
# Load pre-trained model (weights)
model = BertForMaskedLM.from_pretrained(MODEL_PATH)
model.eval()
# Predict all tokens
predictions = model(tokens_tensor, segments_tensors)[0]
# confirm we were able to predict 'henson'
predicted_index = torch.argmax(predictions[0, masked_index]).item()
predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])
print('predicted_token', predicted_token)
```
## Other Details
(1) Such testing code works fine with the previous version of **pytorch_pretrained_bert**.
But now it seems that model predicts a random token.
(2) Random predicting also happened when I load electra model with ElectraForMaskedLM.
## Environment info
- `transformers` version: 2.7.0
- Platform: linux
- Python version: 3.7
- PyTorch version (GPU?): 1.2-gpu
- Tensorflow version (GPU?): no
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
* `pytorch-transformers` or `pytorch-pretrained-bert` version (or branch):
pytorch-pretrained-bert
## Checklist
- [x] I have read the migration guide in the readme.
([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers);
[pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers))
- [x] I checked if a related official extension example runs on my machine.
| 04-05-2020 17:02:12 | 04-05-2020 17:02:12 | This is probably because you're not using special tokens at all. When using BERT you should add the `[CLS]` and `[SEP]` tokens at the appropriate places. Modifying your code to include these generates the correct answer:
```py
# Tokenized input
text = "Who was Jim Henson ? Jim Henson was a puppeteer"
tokenized_text = tokenizer.tokenize(text)
masked_index = 7 # <-- the masked index needs to be offset by 1 because a [CLS] token will be added at the beginning
tokenized_text[masked_index] = '[MASK]'
# Convert token to vocabulary indices
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)
indexed_tokens = tokenizer.build_inputs_with_special_tokens(indexed_tokens) # <-- should add special tokens, this method does it
segments_ids = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1] # <-- modify this to include special tokens
input_mask = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1] # <-- modify this to include special tokens
# ======== Convert inputs to PyTorch tensors
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
# ======== predict tokens ========
print('== LM predicting ===')
# Load pre-trained model (weights)
model = BertForMaskedLM.from_pretrained(MODEL_PATH)
model.eval()
# Predict all tokens
predictions = model(tokens_tensor, segments_tensors)[0]
# confirm we were able to predict 'henson'
predicted_index = torch.argmax(predictions[0, masked_index]).item()
predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])
print('predicted_token', predicted_token)
```
Result:
```
== tokenizing ===
== LM predicting ===
predicted_token ['henson']
```
Please note that there is a much simpler way of doing what you did, by using the `encode` method which automatically manages the special tokens. The `encode_plus` method manages the attention mask and segment IDs as well. Here's the full code using the `encode_plus` method:
```py
import torch
from transformers import BertTokenizer, BertModel, BertForMaskedLM, AutoModel, AutoTokenizer, AutoModelWithLMHead, ElectraModel, ElectraForMaskedLM
MODEL_PATH = 'bert-base-uncased'
VOCAB = MODEL_PATH
print('== tokenizing ===')
tokenizer = BertTokenizer.from_pretrained(VOCAB)
# Tokenized input
text = "Who was Jim Henson ? Jim [MASK] was a puppeteer"
inputs = tokenizer.encode_plus(text, return_tensors="pt")
masked_index = 7
model = BertForMaskedLM.from_pretrained(MODEL_PATH)
model.eval()
print('== LM predicting ===')
# Predict all tokens
predictions = model(**inputs)[0]
# confirm we were able to predict 'henson'
predicted_index = torch.argmax(predictions[0, masked_index]).item()
predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])
print('predicted_token', predicted_token)
```<|||||>Thanks a lot for the answer! I can move on my projects.
It is still kind of weird that the old code works correctly without '[CLS]' and '[SEP]'.
Has some underlying code logic changed?<|||||>It's weird, I agree! This is the correct way to do it, and that's the way it should have been done in the previous versions as well, though. Glad you could get your code working! |
transformers | 3,639 | closed | Summarization pipeline - Couldn't reach server at 'https://s3.amazonaws.com/models.huggingface.co/bert/facebook/bart-large-cnn/modelcard.json' | # 🐛 Bug
## Information
Loading the summarization pipeline results in the assertion below:
> Couldn't reach server at 'https://s3.amazonaws.com/models.huggingface.co/bert/facebook/bart-large-cnn/modelcard.json' to download model card file.
> Creating an empty model card.
The problem arises when using:
* [V] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
`!pip install -q transformers --upgrade
from transformers import pipeline
summarizer = pipeline(task="summarization")`
1. Execute above code
## Expected behavior
## Environment info
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
| 04-05-2020 16:30:15 | 04-05-2020 16:30:15 | @julien-c This is issue has been closed, but I continue to get the same exact error. Can you reopen it? Or do I have to open a new issue?<|||||>Did you update from master.
Can you paste the output of `transformers-cli env`?<|||||>I just upgraded to the latest version:
```
- `transformers` version: 2.8.0
- Platform: Linux-5.3.0-46-generic-x86_64-with-Ubuntu-19.10-eoan
- Python version: 3.7.5
- PyTorch version (GPU?): 1.1.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
```<|||||>The fix is not in a released version yet so you need to install from source. |
transformers | 3,638 | closed | Translation pipeline bug after 398 characters | # 🐛 Bug
The translation pipeline with T5 does not seem to allow translations longer than ~400 characters. It either automatically stops at 398 characters, or, if I play with the min/max_length parameters, it produces gibberish after ~400 characters.
## Information
I am using the translation pipeline (T5)
I tried both:
translator_de = pipeline(task='translation_en_to_de')
translator_fr = pipeline(task='translation_en_to_fr')
I tried the different suggestions in this short twitter discussion, but couldn't get it to work: https://twitter.com/PatrickPlaten/status/1244747294664200193
## To reproduce
Steps to reproduce the behavior:
```Python
translator_de = pipeline(task='translation_en_to_de')
text_en = "The 2019–20 coronavirus pandemic is an ongoing pandemic of coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2).[6] The outbreak was first identified in Wuhan, Hubei, China, in December 2019. The World Health Organization (WHO) declared the outbreak to be a Public Health Emergency of International Concern on 30 January 2020 and recognized it as a pandemic on 11 March.[7][8] As of 30 March 2020, more than 745,000[4] cases of COVID-19 have been reported in over 190 countries and territories, resulting in approximately 35,000[4] deaths. More than 156,500[4] people have since recovered.[5]"
text_trans_de = translator_de(text_en, min_length=len(text_en), early_stopping=False)
text_trans_de[0]['translation_text']
```
Output:
'Zu den Bemühungen, die Ausbreitung des Virus zu verhindern, zählen Reisebeschränkungen, Quarantäne, Sperrzeiten, Arbeitsplatz-Gefahrkontrollen, Verschiebungen und Annullierungen von Veranstaltungen und Anlagenschließungen, darunter die Quarantäne in Hubei, nationale oder regionale Quarantäne in anderen Teilen der Welt, Sperrmaßnahmen in China und Südkorea, verschiedene Grenzschließungen oder Einreise\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad\xad'
## Expected behavior
Ideally, it would allow me to translate text of any length.
## Environment info
- `transformers` version: 2.7.0
- Platform: MacOS Catalina 10.15.3 (19D76)
- Python version: 7.3
- Using GPU in script?: No, CPU
- Using distributed or parallel set-up in script?: No.
| 04-05-2020 14:18:46 | 04-05-2020 14:18:46 | Hi @MoritzLaurer,
I played around with your example a bit and I don't get a good translation either! I think one of the main problems is that T5 was pretrained on a per sentence level - not on whole texts.
Therefore you get quite good results when you split your text into sentences and translate each sentence on its own as follows:
```
from transformers import pipeline
translator_de = pipeline(task='translation_en_to_de')
text_en = "The 2019–20 coronavirus pandemic is an ongoing pandemic of coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2).[6] The outbreak was first identified in Wuhan, Hubei, China, in December 2019. The World Health Organization (WHO) declared the outbreak to be a Public Health Emergency of International Concern on 30 January 2020 and recognized it as a pandemic on 11 March.[7][8] As of 30 March 2020, more than 745,000[4] cases of COVID-19 have been reported in over 190 countries and territories, resulting in approximately 35,000[4] deaths. More than 156,500[4] people have since recovered.[5]"
translation_list = []
text_list = text_en.split('.')
for text in text_list:
translation_list.append(translator_de(text + '.'))
```
I would actually always do this when using T5 on translation and then concatenate the sentences back together afterward. It's very rare that you need to know the previous or next sentence in order to get good translation results.
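As a small illustrative aside (the model name below is an assumption, not taken from this thread): `min_length`/`max_length` count tokenizer tokens rather than characters, which you can check directly:

```python
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
text = "translate English to German: The outbreak was first identified in Wuhan in December 2019."
print(len(text))                    # number of characters
print(len(tokenizer.encode(text)))  # number of tokens, which is what min_length/max_length count
```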
PS:
You have to be careful when using `len(text_en)`: it gives you the number of characters in your string, not the number of words. Also note that `min_length` and `max_length` represent the number of minimal and maximal tokens (which is usually a bit less than the number of words). <|||||>Here are the results I got in German:
```
[[{'translation_text': 'Die Koronavirus-Pandemie 2019–20 ist eine anhaltende Pandemie der Koronavirus-Krankheit 2019 (COVID-19), verursacht durch das schwere akute Atemwegssyndrom Koronavirus 2 (SARS-CoV-2).'}],
[{'translation_text': '[6] Der Ausbruch wurde erstmals im Dezember 2019 in Wuhan, Hubei, China, festgestellt.'}],
[{'translation_text': 'Die Weltgesundheitsorganisation (WHO) hat den Ausbruch am 30. Januar 2020 als öffentlichen Gesundheitsnotstand von internationaler Bedeutung erklärt und ihn am 11. März als Pandemie anerkannt.'}],
[{'translation_text': '[7][8] Zum 30. März 2020 wurden in über 190 Ländern und Gebieten mehr als 745 000 Fälle von COVID-19 gemeldet, was zu etwa 35 000 Todesfällen führte.'}],
[{'translation_text': 'Mehr als 156.500[4] Menschen haben sich seitdem erholt.'}],
[{'translation_text': '[5].'}]]
```
<|||||>Hi @patrickvonplaten,
Great, thank you very much for the response! It makes sense and splitting in sentences seems like a good solution. (also thanks for clarifying that min_length refers to tokens and not characters) |
transformers | 3,637 | closed | [TransfoXL] fix argument order of update_mems fn in TF version | Wrong argument order of function. Thanks @dmytyar ! | 04-05-2020 10:20:47 | 04-05-2020 10:20:47 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3637?src=pr&el=h1) Report
> Merging [#3637](https://codecov.io/gh/huggingface/transformers/pull/3637?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4ab8ab4f50baf391612cbc78cfa3f09b7ad0c3ac&el=desc) will **increase** coverage by `0.94%`.
> The diff coverage is `100.00%`.
```diff
@@ Coverage Diff @@
## master #3637 +/- ##
==========================================
+ Coverage 77.34% 78.29% +0.94%
==========================================
Files 104 104
Lines 17628 17628
==========================================
+ Hits 13634 13801 +167
+ Misses 3994 3827 -167
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3637?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/3637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `89.15% <100.00%> (ø)` | |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.45% <0.00%> (+0.16%)` | :arrow_up: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.54% <0.00%> (+1.34%)` | :arrow_up: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.77% <0.00%> (+2.29%)` | :arrow_up: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.24% <0.00%> (+2.63%)` | :arrow_up: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.71% <0.00%> (+10.00%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0.00%> (+81.20%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3637?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3637?src=pr&el=footer). Last update [4ab8ab4...8db7ebd](https://codecov.io/gh/huggingface/transformers/pull/3637?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,636 | closed | [Docs, T5] Fix TF T5 examples docstring | Update TF T5 docstring - since forgotten to do so in: #3547 | 04-05-2020 10:11:25 | 04-05-2020 10:11:25 | |
transformers | 3,635 | closed | Reinitializing layers in BERT | Hello,
I have a question regarding re-initialising the encoder layers in BERT. What happens if I call the __init__() method of a BERT layer: is the layer re-initialised using the pre-trained BERT weights, or does it get completely new weights?
`model.bert.encoder.layer[0].__init__(config)` | 04-05-2020 09:12:07 | 04-05-2020 09:12:07 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
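For readers landing here, to the best of my understanding: re-running `__init__` on an existing layer rebuilds its sub-modules, so the layer ends up with freshly initialized weights rather than the pre-trained ones. A more conventional way to reset a single layer is to re-apply the model's own weight initializer, roughly as follows (a sketch, not an official recipe):

```python
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained("bert-base-uncased")

# Reset only the first encoder layer to fresh, randomly initialized weights,
# using the same initialization scheme the model applies to untrained parameters.
model.bert.encoder.layer[0].apply(model._init_weights)
```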
|
transformers | 3,634 | closed | Custom collate function that pads only to the longest sequence? | Currenty all input is padded to `max_seq_length` but in most cases the longest sequence in a batch is shorter than that, sometimes by a significant amount. If there is a custom collate function that pads only to the longest sequence in a batch, that will probably save quite some memory and time. Is that feasible? | 04-05-2020 09:10:56 | 04-05-2020 09:10:56 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
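A minimal sketch of what such a collate function could look like (my own example, not a library utility; it assumes each dataset item is a dict of un-padded 1-D `input_ids` / `attention_mask` tensors and that the model's `pad_token_id` is supplied):

```python
import torch
from torch.nn.utils.rnn import pad_sequence

def collate_to_longest(batch, pad_token_id=0):
    # pad only up to the longest sequence in this batch, not to max_seq_length
    input_ids = pad_sequence([item["input_ids"] for item in batch],
                             batch_first=True, padding_value=pad_token_id)
    attention_mask = pad_sequence([item["attention_mask"] for item in batch],
                                  batch_first=True, padding_value=0)
    return {"input_ids": input_ids, "attention_mask": attention_mask}

# loader = torch.utils.data.DataLoader(dataset, batch_size=32,
#                                      collate_fn=lambda b: collate_to_longest(b, tokenizer.pad_token_id))
```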
|
transformers | 3,633 | closed | Cased model + `--do_lower_case` in documentation? | The [examples README](https://github.com/huggingface/transformers/tree/master/examples/README.md) has a lot of examples using both a cased model and the `--do_lower_case` option. Is that an error? | 04-05-2020 06:08:05 | 04-05-2020 06:08:05 | |
transformers | 3,632 | closed | [Bart] Replace config.output_past with use_cache kwarg | - Rename generation_mode -> `use_cache`
### Benefits
- Avoid confusion (see linked issues)
- allow unit tests to instantiate once, then test `forward` and `generate`. Avoiding extra 10 second init cost.
- Never accidentally have slow generation
### Costs
- If a developer is changing something and wants to turn caching off, they must edit `prepare_inputs_for_generation` and pass use_cache=False. This is documented. They are a developer by construction so this cost is low.
- inconsistency with other cachers like `CTRLModel`
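For illustration only (a sketch, not code from this PR), the developer-facing switch after the rename would look roughly like:

```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

inputs = tokenizer.encode("My friends are cool but they eat too many carbs.", return_tensors="pt")
decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])

# decoder key/value caching stays on by default; a developer benchmarking without it
# would thread use_cache=False through the forward pass (or prepare_inputs_for_generation)
outputs = model(input_ids=inputs, decoder_input_ids=decoder_input_ids, use_cache=False)
```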
| 04-04-2020 20:45:17 | 04-04-2020 20:45:17 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3632?src=pr&el=h1) Report
> Merging [#3632](https://codecov.io/gh/huggingface/transformers/pull/3632?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4ab8ab4f50baf391612cbc78cfa3f09b7ad0c3ac&el=desc) will **increase** coverage by `0.94%`.
> The diff coverage is `100.00%`.
```diff
@@ Coverage Diff @@
## master #3632 +/- ##
==========================================
+ Coverage 77.34% 78.29% +0.94%
==========================================
Files 104 104
Lines 17628 17629 +1
==========================================
+ Hits 13634 13802 +168
+ Misses 3994 3827 -167
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3632?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3632/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `100.00% <ø> (ø)` | |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3632/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `97.61% <100.00%> (+<0.01%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3632/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.45% <0.00%> (+0.16%)` | :arrow_up: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3632/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.54% <0.00%> (+1.34%)` | :arrow_up: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3632/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.77% <0.00%> (+2.29%)` | :arrow_up: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3632/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.24% <0.00%> (+2.63%)` | :arrow_up: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3632/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.71% <0.00%> (+10.00%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3632/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0.00%> (+81.20%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3632?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3632?src=pr&el=footer). Last update [4ab8ab4...904b387](https://codecov.io/gh/huggingface/transformers/pull/3632?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>I think these if statements in tests are not needed anymore now and can be removed:
https://github.com/huggingface/transformers/blob/4ab8ab4f50baf391612cbc78cfa3f09b7ad0c3ac/tests/test_modeling_common.py#L632
and
https://github.com/huggingface/transformers/blob/4ab8ab4f50baf391612cbc78cfa3f09b7ad0c3ac/tests/test_modeling_tf_common.py#L428
Looks good to me otherwise |
transformers | 3,631 | closed | Fix RoBERTa/XLNet Pad Token in run_multiple_choice.py | `convert_examples_to_features` sets `pad_token=0` by default, which is correct for BERT but incorrect for RoBERTa (`pad_token=1`) and XLNet (`pad_token=5`). I think the other arguments to `convert_examples_to_features` are correct, but it might be helpful if someone checked who is more familiar with this part of the codebase. | 04-04-2020 18:22:34 | 04-04-2020 18:22:34 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3631?src=pr&el=h1) Report
> Merging [#3631](https://codecov.io/gh/huggingface/transformers/pull/3631?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/243e687be6cd701722cce050005a2181e78a08a8&el=desc) will **decrease** coverage by `0.01%`.
> The diff coverage is `n/a`.
```diff
@@ Coverage Diff @@
## master #3631 +/- ##
==========================================
- Coverage 78.30% 78.28% -0.02%
==========================================
Files 104 104
Lines 17627 17627
==========================================
- Hits 13802 13800 -2
- Misses 3825 3827 +2
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3631?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3631/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.45% <0.00%> (-0.17%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3631/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.97% <0.00%> (-0.13%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3631?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3631?src=pr&el=footer). Last update [243e687...284fa1b](https://codecov.io/gh/huggingface/transformers/pull/3631?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
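Coming back to the pad-token values mentioned in the PR description, a quick illustrative check (not part of the PR itself):

```python
from transformers import AutoTokenizer

for name in ["bert-base-uncased", "roberta-base", "xlnet-base-cased"]:
    tokenizer = AutoTokenizer.from_pretrained(name)
    print(name, repr(tokenizer.pad_token), tokenizer.pad_token_id)
# expected pad token ids: 0 for BERT, 1 for RoBERTa, 5 for XLNet
```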
|
transformers | 3,630 | closed | How to get top 10 possible set of words to calculate Top-K accuracy and MRR? | # ❓ Questions & Help
Dear All,
I am a newcomer to building language models with GPT-2.
I have [fine-tuned](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py) a language model by using GPT-2.
Now, I would like to calculate Top-k accuracy and Mean Reciprocal Rank (MRR) of my model. For this purpose, I am using following strategy to get top 10 next predicted words:
1. Get a sub-sequence of text of length such as 40, contained in a test.txt, from the start of the file.
2. The next sub-sequence is created by moving the window of 40 words forward by one step. This process goes on until we get a list of all sub-sequences of the test.txt file.
3. Pass the sub-sequences one by one to the trained model, which should give the next 10 possible words.
For this purpose, I am using the following segment of code to get the top 10 words, adapting code mentioned at this [link](https://github.com/huggingface/transformers/blob/master/examples/run_generation.py):
```
prompt_text = 'hello world' #Its an example. In reality it will be a complete subsequence of 40 words
encoded_prompt = tokenizer.encode(prompt_text, add_special_tokens=False, return_tensors="pt")
encoded_prompt = encoded_prompt.to('cpu')
output_sequences = model.generate(
input_ids=encoded_prompt,
max_length=1+len(encoded_prompt[0]),
top_p=0.9,
do_sample=True,
num_return_sequences=10
)
```
Is this the right way to get the top 10 next words on the basis of the input string, so that I can calculate the Top-k accuracy and MRR of my model? Kindly let me know about your concerns.
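For what it's worth, a minimal sketch of reading the top-10 next-token candidates straight from the logits instead of sampling with `generate` (the model name and context below are placeholders; note that GPT-2's BPE pieces often decode with a leading space):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")          # or the fine-tuned model directory
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

context = "Who was Jim Henson ? Jim Henson was a"
input_ids = torch.tensor(tokenizer.encode(context)).unsqueeze(0)

with torch.no_grad():
    logits = model(input_ids)[0]                           # shape: (1, seq_len, vocab_size)
top10 = torch.topk(logits[0, -1], k=10)
candidates = [tokenizer.decode([idx]).strip() for idx in top10.indices.tolist()]
print(candidates)
```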
| 04-04-2020 17:04:28 | 04-04-2020 17:04:28 | I have written a function to calculate Top-k accuracy and MRR of a model, which is trained by using GPT-2. However, the function gives me very low values of Top-k accuracy and MRR.
Kindly let me know if anything is wrong in this function.
```
def calculateModelPerformance(model,tokenizer):
testData=readFileData('dataset/test.txt') # file contain words separated by space.
step=1
block_size=128
top1 = 0.0
top3 = 0.0
top5 = 0.0
top10 = 0.0
mrr=0.0
totalIterations=0.0
for i in range(0, len(testData)-block_size, step):
print("Iteration " + str(i+1))
sequence=testData[i: i + block_size]
next_word=testData[i + block_size]
input_ids = torch.tensor(tokenizer.encode(sequence)).unsqueeze(0)
# get logits of last predicted token
next_word_logits = model(input_ids)[0][0, -1].detach()
probabilities, indices = next_word_logits.topk(10)
words = [tokenizer.decode(tir.item()) for tir in indices]
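        # note: with GPT-2's BPE vocabulary, decode() usually keeps a leading space
        # (e.g. " world"), so the exact string comparison below can fail unless
        # both the candidate and next_word are stripped/normalised first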
rank = 1.0
for word in words:
if word == next_word:
mrr += 1.0/rank
if rank<=1.0:
top1+=1.0
if rank<=3.0:
top3+=1.0
if rank<=5.0:
top5+=1.0
if rank<=10.0:
top10+=1.0
print("MRR ", str(mrr))
print("Top 1 ",str(top1))
print("Top 3 ", str(top3))
print("Top 5 ",str(top5))
print("Top 10 ",str(top10))
break
rank = rank + 1.0
totalIterations +=1.0
print("Total MRR ",str(mrr/totalIterations))
print("Total Top-1 Accuracy ", str(top1 / totalIterations))
print("Total Top-3 Accuracy ",str(top3/totalIterations))
print("Total Top-5 Accuracy ", str(top5 / totalIterations))
print("Total Top-10 Accuracy ", str(top10 / totalIterations))
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,629 | closed | Create README.md for ktrapeznikov/scibert_scivocab_uncased_squad_v2 | 04-04-2020 16:33:29 | 04-04-2020 16:33:29 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3629?src=pr&el=h1) Report
> Merging [#3629](https://codecov.io/gh/huggingface/transformers/pull/3629?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/243e687be6cd701722cce050005a2181e78a08a8&el=desc) will **decrease** coverage by `0.01%`.
> The diff coverage is `n/a`.
```diff
@@ Coverage Diff @@
## master #3629 +/- ##
==========================================
- Coverage 78.30% 78.28% -0.02%
==========================================
Files 104 104
Lines 17627 17627
==========================================
- Hits 13802 13800 -2
- Misses 3825 3827 +2
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3629?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3629/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.45% <0.00%> (-0.17%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3629/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.97% <0.00%> (-0.13%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3629?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3629?src=pr&el=footer). Last update [243e687...e1252ef](https://codecov.io/gh/huggingface/transformers/pull/3629?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 3,628 | closed | Create README.md for ktrapeznikov/albert-xlarge-v2-squad-v2 | adding readme for
ktrapeznikov/albert-xlarge-v2-squad-v2 | 04-04-2020 16:05:20 | 04-04-2020 16:05:20 | |
transformers | 3,627 | closed | Failing to load saved TFBertModel | TF version: 2.2.0-rc1
transformers version: 2.7.0
```python
import tensorflow as tf
import transformers

print(tf.__version__)
print(transformers.__version__)

MAX_LEN = 10
model_path = 'saved_model/temp_model'

ids = tf.keras.layers.Input((MAX_LEN,), dtype=tf.int32)
mask = tf.keras.layers.Input((MAX_LEN,), dtype=tf.int32)
token_type_ids = tf.keras.layers.Input((MAX_LEN,), dtype=tf.int32)

base_model = transformers.TFBertModel.from_pretrained("bert-base-cased", output_hidden_states=False)
base_output = base_model([ids, mask, token_type_ids])
seq_out, _ = base_output[0], base_output[1]
base_model.trainable = False

model = tf.keras.models.Model(inputs=[ids, mask, token_type_ids], outputs=[seq_out])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
print(model.summary())

model.save(model_path)
model = tf.keras.models.load_model(model_path)
```
Model load fails with the following error:
Traceback (most recent call last):
File "/Users/sourabhmaity/anaconda3/lib/python3.7/site-packages/tensorflow/python/util/nest.py", line 378, in assert_same_structure
expand_composites)
TypeError: The two structures don't have the same nested structure.
First structure: type=dict str={'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='input_ids')}
Second structure: type=list str=[TensorSpec(shape=(None, 10), dtype=tf.int32, name='inputs/0'), TensorSpec(shape=(None, 10), dtype=tf.int32, name='inputs/1'), TensorSpec(shape=(None, 10), dtype=tf.int32, name='inputs/2')]
More specifically: The two namedtuples don't have the same sequence type. First structure type=dict str={'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='input_ids')} has type dict, while second structure type=list str=[TensorSpec(shape=(None, 10), dtype=tf.int32, name='inputs/0'), TensorSpec(shape=(None, 10), dtype=tf.int32, name='inputs/1'), TensorSpec(shape=(None, 10), dtype=tf.int32, name='inputs/2')] has type list
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "temp.py", line 29, in <module>
model = tf.keras.models.load_model(model_path)
File "/Users/sourabhmaity/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/saving/save.py", line 190, in load_model
return saved_model_load.load(filepath, compile)
File "/Users/sourabhmaity/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 116, in load
model = tf_load.load_internal(path, loader_cls=KerasObjectLoader)
File "/Users/sourabhmaity/anaconda3/lib/python3.7/site-packages/tensorflow/python/saved_model/load.py", line 604, in load_internal
export_dir)
File "/Users/sourabhmaity/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 188, in __init__
super(KerasObjectLoader, self).__init__(*args, **kwargs)
File "/Users/sourabhmaity/anaconda3/lib/python3.7/site-packages/tensorflow/python/saved_model/load.py", line 123, in __init__
self._load_all()
File "/Users/sourabhmaity/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 215, in _load_all
self._finalize_objects()
File "/Users/sourabhmaity/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 506, in _finalize_objects
_finalize_saved_model_layers(layers_revived_from_saved_model)
File "/Users/sourabhmaity/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 677, in _finalize_saved_model_layers
inputs = infer_inputs_from_restored_call_function(call_fn)
File "/Users/sourabhmaity/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 921, in infer_inputs_from_restored_call_function
spec = nest.map_structure(common_spec, spec, spec2)
File "/Users/sourabhmaity/anaconda3/lib/python3.7/site-packages/tensorflow/python/util/nest.py", line 611, in map_structure
expand_composites=expand_composites)
File "/Users/sourabhmaity/anaconda3/lib/python3.7/site-packages/tensorflow/python/util/nest.py", line 385, in assert_same_structure
% (str(e), str1, str2))
TypeError: The two structures don't have the same nested structure.
First structure: type=dict str={'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='input_ids')}
Second structure: type=list str=[TensorSpec(shape=(None, 10), dtype=tf.int32, name='inputs/0'), TensorSpec(shape=(None, 10), dtype=tf.int32, name='inputs/1'), TensorSpec(shape=(None, 10), dtype=tf.int32, name='inputs/2')]
More specifically: The two namedtuples don't have the same sequence type. First structure type=dict str={'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='input_ids')} has type dict, while second structure type=list str=[TensorSpec(shape=(None, 10), dtype=tf.int32, name='inputs/0'), TensorSpec(shape=(None, 10), dtype=tf.int32, name='inputs/1'), TensorSpec(shape=(None, 10), dtype=tf.int32, name='inputs/2')] has type list
Entire first structure:
{'input_ids': .}
Entire second structure:
[., ., .] | 04-04-2020 15:21:26 | 04-04-2020 15:21:26 | Facing a similar issue with tf 2.2.0-rc3.
```
def get_model(lr=0.00001):
inp_bert = tf.keras.layers.Input(shape=(512), dtype="int32")
bert = TFBertModel.from_pretrained('bert-base-multilingual-cased')(inp_bert)[0]
doc_encodings = tf.squeeze(bert[:, 0:1, :], axis=1)
out = tf.keras.layers.Dense(1, activation="sigmoid")(doc_encodings)
model = tf.keras.Model(inp_bert, out)
adam = tf.keras.optimizers.Adam(lr=lr)
model.compile(optimizer=adam, loss="binary_crossentropy", metrics=["accuracy"])
return model
model = get_model()
model.save("model_name",save_format='tf')
model = tf.keras.models.load_model('model_name')
model.summary()
```
Output error is:
```
/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/nest.py in assert_same_structure(nest1, nest2, check_types, expand_composites)
383 "Entire first structure:\n%s\n"
384 "Entire second structure:\n%s"
--> 385 % (str(e), str1, str2))
386
387
ValueError: The two structures don't have the same nested structure.
First structure: type=TensorSpec str=TensorSpec(shape=(None, 512), dtype=tf.int32, name='inputs')
Second structure: type=dict str={'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='inputs/input_ids')}
More specifically: Substructure "type=dict str={'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='inputs/input_ids')}" is a sequence, while substructure "type=TensorSpec str=TensorSpec(shape=(None, 512), dtype=tf.int32, name='inputs')" is not
Entire first structure:
.
Entire second structure:
{'input_ids': .}
```<|||||>change
`base_output = base_model([ids, mask, token_type_ids])`
to
`base_output = base_model.bert([ids, mask, token_type_ids])`
should fix
<|||||>>
>
> change
> `base_output = base_model([ids, mask, token_type_ids])`
> to
> `base_output = base_model.bert([ids, mask, token_type_ids])`
> should fix
Thanks @Souls362 .. solves it.<|||||>This worked for me as well with `TFBertModel`, however, I run into the same issue with `TFXLNetModel`. `TFXLNetModel` doesn't seem to have an equivalent to the `.bert` property/attribute. Does anyone know how to solve this when using `TFXLNetModel`?<|||||>For `TFXLNetModel` this would be the `.transformer` attribute, as you can see [here](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_tf_xlnet.py#L1127)<|||||>@LysandreJik thank you! That works perfectly<|||||>@LysandreJik How about `TFOpenAIGPTLMHeadModel` ? I use `.transformer` attribute, but the output shape become `[None, None, 768]`, while the original output shape of `TFOpenAIGPTLMHeadModel` is `[None, None, 13088]`. How to solve it? Thanks a lot!<|||||>Well the `transformer` attribute is the transformer in itself, which has a hidden size of 768. The LM head model has an additional head which is the embedding matrix of, which has a size of 13088.<|||||>Yes, I think so too. So how can I save the whole model?<|||||>@Souls362 you are the greatest! I looked way too long for this.
@huggingface folks, please add an extra detailed example on serialization in TF2.
There is seriously some clear documentation missing there.<|||||>> change
> `base_output = base_model([ids, mask, token_type_ids])`
> to
> `base_output = base_model.bert([ids, mask, token_type_ids])`
> should fix
one tip for TFBertSequenceClassification: base_model.bert([ids, mask, token_type_ids])[1]<|||||>> > change
> > `base_output = base_model([ids, mask, token_type_ids])`
> > to
> > `base_output = base_model.bert([ids, mask, token_type_ids])`
> > should fix
>
> one tip for TFBertSequenceClassification: base_model.bert([ids, mask, token_type_ids])[1]
What is the difference of 0 and 1 in the brackets?<|||||>> > > change
> > > `base_output = base_model([ids, mask, token_type_ids])`
> > > to
> > > `base_output = base_model.bert([ids, mask, token_type_ids])`
> > > should fix
> >
> >
> > one tip for TFBertSequenceClassification: base_model.bert([ids, mask, token_type_ids])[1]
>
> What is the difference of 0 and 1 in the brackets?
[TFBertModel documentation](https://huggingface.co/transformers/model_doc/bert.html#transformers.TFBertModel)
model returns sequence output and pooled output (for classification)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>> change
> `base_output = base_model([ids, mask, token_type_ids])`
> to
> `base_output = base_model.bert([ids, mask, token_type_ids])`
> should fix
best answer ever!<|||||>```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/tmp/ipykernel_92500/3480800436.py in <module>
----> 1 model_eval = model.evaluate(
2 dataset_test,
3 use_multiprocessing=True,
4 return_dict=True)
~/miniconda3/envs/p3/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py in evaluate(self, x, y, batch_size, verbose, sample_weight, steps, callbacks, max_queue_size, workers, use_multiprocessing, return_dict)
1387 with trace.Trace('test', step_num=step, _r=1):
1388 callbacks.on_test_batch_begin(step)
-> 1389 tmp_logs = self.test_function(iterator)
1390 if data_handler.should_sync:
1391 context.async_wait()
~/miniconda3/envs/p3/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)
826 tracing_count = self.experimental_get_tracing_count()
827 with trace.Trace(self._name) as tm:
--> 828 result = self._call(*args, **kwds)
829 compiler = "xla" if self._experimental_compile else "nonXla"
830 new_tracing_count = self.experimental_get_tracing_count()
~/miniconda3/envs/p3/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)
869 # This is the first call of __call__, so we have to initialize.
870 initializers = []
--> 871 self._initialize(args, kwds, add_initializers_to=initializers)
872 finally:
873 # At this point we know that the initialization is complete (or less
~/miniconda3/envs/p3/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to)
723 self._graph_deleter = FunctionDeleter(self._lifted_initializer_graph)
724 self._concrete_stateful_fn = (
--> 725 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
726 *args, **kwds))
727
~/miniconda3/envs/p3/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
2967 args, kwargs = None, None
2968 with self._lock:
-> 2969 graph_function, _ = self._maybe_define_function(args, kwargs)
2970 return graph_function
2971
~/miniconda3/envs/p3/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)
3359
3360 self._function_cache.missed.add(call_context_key)
-> 3361 graph_function = self._create_graph_function(args, kwargs)
3362 self._function_cache.primary[cache_key] = graph_function
3363
~/miniconda3/envs/p3/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
3194 arg_names = base_arg_names + missing_arg_names
3195 graph_function = ConcreteFunction(
-> 3196 func_graph_module.func_graph_from_py_func(
3197 self._name,
3198 self._python_function,
~/miniconda3/envs/p3/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
988 _, original_func = tf_decorator.unwrap(python_func)
989
--> 990 func_outputs = python_func(*func_args, **func_kwargs)
991
992 # invariant: `func_outputs` contains only Tensors, CompositeTensors,
~/miniconda3/envs/p3/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds)
632 xla_context.Exit()
633 else:
--> 634 out = weak_wrapped_fn().__wrapped__(*args, **kwds)
635 return out
636
~/miniconda3/envs/p3/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
975 except Exception as e: # pylint:disable=broad-except
976 if hasattr(e, "ag_error_metadata"):
--> 977 raise e.ag_error_metadata.to_exception(e)
978 else:
979 raise
TypeError: in user code:
/home/user/miniconda3/envs/p3/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:1233 test_function *
return step_function(self, iterator)
/home/user/miniconda3/envs/p3/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:1224 step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
/home/user/miniconda3/envs/p3/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:1259 run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/home/user/miniconda3/envs/p3/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:2730 call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/home/user/miniconda3/envs/p3/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:3417 _call_for_each_replica
return fn(*args, **kwargs)
/home/user/miniconda3/envs/p3/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:1217 run_step **
outputs = model.test_step(data)
/home/user/miniconda3/envs/p3/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:1188 test_step
self.compiled_metrics.update_state(y, y_pred, sample_weight)
/home/user/miniconda3/envs/p3/lib/python3.8/site-packages/tensorflow/python/keras/engine/compile_utils.py:387 update_state
self.build(y_pred, y_true)
/home/user/miniconda3/envs/p3/lib/python3.8/site-packages/tensorflow/python/keras/engine/compile_utils.py:317 build
self._metrics = nest.map_structure_up_to(y_pred, self._get_metric_objects,
/home/user/miniconda3/envs/p3/lib/python3.8/site-packages/tensorflow/python/util/nest.py:1159 map_structure_up_to
return map_structure_with_tuple_paths_up_to(
/home/user/miniconda3/envs/p3/lib/python3.8/site-packages/tensorflow/python/util/nest.py:1241 map_structure_with_tuple_paths_up_to
assert_shallow_structure(
/home/user/miniconda3/envs/p3/lib/python3.8/site-packages/tensorflow/python/util/nest.py:847 assert_shallow_structure
raise TypeError(_STRUCTURES_HAVE_MISMATCHING_TYPES.format(
TypeError: The two structures don't have the same sequence type. Input structure has type <class 'tuple'>, while shallow structure has type <class 'dict'>.
```
```python
def map_tk(X, Y):
X = tokenizer(
X,
max_length=max_len,
padding='max_length',
truncation=True,
return_token_type_ids=False,
return_tensors="tf")
X = {
"input_ids": tf.reshape(X["input_ids"], [max_len]),
"attention_mask": tf.reshape(X["attention_mask"], [max_len])
}
Y = {
"y1": to_categorical(Y["y1"], num_classes=11),
"y2": to_categorical(Y["y2"], num_classes=4)
}
return X, Y
def gen_data(df: pd.DataFrame):
def gen():
for _, row in df.iterrows():
d = {
"X": row["content"],
"Y": {
"y1": row["y1"],
"y2": row["y2"]
}
}
yield map_tk(d["X"], d["Y"])
return gen
output_signature = (
{
"input_ids": tf.TensorSpec(shape=(150,), dtype=tf.int32),
"attention_mask": tf.TensorSpec(shape=(150,), dtype=tf.int32)
},
{
"institution": tf.TensorSpec(shape=(11,), dtype=tf.int32),
"laws_nature": tf.TensorSpec(shape=(4,), dtype=tf.int32)
})
def build_dataset(df: pd.DataFrame, shuffle_size=0):
ds = tf.data.Dataset.from_generator(
gen_data(df),
output_signature=output_signature)
if shuffle_size > 0:
ds = ds.shuffle(buffer_size=shuffle_size)
return ds.batch(batch_size=batch_size).prefetch(1)
dataset_train = build_dataset(train, 25600)
dataset_valid = build_dataset(valid)
dataset_test = build_dataset(test)
```
These are the inputs to my model; is there any workaround for `electra`?
```python
input_ids = Input(shape=(max_len,), name="input_ids", dtype="int32")
attention_mask = Input(shape=(max_len,), name="attention_mask", dtype="int32")
inputs = {"input_ids": input_ids, "attention_mask": attention_mask}
X = pretrained(inputs)["hidden_states"][-3:-1]
``` |
transformers | 3,626 | closed | ValueError: You have to specify either input_ids or inputs_embeds! | ## Details
I'm quite new to NLP tasks. I was trying to train the T5-large model and set things up as follows, but unfortunately I get an error.
```python
def build_model(transformer, max_len=512):
input_word_ids = Input(shape=(max_len,), dtype=tf.int32, name="input_word_ids")
sequence_output = transformer(input_word_ids)[0]
cls_token = sequence_output[:, 0, :]
out = Dense(1, activation='sigmoid')(cls_token)
model = Model(inputs=input_word_ids, outputs=out)
return model
model = build_model(transformer_layer, max_len=MAX_LEN)
```
It throws:
```
ValueError: in converted code:
ValueError Traceback (most recent call last)
<ipython-input-19-8ad6e68cd3f5> in <module>
----> 5 model = build_model(transformer_layer, max_len=MAX_LEN)
6
7 model.summary()
<ipython-input-17-e001ed832ed6> in build_model(transformer, max_len)
31 """
32 input_word_ids = Input(shape=(max_len,), dtype=tf.int32, name="input_word_ids")
---> 33 sequence_output = transformer(input_word_ids)[0]
34 cls_token = sequence_output[:, 0, :]
35 out = Dense(1, activation='sigmoid')(cls_token)
ValueError: You have to specify either input_ids or inputs_embeds
``` | 04-04-2020 09:44:01 | 04-04-2020 09:44:01 | Hi @innat,
T5 is an encoder-decoder model so you will have to provide both `input_ids` and `decoder_input_ids` to the model. Maybe taking a look at the [T5 docs](https://huggingface.co/transformers/model_doc/t5.html#transformers.T5Model.forward) (especially the "Examples") can help you :-)
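Roughly, the forward pass needs to look like this sketch (a small checkpoint and a hypothetical sentence pair, just for illustration):
```python
import tensorflow as tf
from transformers import T5Tokenizer, TFT5Model

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = TFT5Model.from_pretrained("t5-small")

# Encoder input and decoder input both have to be provided.
input_ids = tokenizer.encode("translate English to German: The house is wonderful.", return_tensors="tf")
decoder_input_ids = tokenizer.encode("Das Haus ist wunderbar.", return_tensors="tf")

outputs = model(input_ids, decoder_input_ids=decoder_input_ids)
last_hidden_states = outputs[0]  # output of the decoder's last layer
```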
<|||||>Just noticed that the Examples docstring for TF T5 was wrong. Is fixed with #3636 .<|||||>@patrickvonplaten
Hello, sorry to bother you. Could you please check whether the following piece of code is correct:
### Imports
```python
from transformers import TFAutoModel, AutoTokenizer
# First load the real tokenizer
tokenizer = AutoTokenizer.from_pretrained('t5-small')
transformer_layer = TFAutoModel.from_pretrained('t5-small')
```
### Define Encoder
```python
def encode(texts, tokenizer, maxlen=512):
enc_di = tokenizer.batch_encode_plus(
texts,
return_attention_masks=False,
return_token_type_ids=False,
pad_to_max_length=True,
max_length=maxlen
)
return np.array(enc_di['input_ids'])
# tokenized
x_train = encode('text', tokenizer, maxlen=200)
y_train
```
### Define Model and Call
```python
def build_model(transformer, max_len=512):
input_word_ids = Input(shape=(max_len,), dtype=tf.int32, name="input_word_ids")
sequence_output = transformer(input_word_ids)[0]
cls_token = sequence_output[:, 0, :]
out = Dense(1, activation='sigmoid')(cls_token)
model = Model(inputs=input_word_ids, outputs=out)
model.compile(Adam(lr=1e-5), loss='binary_crossentropy', metrics=['accuracy'])
return model
# calling
model = build_model(transformer_layer, max_len=200)
```
Now, according to the docstring, should I do,
`outputs = model(input_ids=x_train, decoder_input_ids=x_train)[0]`
?<|||||>I'm not 100% sure what you want to do here exactly. T5 is always trained in a text-to-text format. We have a section here on how to train T5: https://huggingface.co/transformers/model_doc/t5.html#training
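In a nutshell, that section boils down to something like this sketch (PyTorch, hypothetical sentence pair; note that older library versions call the target argument `lm_labels` instead of `labels`):
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

input_ids = tokenizer.encode("translate English to German: The house is wonderful.", return_tensors="pt")
labels = tokenizer.encode("Das Haus ist wunderbar.", return_tensors="pt")

# The model shifts the labels internally to build the decoder inputs.
loss = model(input_ids=input_ids, labels=labels)[0]
loss.backward()
```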
Otherwise I'd recommend taking a look at the official paper.<|||||>@patrickvonplaten Thanks for this. I encountered the same issue and this resolved it!
I'm wondering if it makes sense to make the error message capture the requirement of having both `input_ids` and `decoder_input_ids` since this is an encoder-decoder model? This may make the fix clearer for users of encoder decoder models in the future.
I.e., for encoder-decoder models, switch the error message from:
```
ValueError: You have to specify either input_ids or inputs_embeds
```
to:
```
ValueError: You have to specify either (input_ids and decoder_input_ids) or inputs_embeds
```
I can send this as a PR as well if you think it makes sense!<|||||>Hi @enzoampil,
A PR for a cleaner Error message would be nice if you feel like it :-). It would be good if the error message could change between `ValueError: You have to specify either input_ids or inputs_embeds` if `self.is_decoder == False` and `ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds` if `self.is_decoder == True`. So adding a simple if statement to the error message is definitely a good idea!<|||||>Got it will do. Thanks for the pointers! 😄 <|||||>Hi, I also got the same error when training seq2seq on tf.keras and I could not follow the example you provide on https://huggingface.co/transformers/model_doc/t5.html#training (this example is for pytorch I think)
I create `x_encoder` as `input_ids` and `x_decoder_in` as `decoder_input_ids`.
`model = TFT5Model.from_pretrained('t5-base')`
`model.compile('adam', loss='sparse_binary_crossentropy')`
So when I want to train the model I simply do
`model.fit({'input_ids': x_encoder, 'decoder_input_ids': x_decoder_in})`
where I clearly provide `input_ids`, but I still get this error message:
`ValueError: You have to specify either input_ids or inputs_embeds`
Note that changing input from dict to list got the same error. Changing model from TFT5Model to TFT5ForConditionalGeneration got the same error. Changing loss to BCE got the same error.
Moreover, changing input to only one array
`model.fit({'input_ids': x_encoder})`
also gives an error:
`ValueError: No data provided for "decoder_input_ids". Need data for each key in: ['decoder_input_ids', 'input_ids']`<|||||>In `class TFT5Model(TFT5PreTrainedModel):`
I found this line (899-900):
```
# retrieve arguments
input_ids = kwargs.get("inputs", None)
```
Shouldn't it be `kwargs.get("input_ids", None)` ??<|||||>@ratthachat - thanks for your message!
We definitely need to provide more TF examples for the T5 Model. I want to tackle this problem in ~2 weeks.
In TF we use the naming convention `inputs`, so you should change to `model.fit({"inputs": x_encoder})`. I very much agree that the error message is quite misleading and corrected it in this PR: #4401. <|||||>Thanks for your consideration, Patrick!<|||||>@patrickvonplaten Sorry to tag you in this old thread, but is there any official T5 TF example (as you mentioned in the last thread)?<|||||>@ratthachat - no worries, we should definitely add more TF T5 examples and we still don't have a good TF T5 notebook.
I am moving the discussion to the forum and if no one answers I will spend some time porting a T5 PT notebook to TF.<|||||>Hi @patrickvonplaten I wanted to fine-tune T5 using TF 2.0, but it's quite confusing at each step compared to PyTorch, which is really well documented; all current examples (community + official) are for PyTorch. Is the work for a TF T5 notebook underway?<|||||>Okay, seems like no one has a complete TF T5 notebook. I will start working on it this week: https://discuss.huggingface.co/t/how-to-train-t5-with-tensorflow/641/6
Should be done by next week sometime :-) <|||||>Hi @patrickvonplaten
Please help me with this error.
I'm doing inference with a T5-base model which I finetuned on GLUE tasks.
It's giving an error like:
`ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds`
While doing inference, we just need to provide input_ids for the encoder right?
Why do we need `decoder_input_ids`?
And as it's inference, my `labels` will also be `None`.
So, [this](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_t5.py#L1171) part will not execute.
`decoder_input_ids = self._shift_right(labels)`
Waiting for your reply.
Thank you.<|||||>@prashant-kikani it is indeed a strange behavior. have you tried passing `input_ids` to `decoder_input_ids` like:
```
input_ids = tokenizer(..., return_tensors='tf')  # use 'pt' instead of 'tf' for pytorch
outputs = model(input_ids=input_ids, decoder_input_ids=input_ids)
assert len(outputs)==3, 'must return 3 tensors when inferencing'
```<|||||>Hi @HarrisDePerceptron
We can do it & it's giving some output also. But it's not the right thing to do.
You see, T5, being a Transformer itself, is a text-to-text model.
So, it can do inference in a single parallel forward pass (just matrix multiplications) when the `label` is available.
But when the label is not available, we need to go sequentially, doing a forward pass through the decoder for each word until `</s>` is generated.
We need to concatenate the decoder's last output with its new input each time.
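Roughly, I mean a loop like this sketch (greedy decoding, ignoring beams and caching; `model` is assumed to be a `T5ForConditionalGeneration` with a matching `tokenizer`):
```python
import torch

def greedy_decode(model, tokenizer, input_ids, max_len=20):
    # T5 starts the decoder with its configured start token (the pad token).
    decoder_input_ids = torch.full(
        (input_ids.shape[0], 1), model.config.decoder_start_token_id, dtype=torch.long
    )
    for _ in range(max_len):
        logits = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)[0]
        next_token = logits[:, -1, :].argmax(dim=-1, keepdim=True)
        decoder_input_ids = torch.cat([decoder_input_ids, next_token], dim=-1)
        if (next_token == tokenizer.eos_token_id).all():
            break
    return decoder_input_ids
```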
What do you think?
<|||||>@prashant-kikani @HarrisDePerceptron
For `decoder_input_ids`, we just need to put a single BOS token so that the decoder will know that this is the beginning of the output sentence. (Even in a GLUE task, T5 still looks at every output label as a complete sentence.)
We can see a concrete example by looking at the function
`prepare_inputs_for_generation` which is called by `model.generate`
(`generate` function is here : https://github.com/huggingface/transformers/blob/master/src/transformers/generation_tf_utils.py )
See line 298 in the above link :
```
if self.config.is_encoder_decoder:
if decoder_start_token_id is None:
decoder_start_token_id = bos_token_id
```
and line 331:
```
# create empty decoder_input_ids
input_ids = (
tf.ones(
(effective_batch_size * num_beams, 1),
dtype=tf.int32,
)
* decoder_start_token_id
)
```
and see T5's `prepare_inputs_for_generation`, which changes the above `input_ids` into `decoder_input_ids`; the implementation is at:
https://github.com/huggingface/transformers/blob/08f534d2da47875a4b7eb1c125cfa7f0f3b79642/src/transformers/modeling_tf_t5.py#L1367<|||||>Hi @patrickvonplaten Patrick,
Thanks for your great work and great comment. I mimic the process of running inference with T5 as below and I got a bug; could you help me understand what has happened?
```py
from transformers import AutoModel, AutoTokenizer
model_name = "castorini/t5-base-canard"
model = AutoModel.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
context = '''
Frank Zappa ||| Disbandment ||| What group disbanded ||| Zappa and the Mothers of Invention ||| When did they disband?
'''
encoded_input = tokenizer(
context,
padding='max_length',
max_length=512,
truncation=True,
return_tensors="pt",
)
decoder_input = tokenizer(
context,
padding='max_length',
max_length=512,
truncation=True,
return_tensors="pt",
)
encoder_output = model.generate(input_ids=encoded_input["input_ids"], decoder_input_ids=decoder_input["input_ids"])
output = tokenizer.decode(
encoder_output[0],
skip_special_tokens=True
)
output
```
I got an error, though I already provided ```decoder_input_ids```:
```
Some weights of the model checkpoint at castorini/t5-base-canard were not used when initializing T5Model: ['lm_head.weight']
- This IS expected if you are initializing T5Model from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing T5Model from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Input length of decoder_input_ids is 512, but ``max_length`` is set to 20. This can lead to unexpected behavior. You should consider increasing ``config.max_length`` or ``max_length``.
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-11-b9fe12b71812>](https://localhost:8080/#) in <module>()
24 )
25
---> 26 encoder_output = model.generate(input_ids=encoded_input["input_ids"], decoder_input_ids=decoder_input["input_ids"])
27 output = tokenizer.decode(
28 encoder_output[0],
6 frames
[/usr/local/lib/python3.7/dist-packages/transformers/models/t5/modeling_t5.py](https://localhost:8080/#) in forward(self, input_ids, attention_mask, encoder_hidden_states, encoder_attention_mask, inputs_embeds, head_mask, cross_attn_head_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)
925 else:
926 err_msg_prefix = "decoder_" if self.is_decoder else ""
--> 927 raise ValueError(f"You have to specify either {err_msg_prefix}input_ids or {err_msg_prefix}inputs_embeds")
928
929 if inputs_embeds is None:
ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds
```
Thanks!<|||||>Hey @dxlong2000,
I'll open a new issue for this to make it more visible as I think this error happens quite often. See: https://github.com/huggingface/transformers/issues/16234<|||||>Good issue! really helps me. |
transformers | 3,625 | closed | how can i run gpt2 model on tf serving ? | 04-04-2020 08:18:16 | 04-04-2020 08:18:16 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
|
transformers | 3,624 | closed | Add code to pretrain T5 model from scratch | # 🚀 Feature request
The T5 model can significantly improve NLP task accuracies. However, the existing pretrained models are all in English. I'd like to pretrain T5 model on different language datasets from scratch.
Can you add code on pretraining T5 model? Thanks.
| 04-03-2020 22:21:32 | 04-03-2020 22:21:32 | @patrickvonplaten Can we pre-train T5 from scratch on any task? I want to use it for Question Answering.<|||||>Many notebooks for T5 are now added to the community notebooks :-) <|||||>@patrickvonplaten can you share the notebook which show T5 pre-training if it is available ?<|||||>@patrickvonplaten as of today, I didn't find any notebook that is related to T5 *pretraining* in the [community notebooks collection ](https://huggingface.co/transformers/master/community.html#community-notebooks). Could you elaborate more on where there is a codebase to do the pretraining? Thanks!<|||||>> @patrickvonplaten as of today, I didn't find any notebook that is related to T5 pretraining in the community notebooks collection . Could you elaborate more on where there is a codebase to do the pretraining? Thanks!
Yes I agree there is no guide for pretraining
|
transformers | 3,623 | closed | Create model card | 04-03-2020 22:10:00 | 04-03-2020 22:10:00 | I forgot to add the language in the header:
```
---
language: english
thumbnail:
---
```<|||||>Looks quite cool! Also cc'ing @lvwerra
Thanks for sharing 🙏<|||||>Very nice - never tested it for negative feedback :) |
|
transformers | 3,622 | closed | default of weight_decay for run_language_modeling.py | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**: | 04-03-2020 19:48:12 | 04-03-2020 19:48:12 | Hi
the default value of weight_decay is "0" for run_language_modeling.py , why is that ?
shouldn't it be 0.01 according to original paper of BERT ?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,621 | closed | fix prepare_for_tokenization in tokenization_roberta.py | fix the corner case break run_glue.py with QQP task mentioned in #3608 | 04-03-2020 19:36:52 | 04-03-2020 19:36:52 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,620 | closed | Update notebooks | Update the notebooks in the documentation | 04-03-2020 18:18:49 | 04-03-2020 18:18:49 | |
transformers | 3,619 | closed | Feature Request: Fill Mask more than 1 token | At the moment you can use hugging face's mask filling pipeline to predict 1 masked token in a sentence using the below:
```
!pip install -q transformers
from __future__ import print_function
import ipywidgets as widgets
from transformers import pipeline
nlp_fill = pipeline('fill-mask')
nlp_fill("I am going to guess <mask> in this sentence")
```
The request is that you also add the ability to predict N masked tokens rather than only 1 masked token. So that for example if the sentence is `"I am going to make <mask> <mask> for breakfast"` then the model might predict "fried eggs" for the 2 masked tokens | 04-03-2020 14:49:09 | 04-03-2020 14:49:09 | Duplicate of #3609
Not currently supported but we welcome a PR |
transformers | 3,618 | closed | Update German Bert model card | We changed the vocab to work with run_split_on_punc tokenization.
Now there are much less [UNK] punctuation tokens.
For more details see deepset-ai/FARM/issues/60 | 04-03-2020 13:54:01 | 04-03-2020 13:54:01 | @Timoeller @tholor LGTM and thanks for linking to the discussion on the FARM repo. 👍<|||||>cherry-picked only the relevant change in 4ab8ab4f50baf391612cbc78cfa3f09b7ad0c3ac |
transformers | 3,617 | closed | Choosing between adding frequent out of vocab words and doing further pretraining. | I have a dutch medical dataset (for Namen Entity Recognition) which contains a lot of domain-specific words. The dutch BERT tokenizer therefor outputs a lot of [UNK] tokens when it tokenizes.
Given that I dispose over a corpus of 60k labelled tokens, and right now I have also a relatively small annotated corpus of 185k tokens, would it be best to:
- just add the most frequent out of vocab words to the vocab of the tokenizer
- start from a BERT checkpoint and do further pretraining on the unlabeled dataset (which is now of size 185k which is pretty small I assume..). There might be a possibility for me to obtain a much larger unannotated dataset of potentially millions of (unlabelled) tokens, but I was wondering if even millions of tokens is enough to do some meaningful further pretraining?
Thanks! | 04-03-2020 13:14:54 | 04-03-2020 13:14:54 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,616 | closed | Out of memory error while training GPT2-large on 8x32GB Nvidia Volta | # 🐛 Bug
I'm getting an `out-of-memory error` while training `gpt2-large` using `batch_size=1`. I'm using the [examples/run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py) script. I'm using a custom dataset with varied length examples, maximum `block_size` is 1024.
This is the command I'm using:
```
python -m torch.distributed.launch --nproc_per_node 8 run_language_modeling.py --output_dir=./output_attention_mask_padding/ --model_type=gpt2 --model_name_or_path=gpt2-large --do_train --train_data_file=./data/training.txt --line_by_line --per_gpu_train_batch_size 1 --num_train_epochs 3 --fp16
```
I tried changing `args.gradient_accumulation_steps` but to no success.
Here's the traceback:
```python
Traceback (most recent call last): | 9/213 [00:45<09:51, 2.90s/it]
File "run_language_modeling.py", line 988, in <module>
main()
File "run_language_modeling.py", line 938, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "run_language_modeling.py", line 506, in train
outputs = model(inputs, masked_lm_labels=labels, attention_mask=attention_mask) if args.mlm else model(inputs, labels=labels, attention_mask=attention_mask)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/distributed.py", line 442, in forward
output = self.module(*inputs[0], **kwargs[0])
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "/home/deepspeed/.local/lib/python3.6/site-packages/transformers/modeling_gpt2.py", line 612, in forward
loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/loss.py", line 916, in forward
ignore_index=self.ignore_index, reduction=self.reduction)
File "/usr/local/lib/python3.6/dist-packages/apex/amp/wrap.py", line 27, in wrapper
kwargs)
File "/usr/local/lib/python3.6/dist-packages/apex/amp/utils.py", line 78, in casted_args
new_args.append(cast_fn(x))
File "/usr/local/lib/python3.6/dist-packages/apex/amp/utils.py", line 71, in maybe_float
return x.float()
RuntimeError: CUDA out of memory. Tried to allocate 190.00 MiB (GPU 2; 31.72 GiB total capacity; 28.71 GiB already allocated; 135.88 MiB free; 1.66 GiB cached)
Traceback (most recent call last):
File "run_language_modeling.py", line 988, in <module>
main()
File "run_language_modeling.py", line 938, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "run_language_modeling.py", line 523, in train
scaled_loss.backward()
File "/usr/local/lib/python3.6/dist-packages/torch/tensor.py", line 118, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py", line 93, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: CUDA out of memory. Tried to allocate 194.00 MiB (GPU 4; 31.72 GiB total capacity; 29.42 GiB already allocated; 155.88 MiB free; 951.73 MiB cached)
```
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.6.0
- Platform: Linux
- Using distributed or parallel set-up in script?: Yes
| 04-03-2020 11:28:18 | 04-03-2020 11:28:18 | Someone's help, please?
A block_size of 900 works, but I need to use 1024. Is there gradient checkpointing maybe?<|||||>I managed to train on block_size of 950 using the latest build of pytorch supported by NVIDIA: https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/rel_20-03.html#rel_20-03<|||||>We are about to add gradient checkpointing, see here: https://github.com/huggingface/transformers/pull/4659, but I'm very unsure if it works well for distributed training...we might have to assign different modules to different devices as suggested here: https://github.com/huggingface/transformers/pull/3578<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,615 | closed | Mismatch of loss shape in document with output of TransfoXL | # 🐛 Bug
## Information
Model: Transformer-XL
Language: English
The problem arises when using: a small demo of transformer-xl output
## To reproduce
Steps to reproduce the behavior:
```python3
model_name = 'transfo-xl-wt103'
model = AutoModelWithLMHead.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
sentence = 'Hello world'
tokenize_input = tokenizer.tokenize(sentence)
tensor_input = torch.tensor([tokenizer.convert_tokens_to_ids(tokenize_input)])
outputs = model(tensor_input, labels=tensor_input)
print(outputs[0].size())
```
run the code above
## Expected behavior
The output[0] is supposed to be a tensor shaped as `(1,)`, as described in the [source code](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_transfo_xl.py#L862), however, the actual shape is (1, 2) (bsz, sequence_len).
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.7.0
- Platform: Ubuntu 18.04
- Python version: 3.6
- PyTorch version (GPU?): 1.4.0, with 1080Ti
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| 04-03-2020 11:07:44 | 04-03-2020 11:07:44 | The output shape looks correct to me
"loss (:obj:`torch.FloatTensor` of shape `(batch_size, sequence_length)`" means that
`outputs[0]` should be of shape ` [batch_size, sequence_length] ` and it is ` [1, 2] `, so that is correct no?<|||||>yes, since the document was updated via #3661 |
transformers | 3,614 | closed | The tensorflow implementation of T5ForConditionalGeneration runs much slower than the pytorch one. GPU utilization is 30% | # 🐛 Bug
When I am running the official example in examples/summarization/t5/example on PyTorch, I have much better performance than the Tensorflow one. When running on PyTorch it needs 4s per iteration and uses 100% of the GPU. When running the TensorFlow model it needs 30s per iteration and the GPU utilization is 15-20%.
## Information
Model I am using: t5-small
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [X] an official GLUE/SQUaD task: (give the name): CNN Dailymail
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Follow the steps in the official examples/summarization/t5/example
2. Use the modified evaluate_cnn.py script provided below
```
import argparse
from pathlib import Path
from tqdm import tqdm
from rouge_score import rouge_scorer, scoring
from transformers import TFT5ForConditionalGeneration, T5Tokenizer
def chunks(lst, n):
"""Yield successive n-sized chunks from lst."""
for i in range(0, len(lst), n):
yield lst[i : i + n]
def generate_summaries(lns, output_file_path, model_size, batch_size):
output_file = Path(output_file_path).open("w")
model = TFT5ForConditionalGeneration.from_pretrained(model_size)
tokenizer = T5Tokenizer.from_pretrained(model_size)
# update config with summarization specific params
task_specific_params = model.config.task_specific_params
if task_specific_params is not None:
model.config.update(task_specific_params.get("summarization", {}))
for batch in tqdm(list(chunks(lns, batch_size))):
batch = [model.config.prefix + text for text in batch]
dct = tokenizer.batch_encode_plus(batch, max_length=512, return_tensors="tf", pad_to_max_length=True)
input_ids = dct["input_ids"]#.to(device)
attention_mask = dct["attention_mask"]#.to(device)
summaries = model.generate(input_ids=input_ids, attention_mask=attention_mask)
dec = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summaries]
for hypothesis in dec:
output_file.write(hypothesis + "\n")
output_file.flush()
def calculate_rouge(output_lns, reference_lns, score_path):
score_file = Path(score_path).open("w")
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
aggregator = scoring.BootstrapAggregator()
for reference_ln, output_ln in zip(reference_lns, output_lns):
scores = scorer.score(reference_ln, output_ln)
aggregator.add_scores(scores)
result = aggregator.aggregate()
score_file.write(
"ROUGE_1: \n{} \n\n ROUGE_2: \n{} \n\n ROUGE_L: \n{} \n\n".format(
result["rouge1"], result["rouge2"], result["rougeL"]
)
)
def run_generate():
parser = argparse.ArgumentParser()
parser.add_argument(
"model_size",
type=str,
help="T5 model size, either 't5-small', 't5-base', 't5-large', 't5-3b', 't5-11b'. Defaults to 't5-base'.",
default="t5-base",
)
parser.add_argument(
"input_path", type=str, help="like cnn_dm/test_articles_input.txt",
)
parser.add_argument(
"output_path", type=str, help="where to save summaries",
)
parser.add_argument("reference_path", type=str, help="like cnn_dm/test_reference_summaries.txt")
parser.add_argument(
"score_path", type=str, help="where to save the rouge score",
)
parser.add_argument(
"--batch_size", type=int, default=8, required=False, help="batch size: how many to summarize at a time",
)
parser.add_argument(
"--no_cuda", default=False, type=bool, help="Whether to force the execution on CPU.",
)
args = parser.parse_args()
# args.device = torch.device("cuda" if torch.cuda.is_available() and not args.no_cuda else "cpu")
source_lns = [x.rstrip() for x in open(args.input_path).readlines()]
generate_summaries(source_lns, args.output_path, args.model_size, args.batch_size)
output_lns = [x.rstrip() for x in open(args.output_path).readlines()]
reference_lns = [x.rstrip() for x in open(args.reference_path).readlines()]
calculate_rouge(output_lns, reference_lns, args.score_path)
if __name__ == "__main__":
run_generate()
```
## Expected behavior
The Tensorflow code should work with similar performance as the PyTorch one
## Environment info
absl-py 0.9.0
astor 0.7.1
attrs 19.3.0
blinker 1.4
boto3 1.12.34
botocore 1.15.34
cachetools 3.1.1
certifi 2019.11.28
cffi 1.14.0
chardet 3.0.4
click 7.1.1
cryptography 2.8
dill 0.3.1.1
docutils 0.15.2
filelock 3.0.12
future 0.18.2
gast 0.2.2
google-auth 1.12.0
google-auth-oauthlib 0.4.1
google-pasta 0.2.0
googleapis-common-protos 1.51.0
grpcio 1.27.2
h5py 2.10.0
idna 2.9
jmespath 0.9.5
joblib 0.14.1
Keras-Applications 1.0.8
Keras-Preprocessing 1.1.0
Markdown 3.2.1
nltk 3.4.5
numpy 1.18.1
oauthlib 3.0.1
opt-einsum 3.2.0
pip 20.0.2
promise 2.3
protobuf 3.11.4
pyasn1 0.4.8
pyasn1-modules 0.2.7
pycparser 2.20
PyJWT 1.7.1
pyOpenSSL 19.1.0
PySocks 1.7.1
python-dateutil 2.8.1
regex 2020.2.20
requests 2.23.0
requests-oauthlib 1.2.0
rouge-score 0.0.3
rsa 4.0
s3transfer 0.3.3
sacremoses 0.0.38
scipy 1.4.1
sentencepiece 0.1.85
setuptools 46.1.3.post20200325
six 1.14.0
tensorboard 2.1.0
tensorflow 2.1.0
tensorflow-datasets 2.1.0
tensorflow-estimator 2.1.0
tensorflow-gpu 2.1.0
tensorflow-metadata 0.21.1
termcolor 1.1.0
tokenizers 0.5.2
torch 1.4.0
tqdm 4.45.0
transformers 2.7.0
urllib3 1.25.7
Werkzeug 1.0.1
wheel 0.34.2
wrapt 1.12.1
- `transformers` version: 2.70, 2.7.1, and builded from 81484b447b7d8504ff5e1cfff38ec35918383963
- Platform: Ubuntu Ubuntu 18.04.4 LTS
- Python version: 3.7.6
- PyTorch version (GPU?):
- Tensorflow version (GPU?):2.1.0
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?:No
| 04-03-2020 10:52:40 | 04-03-2020 10:52:40 | I want to take a closer look in a week or so at this. This issue seems to be related: https://github.com/huggingface/transformers/issues/4634<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>The problem is likely because of `generate()` not being compatible with `tf.function`. I want to take a look at this in more detail while working on this PR: https://github.com/huggingface/transformers/pull/5662<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,613 | closed | Added albert-base-bahasa-cased README and fixed tiny-bert-bahasa-cased README | 04-03-2020 10:42:45 | 04-03-2020 10:42:45 | ||
transformers | 3,612 | closed | training GPT2 from scratch : implement causal attention mask? | # ❓ Questions & Help
I am trying to train a ```GPT2``` model from scratch but I noticed, by looking into the code here https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_gpt2.py, that there doesn't seem to be an implementation for a causal mask. Maybe it is in another repo and I missed it; I also couldn't find resources on this in the docs.
I could write an ugly for loop and feed each of my sequences one token at a time to the network, which would be super inefficient. I could also chop up each of my examples token by token, pad them and feed them like a batch, which is probably faster but doesn't feel very satisfying.
Do you know if there is a standard implementation of a causal mask that I missed, or another way to do what I am describing?
PS : I have already read huggingface’s blogpost on training from scratch, but unfortunately it doesn't say much about the implementation of said training :/. | 04-03-2020 09:36:50 | 04-03-2020 09:36:50 | You'd want to look at the `run_language_modeling.py` script which implements causal language modeling. (do not pass the `--mlm` flag)<|||||>I'm thinking some edit to run_language_modeling.py script maybe would make it work. I don't think just to not pass the --mlm flag you solve the problem @julien-c. Have you found any solution @johncwok? I'm searching the same thing.<|||||>@johncwok GPT2 always uses a causal mask. It's quite hidden in the code. This line https://github.com/huggingface/transformers/blob/0a4b1068e1d6c46525082b91a4ba00a09c9270ac/src/transformers/modeling_gpt2.py#L145 creates the causal mask that is then applied to the weights. The naming can definitely be improved here! So no matter what mask you insert it will only be applied in combination with the causal mask.
Also take a look at this line that creates the mask:
https://github.com/huggingface/transformers/blob/0a4b1068e1d6c46525082b91a4ba00a09c9270ac/src/transformers/modeling_gpt2.py#L107
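Conceptually it is just a lower-triangular matrix; a stripped-down sketch of the idea (not the exact library code):
```python
import torch

n_ctx = 8  # illustrative context length
# Position i may only attend to positions <= i.
causal_mask = torch.tril(torch.ones(n_ctx, n_ctx)).view(1, 1, n_ctx, n_ctx)

scores = torch.randn(1, 1, n_ctx, n_ctx)              # raw attention scores
scores = scores.masked_fill(causal_mask == 0, -1e4)   # blank out future positions
weights = torch.softmax(scores, dim=-1)               # rows only mix past and current tokens
```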
<|||||>After https://github.com/huggingface/transformers/pull/2715/files is merged, I will do some renaming in the code - seems like a lot of people look for the causal mask in GPT2, CTRL and GPT |
transformers | 3,611 | closed | corrected mistake in polish model cards | 04-03-2020 08:46:22 | 04-03-2020 08:46:22 | ||
transformers | 3,610 | closed | How can u make sure that my transformer model should only one GPU, though the serve has multiple GPU cards. | I have transformer BERT model and I am trying to train on lambda server which has 8 GPU cards, How can u make sure that this model should use only once GPU out of 8, by default , it is using all GPUs. even after setting
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
| 04-03-2020 08:15:49 | 04-03-2020 08:15:49 | You should have used the template, because now we don't have enough information to help you: how are you running the script (torch launch utility? Which command?), which script are you using (your own (give details) or one of the example scripts)?
By default, PyTorch will only use one GPU unless you specify it to go DDP.<|||||>Use tags please. Read through this guide. https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks<|||||>@BramVanroy it was my bad, I am in bit hurry so that I was not able to provide my code path. I have just pushed my code to github. Please check below for path
[https://github.com/tiru1930/bert_intent_classification](url)
In this PATH I use src/Train.py to train the model.
<|||||>As you see, you are just wasting your own time and mine when you are _in a hurry_. In the future, take your time to write a good starting post so that we _want_ to help you and _can_ help you quickly.
In your code, you are calling DataParallel on your model, which will automatically run your model over multiple GPUs (but under a single process). Remove this line.
https://github.com/tiru1930/bert_intent_classification/blob/master/src/train.py#L80 |
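If you really only want one GPU, the simplest option is to hide the others before anything touches CUDA; a sketch (the `model` in the comment refers to your own classifier):
```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # must be set before CUDA is initialised

import torch

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# model.to(device)  # `model` being your BERT classifier, with the DataParallel wrapper removed
```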
transformers | 3,609 | closed | Filling more than 1 masked token at a time | I am able to use hugging face's mask filling pipeline to predict 1 masked token in a sentence using the below:
```
!pip install -q transformers
from __future__ import print_function
import ipywidgets as widgets
from transformers import pipeline
nlp_fill = pipeline('fill-mask')
nlp_fill("I am going to guess <mask> in this sentence")
```
But does anyone have an opinion on what is the best way to do this if I want to predict 2 masked tokens? e.g. if the sentence is instead `"I am going to <mask> <mask> in this sentence"`?
If i try and put this exact sentence into nlp_fill I get the error "ValueError: only one element tensors can be converted to Python scalars" so it doesn't work automatically.
Any help would be much appreciated!
Stack overflow question [link](https://stackoverflow.com/questions/60990897/best-way-of-using-hugging-faces-mask-filling-for-more-than-1-masked-token-at-a) | 04-03-2020 08:06:22 | 04-03-2020 08:06:22 | Indeed, this is not supported right now. We'd welcome a PR though :)<|||||>Before somebody starts on a PR, we need to consider what exactly this should do.
For `top_k = 1`, most users probably expect a single forward pass and picking the top prediction for each token. For greater `top_k`, however, picking the k-best prediction at each mask position has increasingly high risk of yielding an inconsistent sequence. A beam search over all possible sequences with some overall objective and returning the overall `top_k` best sequences will be more desirable, but also more work to implement.
A naive objective could simply multiply the probabilities of each candidate replacement obtained from a single forward pass. However, these probabilities are not conditional on the specific choice for the other mask positions. What exactly these probabilities are when there is more than 1 mask token is not clear to me but I think a reasonable assumption is that the network produces some kind of weighted average of all the probability distributions one would get if one fixes the other mask tokens and makes a forward pass with just one mask token.
Therefore, I think one must make multiple forward passes to get the probability of each decision step in the gap filling process. It is not clear though in what order to make decisions. Even in the simplest case of continuous mask positions we could proceed left-to-right, right-to-left, from both sides simultaneously, start in the middle or in some other way. The order could also be influenced by the probabilities, e.g. condensating the most confidently predicted token first.
It may also be desirable to have a [MASK*] that is expanded to multiple tokens as needed. Then, one may want to have a brevity penalty or normalise by length as otherwise the model will prefer short answers as their probability is higher. One may also want to have a callback to filter candidate substitutions, e.g. for a cloze test one may want to check that the sequence does not start with '##' and that it detokenises to a single word of the target language.<|||||>Please see the following issue https://github.com/huggingface/transformers/issues/10158 and PR https://github.com/huggingface/transformers/pull/10222 for an attempt to take a crack at this<|||||>@jowagner Has made some very valid points. In fact, these are the same concerns I have had previously with how multiple mask filling even works when done simultaneously. However, there are some issues with all of the approaches and I am not quite sure yet as to how it could be resolved.
Take for example you have 3 mask positions and we follow the method that gives preference first to the most confidently predicted token. There is an intrinsic issue as to what the most confident token would even mean here in the first place given that the other 2 masks are still empty and not filled. My point being, the probability of which word needs to be filled in a particular slot is not necessarily indicative of whether that SHOULD be the first one to be filled.
Do have a look at https://arxiv.org/abs/2002.03079 's work on Blank Language Model. Most of the valuable suggestions that you provided here start spilling into this paper's realm.
I would be very happy to discuss further about this with you Joachim<|||||>Hi, I've implemented right to left, left to right, and random mask filling in PyTorch for top k ids that the model thinks are the most probable tokens in a sentence in one of my projects. In this implementation, each time we want to generate a mask, the model looks at the previously generated sentences and decides what is the most probable for the next masked position. So if we have 2 masks in a sentence, by setting top_k=5, we'll have 25 sentences (5 tokens for the first position, and for each of these 5 sentences with one mask we have another 5 tokens for the second mask). It'll output something like this:(I used Persian models for this. I hope you can see how the masks are being filled)

Then in the next step, we implemented a beam search to choose the most probable sequence of all between all these sentences.
I'd be glad to help HuggingFace on this issue, I can send my code or send a pull request.
<|||||>The idea in https://github.com/huggingface/transformers/pull/10222/commits/80a113641a49c73f7680289219096ee5cf7ca620#r605659735 may point to how one can combine left and right direction or even average over all possible sequences of crystallisation.<|||||>Hi, This is the function for different orders of prediction. I hope it helps.
Also, In the beam search section, we constructed a dictionary of bi tri and four grams in a specific corpus related to our work and scored predictions based on those. I won't include this extensive part here but tell me if it can be useful.
```
import random

import torch

def predict_seqs_dict(sequence, model, tokenizer, top_k=5, order='right-to-left'):
ids_main = tokenizer.encode(sequence,
return_tensors="pt",
add_special_tokens=False)
ids_ = ids_main.detach().clone()
position = torch.where(ids_main == tokenizer.mask_token_id)
positions_list = position[1].numpy().tolist()
if order =='left-to-right':
positions_list.reverse()
elif order=='random':
random.shuffle(positions_list)
# print(positions_list)
predictions_ids = {}
predictions_detokenized_sents = {}
for i in range(len(positions_list)):
predictions_ids[i] = []
predictions_detokenized_sents[i] = []
# if it was the first prediction,
# just go on and predict the first predictions
if i==0:
model_logits = model(ids_main)['logits'][0][positions_list[0]]
top_k_tokens = torch.topk(model_logits, top_k, dim=0).indices.tolist()
for j in range(len(top_k_tokens)):
#print(j)
ids_t_ = ids_.detach().clone()
ids_t_[0][positions_list[0]] = top_k_tokens[j]
predictions_ids[i].append(ids_t_)
pred = tokenizer.decode(ids_t_[0])
predictions_detokenized_sents[i].append(pred)
# append the sentences and ids of this masked token
# if we already have some predictions, go on and fill the rest of the masks
# by continuing the previous predictions
if i!=0:
for pred_ids in predictions_ids[i-1]:
# get the logits
model_logits = model(pred_ids)['logits'][0][positions_list[i]]
# get the top 5 of this prediction and masked token
top_k_tokens = torch.topk(model_logits, top_k, dim=0).indices.tolist()
for top_id in top_k_tokens:
ids_t_i = pred_ids.detach().clone()
ids_t_i[0][positions_list[i]] = top_id
pred = tokenizer.decode(ids_t_i[0])
# append the sentences and ids of this masked token
predictions_ids[i].append(ids_t_i)
predictions_detokenized_sents[i].append(pred)
return predictions_detokenized_sents
```
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>While an external scoring model may produce higher quality results, such an approach would move quite far away from letting the BERT model make the predictions. For example, consider a users who is evaluating the quality of a BERT model using a cloze test. They don't want issues of the BERT model to be smoothed / repaired by the external scoring model.
For finding the most confidently predicted token, I don't see why the fact that 3 or more masks may include a mask that has only masked neighbours is a problem. What we need is a measure of confidence that can be derived from the class probability distribution of the MLM head (its softmax layer). BERT gives us a class probability distribution for each masked token. The most confident token is then simply the one for which the confidence measure gives the greatest value.
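A sketch of what I have in mind (hypothetical helper; one possible confidence measure is simply the top class probability at each masked position):
```python
import torch

def fill_masks_most_confident_first(model, tokenizer, input_ids):
    input_ids = input_ids.clone()
    mask_id = tokenizer.mask_token_id
    while (input_ids == mask_id).any():
        logits = model(input_ids)[0]                       # (batch, seq, vocab)
        probs = torch.softmax(logits, dim=-1)
        mask_positions = (input_ids == mask_id).nonzero(as_tuple=False)
        # Pick the masked position whose best candidate has the highest probability.
        best = max(mask_positions, key=lambda p: probs[p[0], p[1]].max().item())
        input_ids[best[0], best[1]] = probs[best[0], best[1]].argmax()
    return input_ids
```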
I didn't yet find time to read https://arxiv.org/abs/2002.03079 <|||||>@jowagner Just to reconfirm, your proposition was to fill the slots not in an arbitrary left to right or right to left fashion, but to fill the one that has the highest value in the softmax layer and then utilize that while regenerating clozes for the rest of the masks, correct?
The high confidence for the position could be by virtue of there not being any other better suitable candidates for that position rather than being an indicator that the model is most confident about that prediction (for us to be filling that prediction first before using that as the seed to move on and fill the rest in a similar fashion). Right? |
transformers | 3,608 | closed | RobertaTokenizer corner case with empty string | https://github.com/huggingface/transformers/blob/81484b447b7d8504ff5e1cfff38ec35918383963/src/transformers/tokenization_roberta.py#L239
this will introduce an issue if `text == ""`, which will occur if anyone follows `run_glue.py` with the QQP task, as the `train.tsv` has two lines that contain an empty column.
can be corrected to `if add_prefix_space and (not text or not text[0].isspace()):` | 04-03-2020 07:53:33 | 04-03-2020 07:53:33 | A PR is welcome!<|||||>created PR #3621 <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>cc @mfuntowicz @n1t0
The problem is that the tokenizer does not allow empty strings to be passed (will lead to index out of bounds error when `text[0].isspace()`). An empty string is possible, according to OP, when using the QQP task which has such a format. OP added a PR here that you can have a look at https://github.com/huggingface/transformers/pull/3621<|||||>This issue speaks about fixing https://github.com/huggingface/transformers/blob/81484b447b7d8504ff5e1cfff38ec35918383963/src/transformers/tokenization_roberta.py#L239 which seems totally reasonable to me, but #3621 does a lot more than that, half of which I don't even understand.
@boy2000-007man could you update your PR to only fix the relevant line, and maybe add a test?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,607 | closed | Allow the creation of "entity groups" for NerPipeline #3548 | This pull request adds an `index` key to the dictionary returned by `NerPipeline`. The index will be necessary in order to identify **entity groups**, where an entity group is a contiguous series of tokens, having the same **entity type**.
Details of what I want to be able to do can be found in issue #3548.
If this PR gets merged, I would also like to ask if you guys would recommend that I implement the **entity group** transformation in `NerPipeline` itself.
Possibly, I can set the parameter `group` at initialization, where if `True`, the *grouped* version of the output will be returned.
E.g.
Instead of the following *ungrouped* output:
```
[{'entity': 'I-PER', 'score': 0.9983270168304443, 'word': 'En'},
{'entity': 'I-PER', 'score': 0.9952995777130127, 'word': '##zo'},
{'entity': 'I-ORG', 'score': 0.9984350204467773, 'word': 'Australian'},
{'entity': 'I-ORG', 'score': 0.9967807531356812, 'word': 'National'},
{'entity': 'I-ORG', 'score': 0.9959043264389038, 'word': 'University'},
{'entity': 'I-ORG', 'score': 0.9900023937225342, 'word': 'AU'},
{'entity': 'I-ORG', 'score': 0.9763911366462708, 'word': '##N'}]
```
We get something like the following *grouped* output:
```
[{'entity_group': 'I-PER', 'score': 0.9983270168304443, 'word': 'Enzo'},
{'entity_group': 'I-ORG', 'score': 0.9984350204467773, 'word': 'Australian National University'},
{'entity_group': 'I-ORG', 'score': 0.9900023937225342, 'word': 'AUN'}]
``` | 04-03-2020 07:35:38 | 04-03-2020 07:35:38 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3607?src=pr&el=h1) Report
> Merging [#3607](https://codecov.io/gh/huggingface/transformers/pull/3607?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/81484b447b7d8504ff5e1cfff38ec35918383963&el=desc) will **decrease** coverage by `1.04%`.
> The diff coverage is `43.33%`.
```diff
@@ Coverage Diff @@
## master #3607 +/- ##
==========================================
- Coverage 78.06% 77.02% -1.05%
==========================================
Files 100 100
Lines 17134 17159 +25
==========================================
- Hits 13375 13216 -159
- Misses 3759 3943 +184
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3607?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/3607/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `72.76% <43.33%> (-2.19%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3607/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0.00%> (-81.21%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3607/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0.00%> (-10.00%)` | :arrow_down: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3607/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `95.61% <0.00%> (-2.64%)` | :arrow_down: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3607/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0.00%> (-2.30%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3607/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.20% <0.00%> (-1.35%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3607/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.28% <0.00%> (-0.33%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3607/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.10% <0.00%> (+0.12%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3607?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3607?src=pr&el=footer). Last update [81484b4...34623f3](https://codecov.io/gh/huggingface/transformers/pull/3607?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thank you @enzoampil!<|||||>FYI that I will apply the entity grouping functionality explained above in this same PR<|||||>**This pull request now applies the entity group transformation illustrated above by setting the parameter: `group`=True**.
This was done by reflecting the transformation inside `NerPipeline`. I've changed the name of the pull request to better reflect the feature being proposed.
cc @julien-c @mfuntowicz @petulla
Sample code:
```
# Install branch
# Make sure to restart runtime after installing if using Google Colab
!pip install -e git+git://github.com/enzoampil/transformers.git@add_index_to_ner_pipeline#egg=transformers
# Grouped NER
from transformers import pipeline
nlp = pipeline('ner', group=True)
nlp("Enzo works at the Australian National University (AUN)")
# [{'entity_group': 'I-PER', 'score': 0.9968132972717285, 'word': 'Enzo'},
# {'entity_group': 'I-ORG', 'score': 0.9970400333404541, 'word': 'Australian National University'},
# {'entity_group': 'I-ORG', 'score': 0.9831967651844025, 'word': 'AUN'}]
# Ungrouped NER
nlp = pipeline('ner', group=False)
nlp("Enzo works at the Australian National University (AUN)")
# [{'entity': 'I-PER', 'index': 1, 'score': 0.9983270168304443, 'word': 'En'},
# {'entity': 'I-PER', 'index': 2, 'score': 0.9952995777130127, 'word': '##zo'},
# {'entity': 'I-ORG', 'index': 6, 'score': 0.9984350204467773, 'word': 'Australian'},
# {'entity': 'I-ORG','index': 7, 'score': 0.9967807531356812, 'word': 'National'},
# {'entity': 'I-ORG', 'index': 8 'score': 0.9959043264389038, 'word': 'University'},
# {'entity': 'I-ORG', 'index': 10, 'score': 0.9900023937225342, 'word': 'AU'},
# {'entity': 'I-ORG', 'index': 11, 'score': 0.9763911366462708, 'word': '##N'}]
```
Tutorial on how to do Entity Grouping w/ `NerPipeline` [here](https://colab.research.google.com/drive/1CVLP0n3Q5t5qiWpode7jyhUNZpmLg0mS)
I'm very keen to get feedback for the above, so please let me know if I should change anything, or perform additional steps to bring its quality to an acceptable level.<|||||>I accidentally deleted the fork for this, so I've recreated this pull request [here](https://github.com/huggingface/transformers/pull/3957). Apologies for any inconvenience caused by this.
I will close this PR so please refer to the one linked above. |
transformers | 3,606 | closed | Fix typo in FeatureExtractionPipeline docstring | Fixed a typo in the docstring of `FeatureExtractionPipeline` | 04-03-2020 07:00:01 | 04-03-2020 07:00:01 | |
transformers | 3,605 | closed | 🐛 Summarization pipeline : T5-base much slower than BART-large | # 🐛 Bug
## Information
Model : `bart-large-cnn` and `t5-base`
Language : English
The problem arises when using : [this colab notebook](https://colab.research.google.com/drive/1iAIFX1QQiFm1F01vMmnAgFh4oH1H-K8W), using both BART and T5 with pipeline for Summarization.
Dataset : CNN/DM
## To reproduce
Run the notebook and measure time for inference between the 2 models. On my run, I have :
```
BART = 73s
T5 = 369s
```
## Expected behavior
I expected T5 to be at least as fast as BART, since it has fewer parameters (for the base version at least). Instead it takes much longer with T5...
@patrickvonplaten Do you happen to know why T5 is so slow ? | 04-03-2020 06:59:18 | 04-03-2020 06:59:18 | Hi @Colanim, thanks a lot for your speed comparison :-).
It might be possible that the pipelines used different default parameters for `T5` and `Bart` under the hood which strongly influence their running times.
Besides `min_length` and `max_length`, could you also pass these parameters to both `T5` and `Bart` to override the default values:
```
"early_stopping": True
"length_penalty": 2.0
"no_repeat_ngram_size": 3
"num_beams": 4
```
If there is still a big difference in time, then I guess we have to take a closer look!
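For reference, something along these lines should apply the same overrides to both models (an untested sketch - I'm assuming the summarization pipeline forwards these keyword arguments to `generate`, and the article text is just a placeholder):
```python
from transformers import pipeline

ARTICLE = "..."  # paste the same CNN/DM article here for both models

generate_kwargs = dict(
    min_length=56,
    max_length=142,
    early_stopping=True,
    length_penalty=2.0,
    no_repeat_ngram_size=3,
    num_beams=4,
)

for name in ["bart-large-cnn", "t5-base"]:
    summarizer = pipeline("summarization", model=name, tokenizer=name)
    # identical generation settings for both models, so any remaining
    # speed gap cannot come from different default parameters
    print(name, summarizer(ARTICLE, **generate_kwargs))
```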
<|||||>Thanks for your fast answer @patrickvonplaten
Here is the link to the modified notebook, with the parameters you mentioned :
https://colab.research.google.com/drive/1kCm5ew8qDQqguZjbsC6Ujs9KZBaSfafi
---
Unfortunately, there is still a **huge** difference...
```
BART = 66s
T5 = 226s
```<|||||>Ok, good to know! thanks for doing the comparison @Colanim. This might interest you as well @sshleifer :-)
Oh, actually I just remembered that Bart caches the decoder hidden key/value outputs when doing auto-regressive decoding (similar to GPT-2 - check the visuals under "GPT-2 Masked Self-Attention" in this [post](http://jalammar.github.io/illustrated-gpt2/)), and I think T5 does not.
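To illustrate the idea only, here is a toy sketch of incremental decoding with a key/value cache (not the actual Bart code): at every step only the newly generated position is projected, and the cached keys/values of all previous positions are reused.
```python
import torch

d = 8
W_q, W_k, W_v = (torch.nn.Linear(d, d) for _ in range(3))

def attend(q, k, v):
    scores = q @ k.transpose(-1, -2) / d ** 0.5
    return torch.softmax(scores, dim=-1) @ v

cache_k, cache_v = [], []
x = torch.randn(1, 1, d)          # hidden state of the newest token
for step in range(5):
    cache_k.append(W_k(x))        # project only the new token ...
    cache_v.append(W_v(x))        # ... and append it to the cache
    k = torch.cat(cache_k, dim=1)
    v = torch.cat(cache_v, dim=1)
    # O(1) new projections per step instead of re-encoding the whole prefix
    x = attend(W_q(x), k, v)
```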
But T5 could cache the decoder key/value outputs to speed up decoding as well since it uses a causal mask for the decoder. This could definitely be a Feature Request. What do you think
@sshleifer @craffel @thomwolf ?<|||||>Sounds worth it! |
transformers | 3,604 | closed | Update README.md | Update AutoModel & AutoTokenizer loading. | 04-03-2020 04:55:58 | 04-03-2020 04:55:58 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3604?src=pr&el=h1) Report
> Merging [#3604](https://codecov.io/gh/huggingface/transformers/pull/3604?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/81484b447b7d8504ff5e1cfff38ec35918383963&el=desc) will **not change** coverage by `%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3604?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3604 +/- ##
=======================================
Coverage 78.06% 78.06%
=======================================
Files 100 100
Lines 17134 17134
=======================================
Hits 13375 13375
Misses 3759 3759
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3604?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3604/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.45% <0.00%> (-0.17%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3604/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.10% <0.00%> (+0.12%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3604?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3604?src=pr&el=footer). Last update [81484b4...8845212](https://codecov.io/gh/huggingface/transformers/pull/3604?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,603 | closed | Update README.md | Update AutoModel & AutoTokenizer loading. | 04-03-2020 04:49:55 | 04-03-2020 04:49:55 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3603?src=pr&el=h1) Report
> Merging [#3603](https://codecov.io/gh/huggingface/transformers/pull/3603?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/81484b447b7d8504ff5e1cfff38ec35918383963&el=desc) will **not change** coverage by `%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3603?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3603 +/- ##
=======================================
Coverage 78.06% 78.06%
=======================================
Files 100 100
Lines 17134 17134
=======================================
Hits 13375 13375
Misses 3759 3759
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3603?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3603?src=pr&el=footer). Last update [81484b4...32340ca](https://codecov.io/gh/huggingface/transformers/pull/3603?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,602 | closed | Multilingual BART - | This adds the `mbart-en-ro` model, a BART variant finetuned on English-Romanian translation.
### TODO
- [x] (docs) pretrained_model.rst
- [ ] (docs) README.md
- [ ] (docs) bart.rst
### Differences with Bart
`config.normalize_before`: all the `LayerNorm` calls happen before attention calls
`config.add_final_layer_norm`: There is one extra layer_norm in the decoder
`config.scale_embedding`: embeddings are multiplied by 32 (`sqrt(d_model=1024)`)
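Roughly, the three flags correspond to the following (a simplified sketch for reviewers, not the actual `modeling_bart.py` code):
```python
import math

d_model = 1024

def embed(embed_tokens, input_ids, scale_embedding=True):
    x = embed_tokens(input_ids)
    # config.scale_embedding: multiply embeddings by sqrt(d_model) == 32
    return x * math.sqrt(d_model) if scale_embedding else x

def layer(x, self_attn, layer_norm, normalize_before=True):
    # config.normalize_before: LayerNorm runs before the attention call
    if normalize_before:
        return x + self_attn(layer_norm(x))
    return layer_norm(x + self_attn(x))

def decode(x, layers, final_layer_norm, add_final_layer_norm=True):
    for decoder_layer in layers:
        x = decoder_layer(x)
    # config.add_final_layer_norm: one extra LayerNorm after the last decoder layer
    return final_layer_norm(x) if add_final_layer_norm else x
```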
### Future PRs
- The model returns the same variables as fairseq, but the tokenizer is not yet at parity with fairseq. This is the next PR in the pipeline.
- the `mbart-large-cc25` (no finetuning) model has a very different state dict. Also WIP.
### Misc
- the link_tester got angry about me not typing out URLs in this PR. Unclear why it didn't happen earlier.
Needs documentation but unclear where to put it.
| 04-03-2020 00:06:18 | 04-03-2020 00:06:18 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3602?src=pr&el=h1) Report
> Merging [#3602](https://codecov.io/gh/huggingface/transformers/pull/3602?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1a16d9d94a81e95463b166adfce4a8e02cdc47eb&el=desc) will **not change** coverage by `%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3602?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3602 +/- ##
=======================================
Coverage 78.06% 78.06%
=======================================
Files 100 100
Lines 17181 17181
=======================================
Hits 13413 13413
Misses 3768 3768
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3602?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3602?src=pr&el=footer). Last update [1a16d9d...1a16d9d](https://codecov.io/gh/huggingface/transformers/pull/3602?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Hi, is there any reason that cc-25 was removed and only the fine-tuned one is kept? any way I can quickly enable that? thanks<|||||>When the authors released the CC25 checkpoint, it was shaped differently than `mbart-large-en-ro` and I am not clear on whether that is fixed yet.
See https://github.com/pytorch/fairseq/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc+mbart |
transformers | 3,601 | closed | [Generate, Test] Split generate test function into beam search, no beam search | - Clean the generate testing functions
- Also should fix flaky behaviour of bad_word_tokens test (see #3367 and https://circleci.com/gh/huggingface/transformers/27997?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link)
| 04-02-2020 20:43:57 | 04-02-2020 20:43:57 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3601?src=pr&el=h1) Report
> Merging [#3601](https://codecov.io/gh/huggingface/transformers/pull/3601?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f68d22850ced09bb194b30068ff94ca3409f0879&el=desc) will **decrease** coverage by `0.00%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3601?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3601 +/- ##
==========================================
- Coverage 78.06% 78.05% -0.01%
==========================================
Files 100 100
Lines 17134 17134
==========================================
- Hits 13375 13374 -1
- Misses 3759 3760 +1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3601?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3601/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.28% <0.00%> (-0.17%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3601/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.10% <0.00%> (ø)` | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3601?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3601?src=pr&el=footer). Last update [f68d228...857e77e](https://codecov.io/gh/huggingface/transformers/pull/3601?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,600 | closed | Why isn't there a SequenceClassificationModel for GPT-2 (and some other models)? | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
<!-- Description of your issue -->
Why isn't there a SequenceClassificationModel (like there is for BERT) for GPT-2? I was able to implement this pretty easily by adding a "[CLS]" token to the vocabulary (like in the GPT2DoubleHeadsModel), appending sequences with "[CLS]", and then adding a linear layer that maps from the embedding of "[CLS]" to a vector of logits corresponding to the classes. After training, this model worked comparably to BertSequenceClassificationModel for my use-case. It would be nice to have this model in the transformers library and not have to code it up from scratch.
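For concreteness, here is a rough sketch of what I mean (the class name `GPT2ForSequenceClassification` is hypothetical - it does not exist in the library, and the details below are just one way to wire it up):
```python
import torch
from transformers import GPT2Model, GPT2Tokenizer

class GPT2ForSequenceClassification(torch.nn.Module):
    def __init__(self, num_labels=2):
        super().__init__()
        self.tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
        self.tokenizer.add_special_tokens({"cls_token": "[CLS]"})
        self.transformer = GPT2Model.from_pretrained("gpt2")
        self.transformer.resize_token_embeddings(len(self.tokenizer))
        self.classifier = torch.nn.Linear(self.transformer.config.n_embd, num_labels)

    def forward(self, text):
        # append [CLS] and classify from its final hidden state
        input_ids = self.tokenizer.encode(text + " [CLS]", return_tensors="pt")
        hidden_states = self.transformer(input_ids)[0]    # (1, seq_len, n_embd)
        return self.classifier(hidden_states[:, -1, :])   # logits over the classes
```
Training would then just be cross-entropy on these logits, essentially the same recipe as with `BertForSequenceClassification`.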
If this sounds like a good idea, I can make a pull request with a GPT2SequenceClassificationModel added. If not, why is it not a good idea? | 04-02-2020 20:36:30 | 04-02-2020 20:36:30 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,599 | closed | Why is there not a SequenceClassification model for GPT-2? | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
Why isn't there a SequenceClassificationModel (like there is for BERT) for GPT-2? I was able to implement this pretty easily by adding a "[CLS]" token to the vocabulary (like in the GPT2DoubleHeadsModel), appending sequences with "[CLS]", and then adding a linear layer that maps from the embedding of "[CLS]" to a vector of logits corresponding to the classes. After training, this model worked comparably to BertSequenceClassificationModel for my use-case. It would be nice to have this model in the transformers library and not have to code it up from scratch.
If this sounds like a good idea, I can make a pull request with a GPT2SequenceClassificationModel added. If not, why is it not a good idea?
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**: | 04-02-2020 20:36:28 | 04-02-2020 20:36:28 | My apologies, my computer glitched and posted twice. Please close this issue and refer to: https://github.com/huggingface/transformers/issues/3600 |
transformers | 3,598 | closed | After enable fp16, torch.save model has error | # 🐛 Bug
After complete training, the model cannot be saved.
## Information


| 04-02-2020 20:28:36 | 04-02-2020 20:28:36 | Please use the template in the future. It is there for a reason. As mentioned in the template, don't post a screenshot. Use code blocks, post your code or the example script that you used, and the error trace. Also provide your version of PyTorch and Python.
https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,597 | closed | CTRL generates French text when I want English texts | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
CTRL generates French texts when I want English texts.
I run this command: **python examples/run_generation.py --model_type ctrl --model_name_or_path ctrl --prompt "Looking well today" --length 500 --temperature 0.8 --repetition 1.2**
What do I need to add or change to generate English texts only?
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
| 04-02-2020 20:16:13 | 04-02-2020 20:16:13 | CTRL uses control codes, as is mentioned in our documentation, with examples on the [original repository](https://github.com/salesforce/ctrl#generations). Have you tried using these control codes?<|||||>How do I specify which control code I want to use? Do I have to do that in the command line and if yes, how? This is the Control code I want to use 16360 (i.e. politics).
Thank you <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,596 | closed | batch_encode_plus with pad_to_max_length but no max_length is not padding the output | # 🐛 Bug
## Information
Model I am using BERT:
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
import torch
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
seq1 = "This is a short sequence"
seq2 = "This will be a much longer sequence, so the short one requires padding"
input = [seq1, seq2]
# Explicitly specified padding length
max_len = 20
tck_temp = tokenizer.batch_encode_plus(input, max_length=max_len, pad_to_max_length=True)
inp_ids = tck_temp['input_ids']
assert len(inp_ids[0]) == len(inp_ids[1]) == max_len, "Both inputs should have length equal to 20"
# Implicit padding length set to models max length
model_max_len = tokenizer.max_len
tck_temp = tokenizer.batch_encode_plus(input, pad_to_max_length=True)
inp_ids = tck_temp['input_ids']
assert len(inp_ids[0]) == len(inp_ids[1]) == model_max_len, "Both inputs should have length equal to %d" % model_max_len
```
## Expected behavior
According to the documentation, `batch_encode_plus` with `pad_to_max_length=True` should pad sequences to the model's maximal length if `max_length` is not explicitly specified.
The attached script should run without raising Exception.
From documentation
"If no max length is specified, the padding is done up to the model’s max length."
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.7.0
- Platform: Linux-4.15.0-74-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0 (False)
- Tensorflow version (GPU?): N/A
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 04-02-2020 19:49:11 | 04-02-2020 19:49:11 | Hi,
Is anyone working on this or is it open for someone to take?
I was able to reproduce the issue.
If not being worked on by anyone, I would like to take it up.
Thanks<|||||>I'm facing similar issues with batch_encode_plus:
```
import torch
import transformers

tokenizer = transformers.BertTokenizer.from_pretrained('bert-base-cased')
a = ['short sentence', 'Larger sentence than short sentence']
input_ids = torch.tensor(tokenizer.batch_encode_plus(a, pad_to_max_length=True)['input_ids'])
```
It doesn't work for me; it returns this error:
`ValueError: expected sequence of length 2 at dim 1 (got 6)`
Thanks<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>This should now be fixed on master with the updated tokenizer API |
transformers | 3,595 | closed | [Generation] delete print statement | Somehow forgot to delete it from PR #3550. | 04-02-2020 19:36:49 | 04-02-2020 19:36:49 | |
transformers | 3,594 | closed | Wrong tokenization for distilbert-base-multilingual-cased | # 🐛 Bug
## Information
Model I am using (DistillBert):
The problem arises when using:
* my own modified scripts: (give details below)
The tasks I am working on is:
* my own task or dataset: (give details below)
## To reproduce
with transformers 2.3.0:
```python
import torch
from transformers import DistilBertTokenizer
tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-multilingual-cased")
result = torch.tensor(tokenizer.encode("Hello, my dog is cute"))
print (result)
itos = tokenizer.ids_to_tokens
print (itos[61694])
print (itos[10133])
# The original token for 'Hello' exits but for some reason it's not used?
print (itos[31178])
```
Output:
```bash
[101, 61694, 10133, 117, 15127, 17835, 10124, 21610, 10112, 102]
'hell'
'##o'
'Hello'
```
## Expected behavior
```python
import torch
from transformers import DistilBertTokenizer
tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-multilingual-cased")
result = torch.tensor(tokenizer.encode("Hello, my dog is cute"))
print (result)
```
Output:
```bash
[101, 31178, 117, 15127, 17835, 10124, 21610, 10112, 102]
```
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.3.0
- Python version: >3.6
| 04-02-2020 18:23:05 | 04-02-2020 18:23:05 | Can you show how you initialize `tokenizer`? Which vocab are you using?<|||||>> Can you show how you initialize `tokenizer`? Which vocab are you using?
sorry I forgot that... I updated the code in the issue already.
I was using `distilbert-base-multilingual-cased`
`tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-multilingual-cased")`
<|||||>This behaviour seems to have been solved in v2.7.0 as running your code yields the correct result in my environment.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,593 | closed | Transformers and BERT: dealing with possessives and apostrophes when encode | Let's consider two sentences:
"why isn't Alex's text tokenizing? The house on the left is the Smiths' house"
Now let's tokenize and decode:
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)
tokenizer.decode(tokenizer.convert_tokens_to_ids(tokenizer.tokenize("why isn't Alex's text tokenizing? The house on the left is the Smiths' house")))
We get:
"why isn't alex's text tokenizing? the house on the left is the smiths'house"
**My question is: how do I deal with the missing space in some possessives like *smiths'house*?**
To me, it seems that the tokenization process in Transformers is not done right. Let's consider the output of
tokenizer.tokenize("why isn't Alex's text tokenizing? The house on the left is the Smiths' house")
we get:
['why', 'isn', "'", 't', 'alex', "'", 's', 'text', 'token', '##izing', '?', 'the', 'house', 'on', 'the', 'left', 'is', 'the', 'smith', '##s', "'", 'house']
So at this step we have already lost important information about the last apostrophe. It would be much better if tokenization were done in another way:
['why', 'isn', "##'", '##t', 'alex', "##'", '##s', 'text', 'token', '##izing', '?', 'the', 'house', 'on', 'the', 'left', 'is', 'the', 'smith', '##s', "##'", 'house']
In this way, tokenization keeps all information about apostrophes, and we will not have problems with possessives. | 04-02-2020 16:23:17 | 04-02-2020 16:23:17 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Any idea how to solve this problem of the apostrophe separating a word into tokens with different word ids in the BERT tokenizer? |