repo | number | state | title | body | created_at | closed_at | comments |
---|---|---|---|---|---|---|---|
transformers | 9,711 | closed | Add support for RemBERT | # 🌟 New model addition
## Model description
Hi,
I just found this really interesting upcoming ICLR 2021 paper: "Rethinking Embedding Coupling in Pre-trained Language Models":
> We re-evaluate the standard practice of sharing weights between input and output embeddings in state-of-the-art pre-trained language models. We show that decoupled embeddings provide increased modeling flexibility, allowing us to significantly improve the efficiency of parameter allocation in the input embedding of multilingual models. By reallocating the input embedding parameters in the Transformer layers, we achieve dramatically better performance on standard natural language understanding tasks with the same number of parameters during fine-tuning. We also show that allocating additional capacity to the output embedding provides benefits to the model that persist through the fine-tuning stage even though the output embedding is discarded after pre-training. Our analysis shows that larger output embeddings prevent the model's last layers from overspecializing to the pre-training task and encourage Transformer representations to be more general and more transferable to other tasks and languages. Harnessing these findings, we are able to train models that achieve strong performance on the XTREME benchmark without increasing the number of parameters at the fine-tuning stage.
Paper can be found [here](https://openreview.net/forum?id=xpFFI_NtgpW).
Thus, the authors propose a new *Rebalanced mBERT (**RemBERT**) model* that outperforms XLM-R. An integration into Transformers would be awesome!
I would really like to help with the integration into Transformers, as soon as the model is out!
## Open source status
* [ ] the model implementation is available: authors plan to release model implementation
* [ ] the model weights are available: authors plan to release model checkpoint
* [ ] who are the authors: @hwchung27, @Iwontbecreative, Henry Tsai, Melvin Johnson and @sebastianruder
| 01-20-2021 23:16:44 | 01-20-2021 23:16:44 | Decided it would be easier for us to take care of this since we plan to directly release the model checkpoint in huggingface.
Started working on it over the week-end, will share PR once it is more polished. <|||||>This is great news @Iwontbecreative! Let us know if you need help. |
transformers | 9,710 | closed | Let Trainer provide the device to perform training | # 🚀 Feature request
Training_args object [chooses](https://github.com/huggingface/transformers/blob/7acfa95afb8194f8f9c1f4d2c6028224dbed35a2/src/transformers/training_args.py#L477) the training device by itself(cuda:0 by default). I request a possibility for a user to be able to choose it :)
## Motivation
Imagine a situation where we have a cluster with several GPUs and cuda:0 memory is full (I have it right now :)), so the user cannot use the Trainer object for training.
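Until that is configurable, one workaround (a sketch, not an official Trainer option) is to restrict which GPU the process can see before `TrainingArguments` is created:
```python
import os

# Expose only the free GPU to this process *before* torch initializes CUDA,
# so the Trainer's default "cuda:0" maps to that device.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"  # "1" is just an example index

from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(output_dir="out")  # now resolves to the visible GPU
```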
| 01-20-2021 21:49:21 | 01-20-2021 21:49:21 | |
transformers | 9,709 | closed | DeepSpeed: Exits with CUDA runtime error on A100 (requires recompiling DeepSpeed for NVIDIA 8.0 Arch) | ## Environment info
- `transformers` version: 4.3.0 (unofficial, off current main branch)
- Platform: Linux
- Python version: 3.7
- PyTorch version (GPU?): 1.7.1
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
## Information
Model I am using (Bert, XLNet ...): T-5
The problem arises when using:
* [ ] the official example scripts: examples/seq2seq/finetune_trainer.py
## Issue
In the hopes this saves others some time since it took a while for me to fix: When running the new DeepSpeed mode in Transformers 4.3.0 on an A100 GPU, it will exit with a runtime error:
RuntimeError: CUDA error: no kernel image is available for execution on the device
For me, this was due to installing DeepSpeed from pip rather than from source. The A100 architecture appears not to be (as of this writing) included in the default build. If you install from source as described in this post (https://www.deepspeed.ai/tutorials/advanced-install/), the error goes away. The post suggests selecting the architecture via the TORCH_CUDA_ARCH_LIST environment variable, but I found that just using the install.sh script (which I am assuming auto-detects the architecture of your GPU) worked more successfully.
| 01-20-2021 21:04:14 | 01-20-2021 21:04:14 | Pinging @stas00<|||||>Heh, actually I wrote this section: https://www.deepspeed.ai/tutorials/advanced-install/#building-for-the-correct-architectures and the autodetector, since I originally had the same issue.
This problem is also partially in pytorch - which is now fixed too in pytorch-nightly.
`TORCH_CUDA_ARCH_LIST` is there if, say, you want to use the binary build on another machine or want to optimize it for whatever reason, e.g. I build it with:
```
TORCH_CUDA_ARCH_LIST="6.1;8.6" DS_BUILD_OPS=1 pip install --no-cache -v --disable-pip-version-check -e .
```
because I have 1070 and 3090 cards.
I'm glad you found a way to solve it.
Now, this is a purely DeepSpeed issue and has nothing to do with transformers, other than perhaps a documentation issue.
I'm all ears at how perhaps `transformers` can improve the doc on our side to help the users find a solution quickly.
1. Probably should recommend to install from source
2. but then when we bail on missing `deepspeed` we say do `pip install deepspeed` - do you think we should change that to:
> `pip install deepspeed` or if it doesn't work install from source?
The thing is `pip install deepspeed` is installing from source, but I think it perhaps isn't using the same build script? So should we say:
> `pip install deepspeed` or if it doesn't work install from https://github.com/microsoft/deepspeed?
or maybe it's easier to just say:
> install from https://github.com/microsoft/deepspeed?
What happens if you install with:
```
DS_BUILD_OPS=1 pip install deepspeed
```
Perhaps your issue is JIT/PTX, which happens if you don't do the above - i.e. the binary build gets postponed till run time. `DS_BUILD_OPS=1` forces the binary build.
In any case let's discuss this over at DeepSpeed Issues - @PeterAJansen, would you please open an issue there (since only you can report/reproduce the specific error) so they can fix the pip build, and tag me?
BTW, fairscale has its own issues with `pip install fairscale` - I also have to build from the repo, because I am forced to use pytorch-nightly due to rtx-30* and it won't build at all via `pip` directly.
so whatever we decide we should do the same for `fairscale`.
Thank you!<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,708 | closed | fix typo | # What does this PR do?
fix typo
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| 01-20-2021 19:46:42 | 01-20-2021 19:46:42 | |
transformers | 9,707 | closed | Allow text generation for ProphetNetForCausalLM | # What does this PR do?
The configuration for ProphetNetForCausalLM is overwritten at initialization to ensure that it is used as a decoder (and not as an encoder_decoder) for text generation.
The initialization of the parent class for ProphetNetForCausalLM is done before this overwrite, causing the `model.config.is_encoder_decoder` to remain possibly True. This leads to an error if the generate method of the model is later called as the non-existing method `get_encoder` is called.
Fixes https://github.com/huggingface/transformers/issues/9702
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten
| 01-20-2021 18:56:34 | 01-20-2021 18:56:34 | Thanks a lot for fixing it @guillaume-be |
transformers | 9,706 | closed | [PR/Issue templates] normalize, group, sort + add myself for deepspeed | This PR:
* case-normalizes, groups and sorts the tagging entries
* removes one duplicate
* adds myself for deepspeed
* adds/removes/moves others based on their suggestions through this PR
@LysandreJik, @sgugger, @patrickvonplaten
| 01-20-2021 17:08:51 | 01-20-2021 17:08:51 | Once the PR template is complete and everybody is happy I will sync with the Issue template. So please only review the former if you're just joining in.<|||||>should we add bullets? As in:
```
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, longformer, reformer, t5, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0
- trainer: @sgugger
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
```<|||||>I like bullets.<|||||>someone to tag for ONNX issues? @mfuntowicz? |
transformers | 9,705 | closed | [deepspeed] fix the backward for deepspeed | This PR fixes a bug in my deepspeed integration - `backward` needs to be called on the deepspeed object.
@sgugger
Fixes: https://github.com/huggingface/transformers/issues/9694
| 01-20-2021 16:55:41 | 01-20-2021 16:55:41 | Thanks for fixing! |
transformers | 9,704 | closed | ValueError("The training dataset must have an asserted cardinality") when running run_tf_ner.py | ## Environment info
- `transformers` version: 4.2.0
- Platform: linux
- Python version: python3.6
- PyTorch version (GPU?): 1.7.1 gpu
- Tensorflow version (GPU?):2.4.0
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?:no
### Who can help
@stefan-it
## Information
Model I am using (Bert, XLNet ...): bert-base-multilingual-cased
The problem arises when using:
- [yes ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name) ner GermEval 2014
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Created a new conda environment using `conda create -n xftf2 python=3.6`
2. `pip install transformers==4.2.0 tensorflow==2.4 torch==1.7.1`
3. Prepare the data set(train.txt, test.txt, dev.txt) according to the README under the folder token-classification, run run_tf_ner.py
setting from_pt=True, with the following parameters:
```
--data_dir ./data \
--labels ./data/labels.txt \
--model_name_or_path bert-base-multilingual-cased \
--output_dir ./output \
--max_seq_length 128 \
--num_train_epochs 4\
--per_device_train_batch_size 32 \
--save_steps 500 \
--seed 100 \
--do_train \
--do_eval \
--do_predict
```
Here is the stack trace:
```
01/21/2021 00:12:18 - INFO - utils_ner - *** Example ***
01/21/2021 00:12:18 - INFO - utils_ner - guid: dev-5
01/21/2021 00:12:18 - INFO - utils_ner - tokens: [CLS] Dara ##us entwickelte sich im Rok ##oko die Sitt ##e des gemeinsamen Wein ##ens im Theater , das die Stand ##es ##grenze ##n innerhalb des Publikum ##s ΓΌber ##brΓΌcken sollte . [SEP]
01/21/2021 00:12:18 - INFO - utils_ner - input_ids: 101 95621 10251 28069 10372 10211 51588 20954 10128 105987 10112 10139 58090 90462 12457 10211 16223 117 10242 10128 15883 10171 58433 10115 21103 10139 63332 10107 10848 99765 17799 119 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
01/21/2021 00:12:18 - INFO - utils_ner - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
01/21/2021 00:12:18 - INFO - utils_ner - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
01/21/2021 00:12:18 - INFO - utils_ner - label_ids: -1 24 -1 24 24 24 6 -1 24 24 -1 24 24 24 -1 24 24 24 24 24 24 -1 -1 -1 24 24 24 -1 24 -1 24 24 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
Traceback (most recent call last):
File "run_tf_ner.py", line 299, in <module>
main()
File "run_tf_ner.py", line 231, in main
trainer.train()
File "/.conda/envs/xftf2/lib/python3.6/site-packages/transformers/trainer_tf.py", line 457, in train
train_ds = self.get_train_tfdataset()
File "/.conda/envs/xftf2/lib/python3.6/site-packages/transformers/trainer_tf.py", line 141, in get_train_tfdataset
raise ValueError("The training dataset must have an asserted cardinality")
ValueError: The training dataset must have an asserted cardinality
```
## Expected behavior
In such a case, are there any tips to deal with it? I really appreciate any help you can provide.
| 01-20-2021 16:54:35 | 01-20-2021 16:54:35 | Maybe @jplu has an idea!<|||||>Hello!
This error is always raised by the TFTrainer when your dataset has not a cardinality attached.
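For context, a cardinality can be attached to a `tf.data.Dataset` explicitly; a minimal sketch (the function and argument names below are hypothetical):
```python
import tensorflow as tf

def attach_cardinality(dataset: tf.data.Dataset, num_examples: int) -> tf.data.Dataset:
    # A dataset built from a generator has unknown length; assert_cardinality
    # attaches the known number of examples so downstream code can rely on it.
    return dataset.apply(tf.data.experimental.assert_cardinality(num_examples))
```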
Can you give me the version of the `run_tf_ner.py` you are using please?<|||||>The run_tf_ner.py I used was downloaded from https://github.com/huggingface/transformers/tree/master/examples/token-classification, the transformers version is 4.2.0, tensorflow == 2.4.0
@jplu <|||||>Are you sure this is the exact version and not from another commit? Because I see a cardinality assigned in the current script. Even though the script has not been working since 4.2.0, that is for a different reason.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,703 | closed | Fix WAND_DISABLED test | # What does this PR do?
As reported in #9699, the test for the WAND_DISABLED environment variable is not working right now. This PR fixes that.
Fixes #9699
| 01-20-2021 16:49:29 | 01-20-2021 16:49:29 | |
transformers | 9,702 | closed | ProphetNetForCausalLM text generation fails | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: latest master (4.3.0.dev0)
- Platform: win64
- Python version: 3.7
- PyTorch version (GPU?): 1.7
- Tensorflow version (GPU?): N/A
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@patrickvonplaten
## Information
Model I am using: ProphetNet
The `ProphetNetForCausalLM` defined at https://github.com/huggingface/transformers/blob/88583d4958ae4cb08a4cc85fc0eb3aa02e6b68af/src/transformers/models/prophetnet/modeling_prophetnet.py#L1884 overwrites the `is_encoder_decoder` flag to a value of False to ensure the mode is used as a decoder only, regardless of what is given in the configuration file.
However, the initialization of the parent class is done before this overwrite, causing the `model.config.is_encoder_decoder` to remain possibly `True`. This leads to an error if the `generate` method of the model is later called, as the non-existing method `get_encoder` is called:
```python
AttributeError: 'ProphetNetForCausalLM' object has no attribute 'get_encoder'
```
The script below allows reproducing:
```python
from transformers import ProphetNetTokenizer, ProphetNetForCausalLM
tokenizer = ProphetNetTokenizer.from_pretrained('microsoft/prophetnet-large-uncased')
model = ProphetNetForCausalLM.from_pretrained('patrickvonplaten/prophetnet-decoder-clm-large-uncased').cuda()
model = model.eval()
input_sentences = ["It was a very nice and sunny"]
inputs = tokenizer(input_sentences, return_tensors='pt')
# Generate text
summary_ids = model.generate(inputs['input_ids'].cuda(),
num_beams=4,
temperature=1.0,
top_k=50,
top_p=1.0,
repetition_penalty=1.0,
min_length=10,
max_length=32,
no_repeat_ngram_size=3,
do_sample=False,
early_stopping=True)
model_output = tokenizer.batch_decode(summary_ids, skip_special_tokens=True)
```
## Step to fix it
The call to `super().__init__(config)` in the initialization method should be moved from modeling_prophetnet.py#L1886 to modeling_prophetnet.py#L1890 (after the configuration object was modified). If you agree, I could submit a small PR with this change; I tested locally and the model does not crash.
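A toy sketch of the reordering (class and attribute names here are illustrative, not the actual ProphetNet code):
```python
import copy
from types import SimpleNamespace

class ParentModel:
    def __init__(self, config):
        self.config = config  # the parent stores whatever flags the config has at this point

class CausalLMSketch(ParentModel):
    def __init__(self, config):
        config = copy.deepcopy(config)
        config.is_decoder = True
        config.is_encoder_decoder = False  # fix the flags *before* the parent sees the config
        super().__init__(config)

model = CausalLMSketch(SimpleNamespace(is_decoder=False, is_encoder_decoder=True))
assert model.config.is_encoder_decoder is False
```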
As a side note, after the fix the generation quality remains very poor; is there a pretrained snapshot for ProphetNet that can actually be used for causal generation? | 01-20-2021 15:49:00 | 01-20-2021 15:49:00 | You're totally right @gui11aume! Thanks for posting this issue - it would be great if you could open a PR to fix it. The checkpoint I uploaded won't work well because I just took the decoder part of the encoder-decoder model and removed all cross-attention layers. The model would have to be fine-tuned to work correctly. The main motivation to add `ProphetNetForCausalLM` however was to enable things like `Longformer2ProphetNet` as described here: https://github.com/huggingface/transformers/pull/9033
transformers | 9,701 | closed | how to run pegasus finetune on multiple gpus | ## Environment Information
- transformers version: 4.2.0dev0
- Platform: Linux-3.10.0-1062.18.1.el7.x86_64-x86_64-with-centos-7.7.1908-Core
- Python version: 3.7.3
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): 2.4.0 (False)
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
## Who might help
@sgugger
@patrickvonplaten
@patil-suraj
## Information
The fine-tuning process is taking a really long time, so I want to run it in parallel on multiple GPUs.
The problem arises when using:
I have not found instructions on which arguments enable training on multiple GPUs; are there configurations for something like nodes, etc., or should I implement it in my own script?
## To reproduce
```
python finetune.py \
--gpus 0 \
--learning_rate=1e-4 \
--do_train \
--do_predict \
--n_val 1000 \
--val_check_interval 0.25 \
--max_source_length 512 --max_target_length 56 \
--freeze_embeds --label_smoothing 0.1 --adafactor --task summarization_xsum \
--model_name_or_path google/pegasus-xsum \
--output_dir=xsum_results \
--data_dir xsum \
--tokenizer_name google/pegasus-large \
"$@"
```
and which of the below is correct? I saw both in other posts:
--model_name_or_path google/pegasus-xsum
--tokenizer_name google/pegasus-large \
or
--model_name_or_path google/pegasus-large
--tokenizer_name google/pegasus-xsum \
I think it should be the second one but I am not sure.
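For what it's worth, the usual pattern is for the tokenizer and the model to come from the same checkpoint name; a hedged sketch (which checkpoint to pick depends on whether you want to start from the already-fine-tuned XSum model or the general pretrained one):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "google/pegasus-large"  # or "google/pegasus-xsum" to start from the XSum model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
```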
## Expected behavior
1. Enable fine-tuning of the Pegasus model on multiple GPUs.
2. Inject the correct arguments. | 01-20-2021 13:46:46 | 01-20-2021 13:46:46 | Please use the [forums](https://discuss.huggingface.co/) to ask questions like this. Also note that there is no `finetune` script in the example folder anymore, so you should probably be using `finetune_trainer` or `run_seq2seq`. |
transformers | 9,700 | closed | NAN return from F.softmax function in pytorch implementation of BART self-attention | Pytorch 1.7.1 with GPU
transformers 3.0.2
Filling all masked positions with "-inf" may cause a NAN issue for softmax function returns. | 01-20-2021 13:28:09 | 01-20-2021 13:28:09 | It may cause similar issues in other models and other versions of the same model as well.<|||||>HI @KaiQiangSong
We haven't yet observed `NaN`s with BART specifically, could you post a code snippet where the model returns `NaN` so we could take a look ?<|||||>> HI @KaiQiangSong
>
> We haven't yet observed `NaN`s with BART specifically, could you post a code snippet where the model returns `NaN` so we could take a look ?
Sorry, I can't publish my code right now because it is unpublished research.
I've fixed the issue myself by changing the masked_fill value from float("-inf") to -1e5 (to support AMP as well).
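A minimal sketch of that kind of change (tensor names are made up; this is not the actual BART attention code):
```python
import torch

scores = torch.randn(2, 4)
mask = torch.tensor([[False, False, True, True],
                     [True,  True,  True, True]])  # second row: every position masked

# With -inf, a fully masked row becomes 0/0 inside softmax and comes back as NaN;
# a large negative finite value keeps the result defined (-1e4 still fits in fp16).
nan_probs  = torch.softmax(scores.masked_fill(mask, float("-inf")), dim=-1)
safe_probs = torch.softmax(scores.masked_fill(mask, -1e4), dim=-1)
print(nan_probs[1])   # tensor([nan, nan, nan, nan])
print(safe_probs[1])  # roughly uniform over the fully masked row
```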
Just post this issue here to let you know there might be a potential issue.<|||||>I have the same exact problem.
Ill try with the -1e5 trick and see if it helps me too.
Thanks a lot!<|||||>> I have the same exact problem.
>
> Ill try with the -1e5 trick and see if it helps me too.
>
> Thanks a lot!
glad that my solution helps.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,699 | closed | WANDB_DISABLED env variable not working as expected | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.2.1
- Platform: Linux-5.4.34-1-pve-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@sgugger
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten @TevenLeScao
Blenderbot: @patrickvonplaten
Bart: @patrickvonplaten
Marian: @patrickvonplaten
Pegasus: @patrickvonplaten
mBART: @patrickvonplaten
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
RAG: @patrickvonplaten, @lhoestq
FSMT: @stas00
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
ray/raytune: @richardliaw @amogkam
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
## Information
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
I'm using modified scripts, but the error is related to a specific function in the `integrations.py` module, as explained below.
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: SQUaD
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Make sure that `wandb` is installed on your system and set the environment variable `WANDB_DISABLED` to "true", which should entirely disable `wandb` logging
2. Create an instance of the `Trainer` class
3. Observe that the Trainer always reports the error "WandbCallback requires wandb to be installed. Run `pip install wandb`."
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
I would have expected this to simply disable `wandb` logging, but instead setting the `WANDB_DISABLED` environment variable makes the `Trainer` raise the error above, completely preventing the user from using `wandb`.
After a bit of digging in the source code, I discovered that the `Trainer` uses the `WandbCallback` class (in `integrations.py`) to handle `wandb` logging. In that class, the `__init__` method has the following lines:
```python
has_wandb = is_wandb_available()
assert has_wandb, "WandbCallback requires wandb to be installed. Run `pip install wandb`."
```
In particular, by checking the `is_wandb_available()` function, we can see that it performs the following check:
```python
if os.getenv("WANDB_DISABLED"):
return False
```
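For comparison, a check that only reacts to explicit truthy values would look something like this (a sketch, not necessarily the fix that ends up being merged):
```python
import os

_TRUE_VALUES = {"1", "ON", "YES", "TRUE"}

def wandb_explicitly_disabled() -> bool:
    # Only values such as "1", "true" or "yes" disable wandb, so that
    # WANDB_DISABLED=false no longer switches logging off by accident.
    return os.getenv("WANDB_DISABLED", "").upper() in _TRUE_VALUES
```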
The original `if os.getenv("WANDB_DISABLED")` statement does not seem to be correct, since environment variables are stored as strings and the truth value of a string depends only on whether it is empty. So, for example, not setting the `WANDB_DISABLED` variable at all leaves `wandb` enabled, but setting it to any non-empty value (even "false") entirely disables `wandb`. | 01-20-2021 13:21:21 | 01-20-2021 13:21:21 | You're right, thanks for reporting! The PR mentioned above should fix that. |
transformers | 9,698 | closed | Model Parallelism for DeBERTa |
Hi,
Is there any way to apply [Model Parallelism](https://pytorch.org/tutorials/intermediate/model_parallel_tutorial.html#apply-model-parallel-to-existing-modules) for DeBERTa ?
I want to run 'microsoft/deberta-large' on 2 GPU's (32 GB each) using [PyTorch's Model Parallelism](https://pytorch.org/tutorials/intermediate/model_parallel_tutorial.html#apply-model-parallel-to-existing-modules) . | 01-20-2021 12:41:35 | 01-20-2021 12:41:35 | Hello! It's currently not implemented for DeBERTa, unfortunately. Following the document you linked, it should be pretty easy to do it in a script!<|||||>Hi @LysandreJik ,
Will DeBERTa (or RoBERTa, or ALBERT) work if I separate its layers into two or three parts and connect them sequentially?
Because this is what is happening in [previous link](https://pytorch.org/tutorials/intermediate/model_parallel_tutorial.html#apply-model-parallel-to-existing-modules) that I shared<|||||>You would need to cast the intermediate hidden states to the correct devices as well. You can see that in the example you shared, see how the intermediate hidden states were cast to cuda 1:
```py
def forward(self, x):
x = self.seq2(self.seq1(x).to('cuda:1'))
return self.fc(x.view(x.size(0), -1))
```<|||||>Hi @LysandreJik ,
For DeBERTa, I'm able to split entire model into 'embedding', 'encoder', 'pooler', 'classifier' and 'dropout' layers as shown in below pic.

With this approach, I trained on the IMDB classification task by assigning the 'encoder' to the second GPU and the other layers to the first GPU. At the end of training, the second GPU consumed a lot more memory than the first GPU, which resulted in a 20-80 split of the entire model.
So I tried splitting the encoder layers as well, as shown below, but I am getting this error - **"TypeError: forward() takes 1 positional argument but 2 were given"**
```
embed = dberta.deberta.embeddings.to('cuda:0')
f6e = dberta.deberta.encoder.layer[:6].to('cuda:0')
l6e = dberta.deberta.encoder.layer[6:].to('cuda:1')
pooler = dberta.pooler.to('cuda:0')
classifier = dberta.classifier.to('cuda:0')
dropout = dberta.dropout.to('cuda:0')
test = "this is to test deberta"
inp_ids = tok_dberta(test, return_tensors='pt').input_ids
att_mask = tok_dberta(test, return_tensors='pt').attention_mask
emb_out = embed(inp_ids.to('cuda:0'))
first_6_enc_lay_out = f6e(emb_out)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-15-379d948e5ba5> in <module>
----> 1 first_6_enc_lay_out = f6e(emb_out)
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
TypeError: forward() takes 1 positional argument but 2 were given
```
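One note on the trace itself: `encoder.layer[:6]` is an `nn.ModuleList`, which has no `forward`, so calling `f6e(emb_out)` raises exactly this `TypeError`; the slice has to be iterated layer by layer instead. A sketch continuing the snippet above (the per-layer argument list is an assumption and may differ for DeBERTa):
```python
hidden_states = emb_out
for layer in f6e:  # nn.ModuleList must be iterated, not called
    hidden_states = layer(hidden_states, att_mask.to('cuda:0'))  # layer signature assumed
hidden_states = hidden_states.to('cuda:1')
for layer in l6e:
    hidden_states = layer(hidden_states, att_mask.to('cuda:1'))
```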
Please suggest how to proceed further.<|||||>Hi @LysandreJik ,
Plz update on the above issue that I'm facing<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,697 | closed | Fix TF template | # What does this PR do?
Fix a template issue for TF. | 01-20-2021 11:30:57 | 01-20-2021 11:30:57 | Thanks for fixing! |
transformers | 9,696 | closed | Add notebook | # What does this PR do?
Add a notebook to the list of community notebooks, illustrating how you can fine-tune `LayoutLMForSequenceClassification` for classifying scanned documents, just as invoices or resumes.
## Who can review?
@sgugger
| 01-20-2021 11:12:52 | 01-20-2021 11:12:52 | |
transformers | 9,695 | closed | The model learns nothing after 3 epochs of training | I have trained a multilingual Bert model on 3 different input data configurations ( imbalanced, partial balanced, and full balanced) for the sentiment classification task. Everything works fine so far, except the zero-shot model being trainined on the full balanced dataset (training data: label balanced data; val/test data: label balanced data). however, the result is very weird:
<img width="638" alt="Screen Shot 2021-01-20 at 11 45 47 AM" src="https://user-images.githubusercontent.com/41744366/105165475-afb1b800-5b16-11eb-9d8f-d775fa9a07ee.png">
As you can see, the model has not learned anything, and it classifies everything into neutral in the testing phase.
Could anyone help, please?
| 01-20-2021 10:58:44 | 01-20-2021 10:58:44 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discusss.huggingface.co) instead? You'll get more answers over there.
Thanks! |
transformers | 9,694 | closed | ModuleAttributeError: 'GPT2LMHeadModel' object has no attribute 'backward' | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.2.1
- Platform: Linux-4.19.0-12-cloud-amd64-x86_64-with-debian-10.6
- Python version: 3.7.8
- PyTorch version (GPU?): 1.6.0a0+9907a3e (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No(?)
### Who can help
albert, bert, GPT2, XLM: @LysandreJik
Trainer: @sgugger
## To reproduce
Steps to reproduce the behavior:
1. Set up a TrainingArguments for a GPT2LMHeadModel with the following deepspeed config:
```
{
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"hysteresis": 2,
"min_loss_scale": 1
},
"zero_optimization": {
"stage": 2,
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"overlap_comm": true,
"contiguous_gradients": true,
"cpu_offload": false
},
"optimizer": {
"type": "Adam",
"params": {
"adam_w_mode": true,
"lr": 3e-5,
"betas": [ 0.9, 0.999 ],
"eps": 1e-8,
"weight_decay": 3e-7
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": 0,
"warmup_max_lr": 3e-5,
"warmup_num_steps": 500
}
}
}
```
2. Attempt to call `trainer.train()`.
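Roughly, the setup amounts to something like this sketch (model and dataset variables are placeholders, and the `deepspeed` argument is assumed to be the one added with the integration):
```python
from transformers import GPT2LMHeadModel, Trainer, TrainingArguments

model = GPT2LMHeadModel.from_pretrained("gpt2")
args = TrainingArguments(output_dir="out", deepspeed="ds_config.json", fp16=True)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)  # train_dataset is a placeholder
trainer.train()  # run through the deepspeed launcher
```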
## Expected behavior
Training should begin as expected.
## Believed bug location
It would appear that [line 1286 in trainer.py](https://github.com/huggingface/transformers/blob/76f36e183a825b8e5576256f4e057869b2e2df29/src/transformers/trainer.py#L1286) actually calls the `backward` method on the *model*, not the loss object. I will try rebuilding after fixing that line and seeing if it helps.
| 01-20-2021 10:10:59 | 01-20-2021 10:10:59 | > It would appear that line 1286 in trainer.py actually calls the backward method on the model, not the loss object. I will try rebuilding after fixing that line and seeing if it helps.
This is incorrect. It appears that the ``model_wrapped.module`` in the aforementioned trainer.py actually resolves to GPT2LMHeadModel. Another big shot in the dark, but maybe ``model_wrapped`` is never actually wrapping because I'm only using one GPU? It's very late where I live, I'll take another shot at this in the morning.<|||||>You then need to launch your script with the `deepspeed` launcher. Could you tell us which command you ran?
Also cc @stas00 since he added deepspeed to Trainer.<|||||>Yes, please tag me on any deepspeed issues.
Thank you for this report.
I think it's a bug, it should be:
```
self.deepspeed.backward(loss)
```
I will test and send a fix.
<|||||>The merged PR closed this report, but should you still have an issue please don't hesitate to re-open it.
<|||||>Hello,
Thanks for this great deepspeed feature. I am also running into the same error both for
DistilBertForSequenceClassification' object has no attribute 'backward'
and for
BertForSequenceClassification object has no attribute 'backward'
here is the full error:
> ---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-6-beaae64139c1> in <module>
23 )
24
---> 25 trainer.train()
~/anaconda3/lib/python3.7/site-packages/transformers/trainer.py in train(self, model_path, trial)
886 tr_loss += self.training_step(model, inputs)
887 else:
--> 888 tr_loss += self.training_step(model, inputs)
889 self._total_flos += self.floating_point_ops(inputs)
890
~/anaconda3/lib/python3.7/site-packages/transformers/trainer.py in training_step(self, model, inputs)
1263 elif self.deepspeed:
1264 # calling on DS engine (model_wrapped == DDP(Deepspeed(PretrainedModule)))
-> 1265 self.model_wrapped.module.backward(loss)
1266 else:
1267 loss.backward()
~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __getattr__(self, name)
574 return modules[name]
575 raise AttributeError("'{}' object has no attribute '{}'".format(
--> 576 type(self).__name__, name))
577
578 def __setattr__(self, name, value):
AttributeError: 'DistilBertForSequenceClassification' object has no attribute 'backward'
Any idea?
Thanks<|||||>@victorstorchan, can you please ensure you use an up-to-date master?<|||||>Thanks for your answer. I just pip installed transformers 1h ago. It should be up-to-date right?<|||||>no, it won't. pip installs the released version. you need the unreleased master build, which there are several ways to go about, one of them is just:
```
pip install git+https://github.com/huggingface/transformers
```
<|||||>My bad! Thanks @stas00 <|||||>You did nothing wrong, @victorstorchan.
I will propose an update to the installation page so that the distinction is loud and clear. |
transformers | 9,693 | closed | ModuleAttributeError: 'GPT2LMHeadModel' object has no attribute 'backward' | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform:
- Python version: 3.7
- PyTorch version (GPU?):
- Tensorflow version (GPU?): None
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No?
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
Trainer: @sgugger
## To reproduce
Steps to reproduce the behavior:
1. Set up a TrainingArguments for a GPT2LMHeadModel with the following deepspeed config:
`{
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"hysteresis": 2,
"min_loss_scale": 1
},
"zero_optimization": {
"stage": 2,
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"overlap_comm": true,
"contiguous_gradients": true,
"cpu_offload": false
},
"optimizer": {
"type": "Adam",
"params": {
"adam_w_mode": true,
"lr": 3e-5,
"betas": [ 0.9, 0.999 ],
"eps": 1e-8,
"weight_decay": 3e-7
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": 0,
"warmup_max_lr": 3e-5,
"warmup_num_steps": 500
}
}
}`
2. Attempt to train.
## Expected behavior
Training should begin as expected.
## Believed bug location
It would appear that [line 1286 in trainer.py](https://github.com/huggingface/transformers/blob/76f36e183a825b8e5576256f4e057869b2e2df29/src/transformers/trainer.py#L1286) actually calls the `backward` method on the *model*, not the loss object. I will try rebuilding after fixing that line and seeing if it helps.
| 01-20-2021 10:07:31 | 01-20-2021 10:07:31 | https://github.com/huggingface/transformers/issues/9694 |
transformers | 9,692 | closed | input one model's output to another one | Hello,
I want to create a model which generates text and the generated text is input to other model. So basically two models are trained together. How can i achieve this using hugging face?
Thanks | 01-20-2021 10:07:12 | 01-20-2021 10:07:12 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discusss.huggingface.co) instead? You'll get more answers there!
Thanks! |
transformers | 9,691 | closed | Add DeBERTa head models | This PR adds 3 head models on top of the DeBERTa base model: `DebertaForMaskedLM`, `DebertaForTokenClassification`, `DebertaForQuestionAnswering`. These are mostly copied from `modeling_bert.py` with bert->deberta.
## Who can review?
@LysandreJik
Also tagging original DeBERTa author: @BigBird01
Fixes #9689 | 01-20-2021 08:57:48 | 01-20-2021 08:57:48 | Thanks for the review @LysandreJik, the test did fail because of the pooler. Is fixed now! |
transformers | 9,690 | closed | Is there a C++ interface? |
Is there a C++ interface? transformers | 01-20-2021 08:27:10 | 01-20-2021 08:27:10 | No, only Python.<|||||>> No, only Python.
thx.
It means that using torch cannot call bert with c++, right? |
transformers | 9,689 | closed | MLM training for DeBERTa not supported: configuration class is missing | When I ran the example script run_mlm.py to fine tune the pretrained deberta model on a customized dataset, I got the following error. The same command worked for roberta-base.
The command:
python run_mlm.py --model_name_or_path 'microsoft/deberta-base' --train_file slogans/train.txt --validation_file slogans/test.txt --do_train --do_eval --per_device_train_batch_size 64 --per_device_eval_batch_size 64 --learning_rate 1e-3 --num_train_epochs 10 --output_dir /home/jovyan/share2/xiaolin/models/mlm/temp --save_steps 5000 --logging_steps 100
The terminal error:
Traceback (most recent call last):
File "run_mlm.py", line 409, in <module>
main()
File "run_mlm.py", line 264, in main
cache_dir=model_args.cache_dir,
File "/home/jovyan/.local/lib/python3.7/site-packages/transformers/models/auto/modeling_auto.py", line 1093, in from_pretrained
config.__class__, cls.__name__, ", ".join(c.__name__ for c in MODEL_FOR_MASKED_LM_MAPPING.keys())
ValueError: Unrecognized configuration class <class 'transformers.models.deberta.configuration_deberta.DebertaConfig'> for this kind of AutoModel: AutoModelForMaskedLM.
Model type should be one of LayoutLMConfig, DistilBertConfig, AlbertConfig, BartConfig, CamembertConfig, XLMRobertaConfig, LongformerConfig, RobertaConfig, SqueezeBertConfig, BertConfig, MobileBertConfig, FlaubertConfig, XLMConfig, ElectraConfig, ReformerConfig, FunnelConfig, MPNetConfig, TapasConfig. | 01-20-2021 05:27:39 | 01-20-2021 05:27:39 | Looking at the [docs](https://huggingface.co/transformers/model_doc/deberta.html), it seems like there's currently no `DeBERTaForMaskedLM` defined. I will make a PR that adds this. |
transformers | 9,688 | closed | [Open in Colab] links not working in examples/README.md | For the following tasks below, the  button contains github links instead of colab links.
- question-answering
- text-classification
- token-classification
@sgugger
| 01-20-2021 04:12:45 | 01-20-2021 04:12:45 | Hi @wilcoln
Yes, the links point to Github, feel free to open a PR to replace the GitHub links with colab :). Thanks! |
transformers | 9,687 | closed | Can't load previously built tokenizers | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.2.1
- Platform: Linux-3.10.0-957.5.1.el7.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: not for this part that triggers this error
- Using distributed or parallel set-up in script?: n
### Who can help
Probably @mfuntowicz or @patrickvonplaten
## Information
N/A -- none of the fields here applied
## To reproduce
Context: I work on cluster where most nodes don't have internet access. Therefore I pre-build tokenizers, models, etc., in cli on nodes with internet access and then make sure that I can access the local caches on other nodes. That last part -- accessing the tokenizer I've built -- is failing for BlenderBot 400M distilled tokenizer. It's also failing for blenderbot small 90M which I also built today, potentially for others too, but it doesn't seem to be failing for roberta-base, which I had built before (and is a tokenizer small rather than base).
1. `AutoTokenizer.from_pretrained('facebook/blenderbot-400M-distill')` from a node with internet access
2. the same as above, from a node without internet access
3. You should see this error getting triggered : [https://github.com/huggingface/transformers/blob/14d677ca4a62facf70b28f2922b12e6cd3692a03/src/transformers/file_utils.py#L1234](url)
Here's the specific Traceback:
`Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lusers/margsli/miniconda3/envs/latest/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 388, in from_pretrained
return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/usr/lusers/margsli/miniconda3/envs/latest/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1738, in from_pretrained
resolved_vocab_files[file_id] = cached_path(
File "/usr/lusers/margsli/miniconda3/envs/latest/lib/python3.8/site-packages/transformers/file_utils.py", line 1048, in cached_path
output_path = get_from_cache(
File "/usr/lusers/margsli/miniconda3/envs/latest/lib/python3.8/site-packages/transformers/file_utils.py", line 1234, in get_from_cache
raise ValueError(
ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.`
Dug around a little and found that cached_path() gets called 7 filenames/urls when I'm offline and only 6 when I'm online (I printed cache_path every time cached_path() gets called) -- the last one is not seen when offline, and that's the one that triggers the error. Printed the same things for other tokenizers I had previously built and didn't see this. Not sure if that's helpful, but it was as far as I got during my debugging.
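One workaround that sidesteps the URL-based cache entirely is to save the tokenizer into a plain directory on the online node and load it by path on the offline ones (the paths below are hypothetical):
```python
from transformers import AutoTokenizer

# On a node with internet access:
tok = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill")
tok.save_pretrained("/shared/tokenizers/blenderbot-400M-distill")

# Later, on a node without internet access:
tok = AutoTokenizer.from_pretrained("/shared/tokenizers/blenderbot-400M-distill")
```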
## Expected behavior
no error
| 01-20-2021 00:57:12 | 01-20-2021 00:57:12 | Hello! To make sure I understand your issue, you're doing the following:
```py
AutoTokenizer.from_pretrained('facebook/blenderbot-400M-distill')
```
on a node which has internet access, and then you're doing the same once you have no internet access. You want the library to rely on the cache that it had previously downloaded, is that right?
Could you make sure you are up to date with the `master` branch, and try the following once you have no internet access:
```py
AutoTokenizer.from_pretrained('facebook/blenderbot-400M-distill', local_files_only=True)
```
Thank you.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,686 | closed | BertGenerationDecoder .generate() issue during inference with PyTorch Lightning | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.2.1
- Platform: Ubuntu 20.04.1 LTS
- Python version: 3.8.5
- PyTorch version: 1.7.1
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Tried both distributed and parallel
### Who can help
TextGeneration: @TevenLeScao
Text Generation: @patrickvonplaten
examples/seq2seq: @patil-suraj
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
-->
## Information
I am using BertGenerationEncoder and BertGenerationDecoder. I am using `transformers` in combination with PyTorch lightning.
At inference, `.generate()` outputs the same thing for each input.
I am unsure of why this is occurring; my only hunch is that PyTorch Lightning is somehow blocking the outputs of the encoder from reaching the decoder for cross-attention? The outputs look as though the decoder is given only the `[BOS]` token for each input during inference.
The task that I am demonstrating this issue on is:
* WMT'14 English to German.
I have had this problem occur on different tasks as well. Using WMT'14 English to German to demonstrate.
## To reproduce
I have tried to simplify this down, but unfortunately, the example is still long. Sorry about that. Please let me know if something does not work.
If torchnlp is not installed: `pip install pytorch-nlp`
If pytorch_lightning is not installed: `pip install pytorch-lightning `
```
from torchnlp.datasets.wmt import wmt_dataset
import torch
import torch.nn as nn
from pytorch_lightning.core.datamodule import LightningDataModule
from pytorch_lightning.metrics.functional.nlp import bleu_score
import pytorch_lightning as pl
from transformers import (
BertGenerationConfig,
BertGenerationEncoder,
BertGenerationDecoder,
)
from transformers import AutoTokenizer
import os
import numpy as np
import multiprocessing
from torch.utils.data import DataLoader
class Dataset(LightningDataModule):
def __init__(
self,
mbatch_size,
dataset_path,
encoder_tokenizer,
decoder_tokenizer,
max_len=None,
**kwargs,
):
super().__init__()
self.mbatch_size = mbatch_size
self.dataset_path = dataset_path
self.encoder_tokenizer = encoder_tokenizer
self.decoder_tokenizer = decoder_tokenizer
self.max_len = max_len
## Number of workers for DataLoader
self.n_workers = multiprocessing.cpu_count()
def setup(self, stage=None):
## Assign train & validation sets
if stage == "fit" or stage is None:
train_iterator, val_iterator = wmt_dataset(
directory=self.dataset_path,
train=True,
dev=True,
)
self.train_set = Set(
train_iterator,
self.encoder_tokenizer,
self.decoder_tokenizer,
self.max_len,
)
self.val_set = Set(
val_iterator,
self.encoder_tokenizer,
self.decoder_tokenizer,
self.max_len,
)
## Assign test set
if stage == "test" or stage is None:
test_iterator = wmt_dataset(directory=self.dataset_path, test=True)
self.test_set = Set(
test_iterator,
self.encoder_tokenizer,
self.decoder_tokenizer,
self.max_len,
)
def train_dataloader(self):
return DataLoader(
self.train_set,
batch_size=self.mbatch_size,
num_workers=self.n_workers,
shuffle=True,
)
def val_dataloader(self):
return DataLoader(
self.val_set,
batch_size=self.mbatch_size,
num_workers=self.n_workers,
)
def test_dataloader(self):
return DataLoader(
self.test_set,
batch_size=self.mbatch_size,
num_workers=self.n_workers,
)
class Set(torch.utils.data.Dataset):
def __init__(
self,
iterator,
encoder_tokenizer,
decoder_tokenizer,
max_len,
):
self.iterator = iterator
self.encoder_tokenizer = encoder_tokenizer
self.decoder_tokenizer = decoder_tokenizer
self.n_examples = len(self.iterator)
self.max_len = max_len
def __getitem__(self, index):
example = self.iterator[index]
english_encoded = self.encoder_tokenizer(
example["en"],
return_tensors="pt",
padding="max_length",
truncation=True,
max_length=self.max_len,
)
german_encoded = self.decoder_tokenizer(
example["de"],
return_tensors="pt",
padding="max_length",
truncation=True,
max_length=self.max_len,
)
return {
"input_ids": english_encoded["input_ids"][0],
"token_type_ids": english_encoded["token_type_ids"][0],
"attention_mask": english_encoded["attention_mask"][0],
"decoder_input_ids": german_encoded["input_ids"][0],
"decoder_token_type_ids": german_encoded["token_type_ids"][0],
"decoder_attention_mask": german_encoded["attention_mask"][0],
}
def __len__(self):
return self.n_examples
class BERT2BERT(nn.Module):
def __init__(self, **kwargs):
super(BERT2BERT, self).__init__()
assert "ckpt_base" in kwargs, "ckpt_base must be passed."
self.ckpt_base = kwargs["ckpt_base"]
## Tokenizer
assert (
"encoder_tokenizer" in kwargs
), "A tokenizer for the encoder must be passed."
assert (
"decoder_tokenizer" in kwargs
), "A tokenizer for the decoder must be passed."
self.encoder_tokenizer = kwargs["encoder_tokenizer"]
self.decoder_tokenizer = kwargs["decoder_tokenizer"]
## Encoder
assert "encoder_init" in kwargs, "Set encoder_init in config file."
self.encoder_init = kwargs["encoder_init"]
ckpt_dir = os.path.join(self.ckpt_base, self.encoder_init)
self.encoder = BertGenerationEncoder.from_pretrained(ckpt_dir)
## Decoder
assert "decoder_init" in kwargs, "Set decoder_init in config file."
self.decoder_init = kwargs["decoder_init"]
ckpt_dir = os.path.join(self.ckpt_base, self.decoder_init)
config = BertGenerationConfig.from_pretrained(ckpt_dir)
config.is_decoder = True
config.add_cross_attention = True
config.bos_token_id = self.decoder_tokenizer.cls_token_id
config.eos_token_id = self.decoder_tokenizer.sep_token_id
config.pad_token_id = self.decoder_tokenizer.pad_token_id
config.max_length = kwargs["max_length"] if "max_length" in kwargs else 20
config.min_length = kwargs["min_length"] if "min_length" in kwargs else 10
config.no_repeat_ngram_size = (
kwargs["no_repeat_ngram_size"] if "no_repeat_ngram_size" in kwargs else 0
)
config.early_stopping = (
kwargs["early_stopping"] if "early_stopping" in kwargs else False
)
config.length_penalty = (
kwargs["length_penalty"] if "length_penalty" in kwargs else 1.0
)
config.num_beams = kwargs["num_beams"] if "num_beams" in kwargs else 1
self.decoder = BertGenerationDecoder.from_pretrained(
ckpt_dir,
config=config,
)
def forward(self, x):
## Get last hidden state of the encoder
encoder_hidden_state = self.encoder(
input_ids=x["input_ids"],
attention_mask=x["attention_mask"],
).last_hidden_state
## Teacher forcing: labels are given as input
outp = self.decoder(
input_ids=x["decoder_input_ids"],
attention_mask=x["decoder_attention_mask"],
encoder_hidden_states=encoder_hidden_state,
)
return outp["logits"]
def generate(self, input_ids, attention_mask):
## Get last hidden state of the encoder
encoder_hidden_state = self.encoder(
input_ids=input_ids,
attention_mask=attention_mask,
).last_hidden_state
print("\n Output of encoder:")
print(encoder_hidden_state)
bos_ids = (
torch.ones(
(encoder_hidden_state.size()[0], 1),
dtype=torch.long,
device=self.decoder.device,
)
* self.decoder.config.bos_token_id
)
## Autoregresively generate predictions
return self.decoder.generate(
input_ids=bos_ids,
encoder_hidden_states=encoder_hidden_state,
)
class Seq2Seq(pl.LightningModule):
    def __init__(
        self,
        encoder_init,
        decoder_init,
        encoder_tokenizer,
        decoder_tokenizer,
        permute_outp=False,
        ckpt_base="",
        ver="tmp",
        print_model=True,
        **kwargs,
    ):
        super(Seq2Seq, self).__init__()
        self.save_hyperparameters()

        self.permute_outp = permute_outp
        self.ckpt_base = ckpt_base
        self.ver = ver
        self.encoder_tokenizer = encoder_tokenizer
        self.decoder_tokenizer = decoder_tokenizer

        self.seq2seq = BERT2BERT(
            encoder_init=encoder_init,
            decoder_init=decoder_init,
            encoder_tokenizer=encoder_tokenizer,
            decoder_tokenizer=decoder_tokenizer,
            ckpt_base=ckpt_base,
            **kwargs,
        )

        ## Loss function
        self.loss = torch.nn.CrossEntropyLoss()

    def forward(self, x):
        ## Iterate through the networks
        return self.seq2seq(x)

    def training_step(self, batch, batch_idx):
        ## Target
        y = batch["decoder_input_ids"]

        ## Inference
        y_hat = self(batch)

        ## Permute output
        if self.permute_outp:
            y_hat = y_hat.permute(*self.permute_outp)

        ## Loss
        train_loss = self.loss(y_hat, y)

        ## Compute and log metrics
        logs = {"train_loss": train_loss}
        self.log_dict(logs, on_step=False, on_epoch=True)

        ######### TEMPORARY!!!
        if batch_idx % 100 == 0:
            pred = self.seq2seq.generate(
                batch["input_ids"],
                batch["attention_mask"],
            )
            pred_str = self.decoder_tokenizer.batch_decode(pred, skip_special_tokens=True)
            ref_str = self.decoder_tokenizer.batch_decode(y, skip_special_tokens=True)
            print("\nTraining reference labels:")
            print(ref_str)
            print("\n Training predictions:")
            print(pred_str)
            print("\n\n")

        ## Return training loss
        return train_loss

    def validation_step(self, batch, batch_idx):
        print("\n\n\n Validation input_ids:")
        print(batch["input_ids"])

        ## Generate outputs autoregresively
        pred = self.seq2seq.generate(
            batch["input_ids"],
            batch["attention_mask"],
        )
        pred_str = self.decoder_tokenizer.batch_decode(pred, skip_special_tokens=True)
        ref_str = self.decoder_tokenizer.batch_decode(batch["decoder_input_ids"], skip_special_tokens=True)
        print("Validation reference labels:")
        print(ref_str)
        print("Validation predictions:")
        print(pred_str)
        print("\n\n")

        pred_str = [i.split() for i in pred_str]
        ref_str = [i.split() for i in ref_str]
        self.log_dict({"val_bleu": bleu_score(pred_str, ref_str)})

    def test_step(self, batch, batch_idx):
        ## Generate outputs autoregresively
        pred = self.seq2seq.generate(
            batch["input_ids"],
            batch["attention_mask"],
        )
        pred_str = self.decoder_tokenizer.batch_decode(pred, skip_special_tokens=True)
        ref_str = self.decoder_tokenizer.batch_decode(batch["decoder_input_ids"], skip_special_tokens=True)
        pred_str = [i.split() for i in pred_str]
        ref_str = [i.split() for i in ref_str]
        self.log_dict({"test_bleu": bleu_score(pred_str, ref_str)})

    def configure_optimizers(self):
        self.optimisers = [torch.optim.Adam(self.parameters(), lr=4e-5)]
        return self.optimisers
if __name__ == "__main__":
    ckpt_base = ""
    encoder_init = "bert-base-uncased"
    decoder_init = "dbmdz/bert-base-german-uncased"
    dataset_path = ""

    encoder_tokenizer = AutoTokenizer.from_pretrained(
        os.path.join(ckpt_base, encoder_init),
    )
    decoder_tokenizer = AutoTokenizer.from_pretrained(
        os.path.join(ckpt_base, decoder_init),
    )

    dataset = Dataset(
        mbatch_size=4,
        dataset_path=dataset_path,
        encoder_tokenizer=encoder_tokenizer,
        decoder_tokenizer=decoder_tokenizer,
        max_len=512,
    )

    trainer = pl.Trainer(
        max_epochs=2,
        num_sanity_val_steps=0,
        fast_dev_run=True,
        accelerator="ddp" if torch.cuda.device_count() > 1 else None,
        gpus=torch.cuda.device_count() if torch.cuda.is_available() else None,
        precision=16 if torch.cuda.is_available() else 32,
        log_gpu_memory=log_gpu_memory if torch.cuda.is_available() else False,
        plugins=plugins if torch.cuda.device_count() > 1 else None,
    )

    seq2seq = Seq2Seq(
        encoder_init=encoder_init,
        decoder_init=decoder_init,
        encoder_tokenizer=encoder_tokenizer,
        decoder_tokenizer=decoder_tokenizer,
        ckpt_base=ckpt_base,
        permute_outp=[0, 2, 1],
    )

    trainer.fit(seq2seq, datamodule=dataset)
    # trainer.test(seq2seq, datamodule=dataset)
```
## Outputs of script demonstrating the issue
#### During training:
Output of encoder (to demonstrate that there is a difference per input):
```
tensor([[[-0.1545, 0.0785, 0.4573, ..., -0.3254, 0.5409, 0.4258],
[ 0.2935, -0.1310, 0.4843, ..., -0.4160, 0.8018, 0.2589],
[ 0.0649, -0.5836, 1.9177, ..., -0.3412, 0.2852, 0.8098],
...,
[ 0.1109, 0.1653, 0.5843, ..., -0.3402, 0.1081, 0.2566],
[ 0.3011, 0.0258, 0.4950, ..., -0.2070, 0.1684, -0.0199],
[-0.1004, -0.0299, 0.4860, ..., -0.2958, -0.1653, 0.0719]],
[[-0.3105, 0.0351, -0.5714, ..., -0.1062, 0.3461, 0.8927],
[ 0.0727, 0.2580, -0.6962, ..., 0.3195, 0.9559, 0.6534],
[-0.6213, 0.9008, 0.2194, ..., 0.1259, 0.1122, 0.7071],
...,
[ 0.2667, -0.1453, -0.2017, ..., 0.5667, -0.0772, -0.2298],
[ 0.4050, 0.0916, 0.2218, ..., 0.0295, -0.2065, 0.1230],
[-0.1895, 0.0259, -0.1619, ..., -0.1657, -0.0760, -0.6030]],
[[-0.1366, 0.2778, 0.1203, ..., -0.4764, 0.4009, 0.2918],
[ 0.2401, -0.2308, 1.1218, ..., -0.2140, 0.7054, 0.6656],
[-0.7005, -0.9183, 1.6280, ..., 0.2339, -0.1870, 0.0630],
...,
[-0.0212, -0.2678, 0.0711, ..., 0.2884, 0.3741, -0.2103],
[-0.0058, -0.2364, 0.2587, ..., 0.0689, 0.2010, -0.0315],
[ 0.1869, -0.0784, 0.2257, ..., -0.1498, 0.0935, -0.0234]],
[[ 0.1023, 0.0532, 0.2052, ..., -0.5335, 0.0676, 0.2436],
[-0.2254, 1.0484, -0.1338, ..., -0.9030, -0.1407, -0.2173],
[-0.8384, 0.3990, 0.6661, ..., -0.4869, 0.7780, -0.5461],
...,
[ 0.4410, 0.1868, 0.6844, ..., -0.2972, -0.1069, -0.1848],
[-0.0021, -0.0537, 0.2477, ..., 0.1877, -0.0479, -0.3762],
[ 0.1981, 0.0980, 0.3827, ..., 0.1449, 0.0403, -0.2863]]],
grad_fn=<NativeLayerNormBackward>)
```
Training reference labels:
```
[
'pau @ @ schal @ @ preis 80 β¬ / person auf basis von 2 person @ @ nen.',
'ich finde es be @ @ denk @ @ lich, dass der bericht, den wir im ausschuss angenommen haben, so unterschiedlich ausgelegt wird.',
'die globalisierung hat eine betrachtliche veranderung der bedeutung ge @ @ ok @ @ ultur @ @ eller regionen in der welt mit sich gebracht.',
'falls sie eigentumer einer immobili @ @ e in andor @ @ ra sind, kontaktieren sie uns, um ihr apartment oder hotel hier auf @ @ zun @ @ ehem @ @ en.',
]
```
Training predictions after `.generate()` and `.batch_decode()` (garbage, but different per input):
```
[
'##exe int int int int fid fid fid fid fid fid fid fid fid fid fid fid lanz urn',
'##schleschleually vno stadien stadien stadienherzherzherzherzherzherzherzherzherzherzherzherz', '##betrtghattkerlabend verpackungahmahm te te teila einfl einfl einflierende add adduff',
'##reisreisviert fairrug ganze ganze ganze veh wz wz wz ihr x ihrverdverdverdverd',
]
```
#### During validation:
Input IDs to encoder:
```
tensor([[ 101, 1037, 3072, ..., 0, 0, 0],
[ 101, 3072, 1030, ..., 0, 0, 0],
[ 101, 2174, 1010, ..., 0, 0, 0],
[ 101, 5262, 1010, ..., 0, 0, 0]])
```
Output of encoder (to demonstrate that there is a difference per input):
```
tensor([[[-0.2494, -0.2050, -0.2032, ..., -1.0734, 0.1397, 0.4336],
[-0.2473, 0.0091, -0.2359, ..., -0.6884, 0.2158, -0.0761],
[-0.5098, -0.1364, 0.7411, ..., -1.0496, -0.0250, -0.2929],
...,
[-0.1039, -0.2547, 0.2264, ..., -0.2483, -0.2153, 0.0748],
[ 0.2561, -0.3465, 0.5167, ..., -0.2460, -0.1611, 0.0155],
[-0.0767, -0.3239, 0.4679, ..., -0.2552, -0.1551, -0.1501]],
[[-0.3001, 0.0428, -0.3463, ..., -0.6265, 0.3733, 0.3856],
[-0.1463, -0.0212, 0.1447, ..., -0.7843, -0.0542, 0.2394],
[ 0.7481, -0.3762, 0.6301, ..., 0.2269, 0.0267, -0.4466],
...,
[ 0.3723, -0.2708, 0.2251, ..., -0.0096, -0.0072, -0.2217],
[ 0.4360, -0.1101, 0.3447, ..., 0.0117, -0.0956, -0.1236],
[ 0.3221, -0.1846, 0.3263, ..., -0.0600, -0.0025, -0.1883]],
[[-0.1365, 0.1746, 0.1038, ..., -0.2151, 0.7875, 0.8574],
[ 0.1072, 0.2133, -0.8644, ..., 0.0739, 1.0464, 0.3385],
[ 0.7204, 0.2680, 0.0991, ..., -0.2964, -0.8238, -0.0604],
...,
[ 0.2686, -0.0701, 0.8973, ..., -0.0366, -0.2160, 0.0276],
[ 0.2265, -0.2171, 0.4239, ..., 0.0833, -0.0573, 0.0297],
[ 0.0690, -0.2430, 0.4186, ..., 0.0897, -0.0287, 0.0762]],
[[ 0.0408, 0.2332, -0.0992, ..., -0.2242, 0.6512, 0.4630],
[ 0.3257, 0.1358, -0.3344, ..., 0.0866, 1.0004, -0.0733],
[ 0.6827, 0.3013, 0.0672, ..., -0.2793, -0.8870, -0.0024],
...,
[ 0.4291, -0.5344, 0.0134, ..., 0.0439, 0.0617, -0.4433],
[ 0.4847, -0.2888, 0.2942, ..., 0.0153, 0.0121, -0.1231],
[ 0.4725, -0.3132, 0.3458, ..., -0.0207, 0.0517, -0.4281]]])
```
Validation reference labels:
```
[
'eine repub @ @ li @ @ kanische strategie, um der wieder @ @ wahl von obama entgegen @ @ zu @ @ treten',
'die fuhrungs @ @ krafte der republi @ @ kaner rechtfertigen ihre politik mit der notwendigkeit, den wahl @ @ betrug zu bekampfen.',
'allerdings halt das brenn @ @ an center letz @ @ teres fur einen my @ @ thos, indem es bekraftigt, dass der wahl @ @ betrug in den usa sel @ @ tener ist als die anzahl der vom bli @ @ tz @ @ schlag geto @ @ teten menschen.',
'die rechtsan @ @ walte der republi @ @ kaner haben in 10 jahren in den usa ubrigens nur 300 falle von wahl @ @ betrug ver @ @ zeichnet.',
]
```
Validation predictions after `.generate()` and `.batch_decode()` (garbage, but the same per input):
```
[
'##schleschleschleschleschleschleschleschleschleschleschleschleschleschleschleschleschleschleschle',
'##schleschleschleschleschleschleschleschleschleschleschleschleschleschleschleschleschleschleschle',
'##schleschleschleschleschleschleschleschleschleschleschleschleschleschleschleschleschleschleschle',
'##schleschleschleschleschleschleschleschleschleschleschleschleschleschleschleschleschleschleschle',
]
```
## Expected behavior
I would expect the model to generate a different output per input, as during training time.
## Thank you for your help!
Hopefully, it is something simple that I am missing. | 01-20-2021 00:10:09 | 01-20-2021 00:10:09 | Hi @anicolson ,
We would love to help, but sadly when you post such a long script it will be very hard and time-consuming for us to take a look at. We're happy to assist if you could provide a short, precise, and complete code snippet that is based on Transformers Seq2SeqTrainer only. Here's our guide on [how to request support](https://discuss.huggingface.co/t/how-to-request-support/3128).
Also from what I can see, seems like you are initializing bert encoder and bert decoder separately, you could directly instantiate it using the `EncoderDecoder` model class to get a seq2seq model. Here are two colab notebooks that show how to train `EncoderDecoder` models using `Seq2SeqTrainer`. The notebooks show how to fine-tune for summarization task, but could be easily adapted for translation as well.
[Leverage BERT for Encoder-Decoder Summarization on CNN/Dailymail](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/BERT2BERT_for_CNN_Dailymail.ipynb)
[Leverage RoBERTa for Encoder-Decoder Summarization on BBC XSum](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/RoBERTaShared_for_BBC_XSum.ipynb)<|||||>Thanks for your reply,
I am attempting to create a shorter version that is not so time-consuming.
Certainly, the `EncoderDecoder` is an attractive option if one is using natural language, but I would like to highlight that using `BertGenerationDecoder` allows the user to provide any sequence for cross-attention, even those derived from encoders that operate on modalities other than natural language, which I think is powerful.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>> Thanks for your reply,
>
> I am attempting to create a shorter version that is not so time-consuming.
>
> Certainly, the `EncoderDecoder` is an attractive option if one is using natural language, but I would like to highlight that using `BertGenerateDecoder` allows the user to provide any sequence for cross-attention, even those derived from encoders that operate on modalities other than natural language, which I think is powerful.
Hi, have you tackled the problem? I am encountering exactly the same problem. Any cues?
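For reference, here is a minimal sketch of the `EncoderDecoderModel` approach suggested above, reusing the two checkpoints from the script in this issue. The generation settings are illustrative assumptions, and the model would still need fine-tuning (e.g. with `Seq2SeqTrainer`) before it produces sensible German output; this is not a drop-in replacement for the Lightning module.

```python
from transformers import AutoTokenizer, EncoderDecoderModel

encoder_tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
decoder_tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-german-uncased")

# Tie a BERT encoder and a BERT decoder (with cross-attention) into a single seq2seq model
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "dbmdz/bert-base-german-uncased"
)

# generate() needs to know how decoding starts, ends and is padded
model.config.decoder_start_token_id = decoder_tokenizer.cls_token_id
model.config.eos_token_id = decoder_tokenizer.sep_token_id
model.config.pad_token_id = decoder_tokenizer.pad_token_id

inputs = encoder_tokenizer(
    "If you own a property in Andorra, contact us.", return_tensors="pt"
)
generated = model.generate(
    inputs.input_ids, attention_mask=inputs.attention_mask, num_beams=4, max_length=40
)
print(decoder_tokenizer.batch_decode(generated, skip_special_tokens=True))
```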
transformers | 9,685 | closed | Fix Trainer and Args to mention AdamW, not Adam. | This PR fixed the issue with Docs and labels in Trainer and TrainingArguments Class for AdamW, current version mentions adam in several places.
Fixes #9628
The Trainer class in `trainer.py` uses AdamW as the default optimizer. The TrainingArguments class mentions it as Adam in the documentation, which was confusing.
I have also changed variable names to `adamw_beta1`, `adamw_beta2`, `adamw_epsilon` in `trainer.py`.
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
@LysandreJik | 01-19-2021 23:13:38 | 01-19-2021 23:13:38 | Thanks for opening the PR. As it stands this would break every existing script leveraging the parameters defined, so renaming the parameters is probably not the way to go.
@sgugger, your insight on this would be very welcome.
<|||||>There is no reason to change all the names of the parameters indeed, and it would be a too-heavy breaking change. `AdamW` is not a different optimizer from `Adam`, it's just `Adam` with a different way (some might say the right way) of doing weight decay. I don't think we need to do more than a mention at the beginning of the docstring saying that all mentions of `Adam` are actually about `AdamW`, with a link to the paper.<|||||>Hi @LysandreJik @sgugger. Thanks for your comments, I'll be changing the variables back.
I apologize if this is too silly a question, but how can I run and see how the docs look on a browser after the changes?<|||||>You should check [this page](https://github.com/huggingface/transformers/tree/master/docs#generating-the-documentation) for all the information on generating/writing the documentation :-)<|||||>I have updated it and also added that by default, the weight decay is applied to all layers except bias and LayerNorm weights while training.<|||||>@sgugger My code passed only 3 out of 12 checks, I was unable to run CirlceCI properly. Can you point out the reasons why this happened?<|||||>We are trying to get support from them to understand why, but the checks on your PR were all cancelled. I couldn't retrigger them from our interface either.<|||||>Hi @sgugger
I believe a possible reason could be that I followed `transformers` on CircleCI. Maybe it performs checks on my fork of transformers and expects to find some "resources" which aren't there.
I'm not sure how CircleCI works, so this is just a wild guess. |
transformers | 9,684 | closed | Fix model templates and use less than 119 chars | # What does this PR do?
This PR fixes the model templates that were broken by #9596 (copies not inline with the original anymore). In passing since I'm a dictator, I've rewritten the warning to take less than 119 chars.
Will merge as soon as CI is green. | 01-19-2021 21:35:04 | 01-19-2021 21:35:04 | |
transformers | 9,683 | closed | Fix Funnel Transformer conversion script | # What does this PR do?
The conversion script was using the wrong kind of model, so wasn't working. I've also added the option to convert the base models.
Fixes #9644
 | 01-19-2021 21:02:34 | 01-19-2021 21:02:34 | Do not miss ```transformers/commands/convert.py``` for ```transformers-cli``` users.
A ```base_model``` arg is needed for ```convert_tf_checkpoint_to_pytorch(self._tf_checkpoint, self._config, self._pytorch_dump_output)```. 
transformers | 9,682 | closed | Add a community page to the docs | # What does this PR do?
This PR adds a new "community" page in the documentation that aims to gather information about all resources developed by the community. I copied all the community notebooks there, and we have an open PR that will also populate it. | 01-19-2021 20:39:33 | 01-19-2021 20:39:33 | |
transformers | 9,681 | closed | Restrain tokenizer.model_max_length default | # What does this PR do?
Apply the same fix to `run_mlm` (when line_by_line is not selected) as we did previously in `run_clm`. Since the tokenizer model_max_length can be excessively large, we should restrain it when no `max_seq_length` is passed.
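For illustration, a sketch of the kind of guard described above (the function and argument names are assumptions, not the exact diff of this PR):

```python
def pick_max_seq_length(requested_length, tokenizer, cap=1024):
    """Cap the tokenizer's (possibly huge) model_max_length when no explicit length is requested."""
    if requested_length is None:
        return min(tokenizer.model_max_length, cap)
    return min(requested_length, tokenizer.model_max_length)
```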
Fixes #9665 | 01-19-2021 20:20:35 | 01-19-2021 20:20:35 | |
transformers | 9,680 | closed | Generating sentence embeddings from pretrained transformers model | Hi, I have a pretrained BERT based model hosted on huggingface.
https://huggingface.co/microsoft/SportsBERT
How do I generate sentence vectors using this model? I have explored sentence bert but it doesn't allow you to use custom trained models. I have also seen Bert as a client. It works but for my current scenario, I was wondering if there's something which could be done without running a server for converting to vectors. | 01-19-2021 20:12:33 | 01-19-2021 20:12:33 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
Thanks! |
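For anyone landing here from search, a minimal sketch of one common approach (mean pooling of the last hidden states, masked by the attention mask); the pooling choice is an assumption, not an official recommendation for this checkpoint:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/SportsBERT")
model = AutoModel.from_pretrained("microsoft/SportsBERT")

sentences = ["The match went to extra time.", "A last-minute goal decided the game."]
encoded = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    token_embeddings = model(**encoded).last_hidden_state  # (batch, seq_len, hidden)

# Mean pooling: average the token vectors, ignoring padding positions
mask = encoded.attention_mask.unsqueeze(-1).float()
sentence_embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_embeddings.shape)  # (2, hidden_size)
```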
transformers | 9,679 | closed | Visualize self-attention for GLUE task | Is there a way to visualize the self-attention weights for different spans in a sentence, for instance, a sequence classification task inside [`run_glue.py`](https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_glue.py)?
[See here for a sample](https://imgur.com/a/7gAJvCJ) | 01-19-2021 19:57:13 | 01-19-2021 19:57:13 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
Thanks! |
transformers | 9,678 | closed | bert-base-cased predicts tokens instead of whole words after fine-tuning on fill-mask task | ## Environment info
- `transformers` version: 4.2.1
- Platform: Linux-4.15.0-126-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.1+cu92 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@mfuntowicz, @sgugger
## Information
Model I am using (Bert, XLNet ...): bert-base-cased
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Extract the [training_data.zip](https://github.com/huggingface/transformers/files/5837438/training_data.zip). The traininig_data is structured like it is explained in [BertForMaskedLM](https://huggingface.co/transformers/model_doc/bert.html#bertformaskedlm).
2. Execute the code for fine-tuning to get the fine-tuned bert-base-cased (first script)
3. Evaluate the fine-tuned bert-base-cased with the code for evaluation (second script)
```
#code for fine-tuning of bert-base-cased on fill-mask-task using the files train_queries.json and train_labels.json
from transformers import BertForMaskedLM, Trainer, TrainingArguments
import json
from transformers import BertTokenizer
import torch
import shutil
import os
class MaskedDataset(torch.utils.data.Dataset):
    def __init__(self, encodings, labels):
        self.encodings = encodings
        self.labels = labels

    def __getitem__(self, idx):
        item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
        item['labels'] = torch.tensor(self.labels[idx])
        return item

    def __len__(self):
        return len(self.labels)


if __name__ == "__main__":
    #used LM
    lm_name = 'bert-base-cased'

    model_path = "bert_base_cased_finetuned"
    if os.path.exists(model_path):
        print("remove dir of model")
        shutil.rmtree(model_path)
    os.mkdir(model_path)

    #pepare training dataset
    #read datasets from path
    train_queries = json.load(open("train_queries.json", "r"))
    train_labels = json.load(open("train_labels.json", "r"))

    #use tokenizer to get encodings
    tokenizer = BertTokenizer.from_pretrained(lm_name)
    train_question_encodings = tokenizer(train_queries, truncation=True, padding='max_length', max_length=256)
    train_label_encodings = tokenizer(train_labels, truncation=True, padding='max_length', max_length=256)["input_ids"]

    #get final datasets for training
    train_dataset = MaskedDataset(train_question_encodings, train_label_encodings)

    training_args = TrainingArguments(
        output_dir='./results',          # output directory
        num_train_epochs=3,              # total number of training epochs
        per_device_train_batch_size=16,  # batch size per device during training
        per_device_eval_batch_size=64,   # batch size for evaluation
        warmup_steps=500,                # number of warmup steps for learning rate scheduler
        weight_decay=0.01,               # strength of weight decay
        logging_dir=model_path+'/logs',  # directory for storing logs
        logging_steps=10,
        save_total_limit=0
    )

    model = BertForMaskedLM.from_pretrained(lm_name)

    trainer = Trainer(
        model=model,                 # the instantiated 🤗 Transformers model to be trained
        args=training_args,          # training arguments, defined above
        train_dataset=train_dataset  # training dataset
    )

    trainer.train()
    trainer.save_model(model_path)
```
```
#code for evaluating the fine-tuned bert-base-cased
import json
from transformers import pipeline, BertForMaskedLM
from transformers import BertTokenizer
lm_name = "bert-base-cased"
test_queries = {"Rps26p56 is a subclass of [MASK] .": "pseudogene", "[MASK] is the capital of Hammerfest .": "Hammerfest", "Cubaedomus is a [MASK] .": "taxon", "[MASK] is named after Renfrew .": "Renfrew"}
#bert-base-cased with fine-tuning on train_queries.json and train_labels.json
unmasker_finetuned = pipeline('fill-mask', tokenizer= lm_name, model = BertForMaskedLM.from_pretrained("bert_base_cased_finetuned"), device=0, top_k=5)
#bert-base-cased tokenizer
tokenizer = BertTokenizer.from_pretrained(lm_name)
for query in test_queries:
    correct_answer = test_queries[query]

    #get the answer of the [MASK]-token of bert-base-cased-finetuned
    finetuned_result = unmasker_finetuned(query)
    finetuned_all_answers = []
    for result in finetuned_result:
        finetuned_all_answers.append(result["token_str"])

    correct_answer_ids = tokenizer(correct_answer)["input_ids"]
    correct_answer_tokens = tokenizer.convert_ids_to_tokens(correct_answer_ids)
    correct_answer_tokens.remove("[SEP]")
    correct_answer_tokens.remove("[CLS]")

    print("query:", query)
    print("correct answer:", correct_answer)
    print("correct answer tokens:", correct_answer_tokens)
    print("-----real behavior----------")
    print("finetuned all answers:", finetuned_all_answers)
    print("finetuned first answer:", finetuned_result[0]["token_str"])
    print("-----expected behavior------")
    print("finetuned first answer:", correct_answer, "\n")
```
## Expected behavior
The language model should predict the whole word for the [MASK]-token and not only tokens. In the following, four queries were evaluated with the code for evaluation. For the first two queries, the finetuned language model predicts the correct tokens in the first five answers but does not match them together. For the last two queries, the finetuned language model predicts at least the correct first token but not all tokens.
My guess is that something went wrong in the training when the word for the [MASK]-token is not in the vocabulary and the tokenizer splits the word into more than one token.
```
query: Rps26p56 is a subclass of [MASK] .
correct answer: pseudogene
correct answer tokens: ['pseudo', '##gene']
-----real behavior----------
finetuned all answers: ['pseudo', 'gene', 'protein', '##gene', 'sub']
finetuned first answer: pseudo
-----expected behavior------
finetuned first answer: pseudogene
query: [MASK] is the capital of Hammerfest .
correct answer: Hammerfest
correct answer tokens: ['Hammer', '##fest']
-----real behavior----------
finetuned all answers: ['Hammer', 'Metal', 'Hell', 'Lock', '##fest']
finetuned first answer: Hammer
-----expected behavior------
finetuned first answer: Hammerfest
query: Cubaedomus is a [MASK] .
correct answer: taxon
correct answer tokens: ['tax', '##on']
-----real behavior----------
finetuned all answers: ['tax', 'genus', 'pseudo', 'synonym', 'is']
finetuned first answer: tax
-----expected behavior------
finetuned first answer: taxon
query: [MASK] is named after Renfrew .
correct answer: Renfrew
correct answer tokens: ['Ren', '##f', '##rew']
-----real behavior----------
finetuned all answers: ['Ren', 'Re', 'R', 'Fe', 'Bo']
finetuned first answer: Ren
-----expected behavior------
finetuned first answer: Renfrew
```
| 01-19-2021 17:43:44 | 01-19-2021 17:43:44 | The pipeline for masked filling can only be used to fill one token, so you should be using different code for your evaluation if you want to be able to predict more than one masked token.<|||||>> The pipeline for masked filling can only be used to fill one token, so you should be using different code for your evaluation if you want to be able to predict more than one masked token.
Thank you for your reply. I am not sure, whether I understand you right. So do you mean, that it is not possible to predict words like "Los Angeles" with two words or that it is also not possible to predict words like "pseudogene", which are one word but are not in the vocabulary and so the tokenizer splits it into ['pseudo', '##gene']? I would only like to predict words like "pseudogene".<|||||>The pipeline in itself is only coded to return one token to replace the [MASK]. So it won't be able to predict two tokens to replace one [MASK]. The model is also only trained to replace each [MASK] in its sentence by one token, so it won't be able to predict two tokens for one [MASK].
For this task, you need to either use a different model (coded yourself as it's not present in the library) or have your training set contain one [MASK] per token you want to mask. For instance, if you want to mask all the tokens corresponding to one word (a technique called whole-word masking), what is typically done in training scripts is to replace all parts of one word by [MASK]. For pseudogene, tokenized as pseudo, ##gene, that would mean having [MASK] [MASK].
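To illustrate the whole-word masking idea on the example from this issue (a sketch, not the exact training code):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")

label = "pseudogene"
n_subtokens = len(tokenizer.tokenize(label))  # ['pseudo', '##gene'] -> 2

# One [MASK] per sub-token of the word the model should predict
query = "Rps26p56 is a subclass of {} .".format(" ".join(["[MASK]"] * n_subtokens))
print(query)  # Rps26p56 is a subclass of [MASK] [MASK] .
```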
Also, this is not a bug of the library, so the discussion should continue on the [forum](https://discuss.huggingface.co/)
|
transformers | 9,677 | closed | Use datasets squad_v2 metric in run_qa | # What does this PR do?
The `run_qa` example script was using a copied and fixed version of the "squad_v2" metric while waiting for the fix to be merged and released in datasets. That is now the case, so removing the band-aid and adjusting the version of datasets in requirements.
Fixes #9620
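For context, a minimal sketch of relying on the upstream metric (the prediction/reference fields follow the `datasets` squad_v2 schema):

```python
from datasets import load_metric

metric = load_metric("squad_v2")
predictions = [{"id": "0", "prediction_text": "Denver Broncos", "no_answer_probability": 0.0}]
references = [{"id": "0", "answers": {"text": ["Denver Broncos"], "answer_start": [177]}}]
print(metric.compute(predictions=predictions, references=references))
```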
| 01-19-2021 17:21:37 | 01-19-2021 17:21:37 | |
transformers | 9,676 | closed | Fix GPT conversion script | # What does this PR do?
One forgotten file in #9674, sorry about that! | 01-19-2021 14:43:10 | 01-19-2021 14:43:10 | |
transformers | 9,675 | closed | Fix old Seq2SeqTrainer | # What does this PR do?
Removes the reference to the `_actual_model` method that was removed recently in the old `Seq2SeqTrainer`. | 01-19-2021 14:38:09 | 01-19-2021 14:38:09 | |
transformers | 9,674 | closed | Fix imports in conversion scripts | # What does this PR do?
During the rework of the new init for fast imports, all absolute imports were switched to relative ones indiscriminately (because they usually don't work anymore for the core of the lib). However, the conversion scripts are supposed to be executed as scripts and relative imports can't work there (that's how Python works). This PR fixes those, and it seems that it doesn't hurt the transformers-cli convert command (which import things from those modules). | 01-19-2021 14:30:35 | 01-19-2021 14:30:35 | |
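As a sketch of the underlying Python behaviour (the module names below are placeholders for illustration only):

```python
# convert_some_model_checkpoint_to_pytorch.py

# A relative import only works when the file is loaded as part of the package;
# running the file directly typically fails with something like
# "ImportError: attempted relative import with no known parent package":
# from ..models.bert.configuration_bert import BertConfig

# An absolute import works both when imported and when run as a script,
# as long as transformers is installed:
from transformers import BertConfig

print(BertConfig())
```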
transformers | 9,673 | closed | add mbart to automodel for masked lm | # What does this PR do?
Fixes #9653
Bart and MBart are the only Encoder-Decoder models that can do mask-filling -> so add MBart also to `AutoModelForMaskedLM`
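A minimal sketch of what this change enables (the checkpoint and prompt are illustrative assumptions):

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")
model = AutoModelForMaskedLM.from_pretrained("facebook/mbart-large-cc25")

inputs = tokenizer("UN Chief says there is no <mask> in Syria", return_tensors="pt")
logits = model(**inputs).logits

# Inspect the top candidates for the masked position
mask_index = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero().item()
print(tokenizer.convert_ids_to_tokens(logits[0, mask_index].topk(5).indices.tolist()))
```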
| 01-19-2021 14:01:46 | 01-19-2021 14:01:46 | |
transformers | 9,672 | closed | AttributeError: 'Seq2SeqTrainer' object has no attribute '_actual_model' | The _actual_model method is not defined in the Seq2SeqTrainer class, nor in the Trainer class from which it is derived:
https://github.com/huggingface/transformers/blob/12c1b5b8f448d652f5e1fa0f069b9569f4540948/examples/seq2seq/seq2seq_trainer.py#L63 | 01-19-2021 10:46:03 | 01-19-2021 10:46:03 | Hi @caralen
The `Seq2SeqTrainer` is now integrated with the main lib under `src/trainer_seq2seq.py`, and the seq2seq_trainer in examples is about to be deprecated. This bug is fixed in the new version, so I would recommend using the new `Seq2SeqTrainer` from the lib rather than the examples folder;
you could directly import it from transformers using
```python
from transformers import Seq2SeqTrainer
```<|||||>Hi @patil-suraj, thanks for the quick reply. I will close this issue now. |
transformers | 9,671 | closed | How to enable tokenizer padding option in feature extraction pipeline? | I am trying to use the pipeline() to extract features for sentence tokens.
Because the lengths of my sentences are not the same, and I am then going to feed the token features to RNN-based models, I want to pad the sentences to a fixed length so that the features all have the same size.
Before learning about the convenient pipeline() method, I was using a more general approach to get the features, which works but is inconvenient, like this:
```
import torch
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
text = 'After stealing money from the bank vault, the bank robber was seen fishing on the Mississippi river bank.'
encoded_input = tokenizer(text, padding='max_length', truncation=True, max_length=40)
indexed_tokens = encoded_input['input_ids']
segments_ids = encoded_input['token_type_ids']
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
model = AutoModel.from_pretrained('bert-base-uncased', output_hidden_states=True)
model.eval()
with torch.no_grad():
    outputs = model(tokens_tensor, segments_tensors)
    hidden_states = outputs[2]
```
Then I also need to merge (or select) the features from the returned **hidden_states** myself... and finally get a [40,768] padded feature matrix for this sentence's tokens, as I want. However, as you can see, it is very inconvenient.
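For illustration, continuing from the variables in the snippet above, the "merge" step might look like this (summing the last four layers is just one common choice, not the only one):

```python
# hidden_states is a tuple of per-layer tensors of shape (1, seq_len, 768)
stacked = torch.stack(hidden_states[-4:], dim=0)  # last four layers
token_features = stacked.sum(dim=0).squeeze(0)    # (seq_len, 768), here (40, 768) after padding
```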
Compared to that, the pipeline method works very well and easily, and only needs the following five lines of code.
```
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
model = AutoModel.from_pretrained('bert-base-uncased', output_hidden_states=True)
nlp = pipeline('feature-extraction', model=model, tokenizer=tokenizer)
text = "After stealing money from the bank vault, the bank robber was seen fishing on the Mississippi river bank."
features = nlp(text)
```
Then I can directly get the token features of the original (unpadded) sentence, which is [22,768].
**However, how can I enable the padding option of the tokenizer in the pipeline?**
From #9432 and #9576 I saw that we can now pass truncation options to the pipeline object (called **nlp** here), so I imitated that and wrote this code:
```
text = "After stealing money from the bank vault, the bank robber was seen fishing on the Mississippi river bank."
features = nlp(text, padding='max_length', truncation=True, max_length=40)
```
The program did not throw an error, but it just returned a [512,768] matrix...?
So is there any method to correctly enable the padding options? Thank you! | 01-19-2021 10:22:24 | 01-19-2021 10:22:24 | Hi! I think you're looking for `padding="longest"`?<|||||>Your result is of length 512 because you asked `padding="max_length"`, and the tokenizer max length is 512. If you ask for `"longest"`, it will pad up to the longest value in your batch:
```py
>>> text = "After stealing money from the bank vault, the bank robber was seen fishing on the Mississippi river bank."
... features = nlp([text, text * 2], padding="longest", truncation=True, max_length=40)
```
returns features which are of size [42, 768].<|||||>> Your result if of length 512 because you asked `padding="max_length"`, and the tokenizer max length is 512. If you ask for `"longest"`, it will pad up to the longest value in your batch:
>
> ```python
> >>> text = "After stealing money from the bank vault, the bank robber was seen fishing on the Mississippi river bank."
> ... features = nlp([text, text * 2], padding="longest", truncation=True, max_length=40)
> ```
>
> returns features which are of size [42, 768].
Thank you very much! This method works! And I think the 'longest' padding strategy is enough for me to use in my dataset.
But I just wonder: can I specify a fixed padding size, so that every sentence is padded to length 40?
Because in my former 'inconvenient general method', I just used
```
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
text = 'After stealing money from the bank vault, the bank robber was seen fishing on the Mississippi river bank.'
encoded_input = tokenizer(text, padding='max_length', truncation=True, max_length=40)
```
and got a fixed-size padded sentence that way...
(I found this method in the official documentation: https://huggingface.co/transformers/preprocessing.html#everything-you-always-wanted-to-know-about-padding-and-truncation)<|||||>Well it seems impossible for now... I just tried
```
text = "After stealing money from the bank vault, the bank robber was seen " \
"fishing on the Mississippi river bank."
features = nlp(text, padding='length', truncation=True, length=40)
```
And the error message showed that:
**ValueError: 'length' is not a valid PaddingStrategy, please select one of ['longest', 'max_length', 'do_not_pad']**
Anyway, thank you very much!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,670 | closed | bert_tokenizer.decode(bert_tokenizer.encode(sentence))!=sentence | from transformers import AutoTokenizer # transformers==4.2.1
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
paragraph = "no_passages_used __knowledge__ no_passages_used"
print(tokenizer.encode(paragraph))
print(tokenizer.decode(tokenizer.encode(paragraph)))
"""
>[101, 2053, 1035, 13768, 1035, 2109, 1035, 1035, 3716, 1035, 1035, 2053, 1035, 13768, 1035, 2109, 102]
>[CLS] no _ passages _ used _ _ knowledge _ _ no _ passages _ used [SEP]
""" | 01-19-2021 09:11:37 | 01-19-2021 09:11:37 | Hi! This is a normal behavior of the BERT tokenizer. You can add the tokens you do not wish to see split to the vocabulary:
```py
>>> tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
... tokenizer.add_tokens(["no_passages_used"]) # <--------------------------------- Here
... paragraph = "no_passages_used knowledge no_passages_used"
... print(tokenizer.encode(paragraph))
... print(tokenizer.decode(tokenizer.encode(paragraph)))
[101, 30522, 3716, 30522, 102]
[CLS] no_passages_used knowledge no_passages_used [SEP]
```<|||||>Don't forget to resize the embedding matrix of your model if you add new tokens to the vocabulary: [docs for add_tokens method](https://huggingface.co/transformers/internal/tokenization_utils.html?highlight=resize_token_embeddings#transformers.tokenization_utils_base.SpecialTokensMixin.add_tokens) |
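A short sketch of the two steps together (the model class here is just an example):

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

num_added = tokenizer.add_tokens(["no_passages_used"])
if num_added > 0:
    # Grow the embedding matrix so the new token ids have vectors
    model.resize_token_embeddings(len(tokenizer))
```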
transformers | 9,669 | closed | [Bart-like tests] Fix torch device for bart tests | # What does this PR do?
Fixes failing circle ci on GPU due to this commit https://github.com/huggingface/transformers/commit/357fb1c5d8b6a16f042f9b504f023d935086e8e5
| 01-19-2021 07:56:03 | 01-19-2021 07:56:03 | |
transformers | 9,668 | closed | Cannot compile tokenizers on PowerPC 9 while installing transformers | ## Environment info
- `transformers` version: 3.4.0
- Platform: PowerPC 9
- Python version: 3.7.7
- PyTorch version (GPU?): 1.5 w/ GPU
- Tensorflow version (GPU?): n/a
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@mfuntowicz
## Information
I am trying to install `transformers==3.4.0` on a PowerPC 9 system. It's an IBM compute rig.
## To reproduce
Steps to reproduce the behavior:
1. Create new `conda` environment with python 3.7
2. Run `pip install transformers==3.4.0` (the version that I need)
```
Compiling tokenizers v0.10.1 (/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/tokenizers-lib)
Running `rustc --crate-name tokenizers --edition=2018 tokenizers-lib/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no -C metadata=204c4d103d08e9e3 -C extra-filename=-204c4d103d08e9e3 --out-dir /tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps -L dependency=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps --extern clap=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libclap-b8e428690762cf7e.rmeta --extern derive_builder=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libderive_builder-247f4f57ff4bf4c7.so --extern esaxx_rs=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libesaxx_rs-28ce6f8a8d31c937.rmeta --extern indicatif=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libindicatif-280a1d33f346e384.rmeta --extern itertools=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libitertools-759131012594af62.rmeta --extern lazy_static=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/liblazy_static-0f749853bc34e9e0.rmeta --extern log=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/liblog-12a018fba7f0b36d.rmeta --extern onig=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libonig-3ca2736cdef653d2.rmeta --extern rand=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/librand-52622a6339ec540d.rmeta --extern rayon=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/librayon-f4508233e0c77565.rmeta --extern rayon_cond=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/librayon_cond-d89d0c7f0a1d1a11.rmeta --extern regex=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libregex-dbb55ca763c16a0e.rmeta --extern regex_syntax=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libregex_syntax-c7a8a1f28fe982ac.rmeta --extern serde=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libserde-11e7f5f85ab52b72.rmeta --extern serde_json=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libserde_json-477c52136da5fafe.rmeta --extern spm_precompiled=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libspm_precompiled-39a90f21c16965ef.rmeta --extern unicode_normalization_alignments=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libunicode_normalization_alignments-157a660dec7f1476.rmeta --extern unicode_segmentation=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libunicode_segmentation-66856f91381ae1a4.rmeta --extern unicode_categories=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libunicode_categories-209e6f430e5d88d1.rmeta -L native=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/build/esaxx-rs-62ba703c44f19ac6/out -L native=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/build/onig_sys-091ecfe4b66243c7/out`
error[E0603]: module `export` is private
--> tokenizers-lib/src/tokenizer/mod.rs:24:12
|
24 | use serde::export::Formatter;
| ^^^^^^ private module
|
note: the module `export` is defined here
--> /home/mengk/.cargo/registry/src/github.com-1ecc6299db9ec823/serde-1.0.119/src/lib.rs:275:5
|
275 | use self::__private as export;
| ^^^^^^^^^^^^^^^^^^^^^^^^^
error: aborting due to previous error
For more information about this error, try `rustc --explain E0603`.
error: could not compile `tokenizers`.
Caused by:
process didn't exit successfully: `rustc --crate-name tokenizers --edition=2018 tokenizers-lib/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no -C metadata=204c4d103d08e9e3 -C extra-filename=-204c4d103d08e9e3 --out-dir /tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps -L dependency=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps --extern clap=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libclap-b8e428690762cf7e.rmeta --extern derive_builder=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libderive_builder-247f4f57ff4bf4c7.so --extern esaxx_rs=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libesaxx_rs-28ce6f8a8d31c937.rmeta --extern indicatif=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libindicatif-280a1d33f346e384.rmeta --extern itertools=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libitertools-759131012594af62.rmeta --extern lazy_static=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/liblazy_static-0f749853bc34e9e0.rmeta --extern log=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/liblog-12a018fba7f0b36d.rmeta --extern onig=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libonig-3ca2736cdef653d2.rmeta --extern rand=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/librand-52622a6339ec540d.rmeta --extern rayon=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/librayon-f4508233e0c77565.rmeta --extern rayon_cond=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/librayon_cond-d89d0c7f0a1d1a11.rmeta --extern regex=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libregex-dbb55ca763c16a0e.rmeta --extern regex_syntax=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libregex_syntax-c7a8a1f28fe982ac.rmeta --extern serde=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libserde-11e7f5f85ab52b72.rmeta --extern serde_json=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libserde_json-477c52136da5fafe.rmeta --extern spm_precompiled=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libspm_precompiled-39a90f21c16965ef.rmeta --extern unicode_normalization_alignments=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libunicode_normalization_alignments-157a660dec7f1476.rmeta --extern unicode_segmentation=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libunicode_segmentation-66856f91381ae1a4.rmeta --extern unicode_categories=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/deps/libunicode_categories-209e6f430e5d88d1.rmeta -L native=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/build/esaxx-rs-62ba703c44f19ac6/out -L 
native=/tmp/pip-install-4jlvfd19/tokenizers_3c1b3bbe26064417aa8614a7fb564203/target/release/build/onig_sys-091ecfe4b66243c7/out` (exit code: 1)
cargo rustc --lib --manifest-path Cargo.toml --features pyo3/extension-module --release --verbose -- --crate-type cdylib
error: cargo failed with code: 101
----------------------------------------
ERROR: Failed building wheel for tokenizers
Failed to build tokenizers
ERROR: Could not build wheels for tokenizers which use PEP 517 and cannot be installed directly
```
## Expected behavior
There shouldn't be any error messages.
Sidenote: before getting this, I had an error complaining that I didn't have rust installed, but I did so using the command given on the official website.
| 01-19-2021 03:28:43 | 01-19-2021 03:28:43 | Hello! Could you open an issue on the [tokenizers](https://github.com/huggingface/tokenizers) repository instead? @n1t0 will probably know what's up!<|||||>Done! https://github.com/huggingface/tokenizers/issues/604<|||||>This is actually a transformers problem I think. It's the old versions of tokenizers imported using a path that has since become private. It's fixed in the newer versions, but transformers is still pinned to the old version: https://github.com/huggingface/transformers/issues/9649<|||||>Ah, installing something newer, e.g. `transformers==4.2.2`, has fixed it. Thanks so much! |
transformers | 9,667 | closed | Add new model docs | # What does this PR do?
This PR adds more information on how to add a model to Transformers docs.
## UPDATE
The `model_doc/add_new_model.rst` is now finished for a first merge IMO. It would be amazing if @LysandreJik @sgugger you could review the file real quick again - I tried to add all of your suggestions. Also, I added a diagram showing the model design of Transformers - which was not reviewed yet. Note that I did not add a clear design for Tokenizers since it takes a lot of time to do so and I want to iteratively improve this step-by-step explanation. The first model, for which I'd like to mentor someone from the community would also be BigBird which does not need a new tokenizer.
In addition, I would be extremely grateful if @stas00 @abhishekkrthakur @patil-suraj @stefan-it @NielsRogge could take 10 minutes to review the `model_doc/add_model.rst` file for possible improvements, since you guys just recently added a new model. Your feedback would be especially useful since you might have a much more "unbiased" view of what is difficult/easy when adding a model. | 01-18-2021 22:16:01 | 01-18-2021 22:16:01 | 
transformers | 9,666 | closed | Fine-tuning LM with NSP | Environment info
transformers-4.2.1
PyTorch
tokenizers-0.9.4
sentencepiece-0.1.95
When fine-tuning BERT, my script runs but does not complete because of this error:
Traceback (most recent call last):
File "/content/run.py", line 757, in <module>
main()
File "/content/run.py", line 656, in main
labels=lm_label_ids, next_sentence_label=is_next)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/bert/modeling_bert.py", line 1065, in forward
return_dict=return_dict,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/bert/modeling_bert.py", line 968, in forward
return_dict=return_dict,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/bert/modeling_bert.py", line 566, in forward
output_attentions,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/bert/modeling_bert.py", line 460, in forward
past_key_value=self_attn_past_key_value,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/bert/modeling_bert.py", line 393, in forward
output_attentions,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/bert/modeling_bert.py", line 314, in forward
attention_probs = nn.Softmax(dim=-1)(attention_scores)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/activation.py", line 1198, in forward
return F.softmax(input, self.dim, _stacklevel=5)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 1512, in softmax
ret = input.softmax(dim)
RuntimeError: CUDA error: device-side assert triggered
++++++++++++++and my code is here+++++++++++++++++
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import logging
import argparse
#from tqdm import tqdm
#from tqdm import trange
from tqdm import notebook , trange
import numpy as np
import torch
from torch.utils.data import DataLoader, RandomSampler , SequentialSampler
from torch.utils.data.distributed import DistributedSampler
#from pytorch_pretrained_bert.tokenization import BertTokenizer
#from pytorch_pretrained_bert.modeling import BertForPreTraining
from transformers import BertTokenizer, BertForPreTraining
#from pytorch_pretrained_bert.optimization import BertAdam
from transformers import XLNetTokenizer
from transformers import AdamW, get_linear_schedule_with_warmup
#from transformers import BertForPreTraining
import sentencepiece as spm
from torch.utils.data import Dataset
import random
logging.basicConfig(format='%(asctime)s - %(levelname)s - %(name)s - %(message)s',
datefmt='%m/%d/%Y %H:%M:%S',
level=logging.INFO)
logger = logging.getLogger(__name__)
def warmup_linear(x, warmup=0.002):
if x < warmup:
return x / warmup
return 1.0 - x
def accuracy(out, labels, total_test):
class_preds = out.data.cpu().numpy().argmax(axis=-1)
labels = labels.data.cpu().numpy()
return np.sum(class_preds == labels) / total_test
class BERTDataset(Dataset):
def __init__(self, corpus_path, tokenizer, seq_len, encoding="utf-8", corpus_lines=None, on_memory=True):
self.vocab = tokenizer.get_vocab()
self.tokenizer = tokenizer
self.seq_len = seq_len
self.on_memory = on_memory
self.corpus_lines = corpus_lines # number of non-empty lines in input corpus
self.corpus_path = corpus_path
self.encoding = encoding
self.current_doc = 0 # to avoid random sentence from same doc
# for loading samples directly from file
self.sample_counter = 0 # used to keep track of full epochs on file
self.line_buffer = None # keep second sentence of a pair in memory and use as first sentence in next pair
# for loading samples in memory
self.current_random_doc = 0
self.num_docs = 0
self.sample_to_doc = [] # map sample index to doc and line
# load samples into memory
if on_memory:
self.all_docs = []
doc = []
self.corpus_lines = 0
with open(corpus_path, "r", encoding=encoding) as f:
for line in notebook.tqdm(f, desc="Loading Dataset", total=corpus_lines):
line = line.strip()
if line == "":
self.all_docs.append(doc)
doc = []
# remove last added sample because there won't be a subsequent line anymore in the doc
self.sample_to_doc.pop()
else:
# store as one sample
sample = {"doc_id": len(self.all_docs),
"line": len(doc)}
self.sample_to_doc.append(sample)
doc.append(line)
self.corpus_lines = self.corpus_lines + 1
# if last row in file is not empty
if self.all_docs[-1] != doc:
self.all_docs.append(doc)
self.sample_to_doc.pop()
self.num_docs = len(self.all_docs)
# load samples later lazily from disk
else:
if self.corpus_lines is None:
with open(corpus_path, "r", encoding=encoding) as f:
self.corpus_lines = 0
for line in notebook.tqdm(f, desc="Loading Dataset", total=corpus_lines):
if line.strip() == "":
self.num_docs += 1
else:
self.corpus_lines += 1
# if doc does not end with empty line
if line.strip() != "":
self.num_docs += 1
self.file = open(corpus_path, "r", encoding=encoding)
self.random_file = open(corpus_path, "r", encoding=encoding)
def __len__(self):
# last line of doc won't be used, because there's no "nextSentence". Additionally, we start counting at 0.
return self.corpus_lines - self.num_docs - 1
def __getitem__(self, item):
cur_id = self.sample_counter
self.sample_counter += 1
if not self.on_memory:
# after one epoch we start again from beginning of file
if cur_id != 0 and (cur_id % len(self) == 0):
self.file.close()
self.file = open(self.corpus_path, "r", encoding=self.encoding)
t1, t2, is_next_label = self.random_sent(item)
# tokenize
tokens_a = self.tokenizer.tokenize(t1)
tokens_b = self.tokenizer.tokenize(t2)
# combine to one sample
cur_example = InputExample(guid=cur_id, tokens_a=tokens_a, tokens_b=tokens_b, is_next=is_next_label)
# transform sample to features
cur_features = convert_example_to_features(cur_example, self.seq_len, self.tokenizer)
cur_tensors = (torch.tensor(cur_features.input_ids),
torch.tensor(cur_features.input_mask),
torch.tensor(cur_features.segment_ids),
torch.tensor(cur_features.lm_label_ids),
torch.tensor(cur_features.is_next))
return cur_tensors
def random_sent(self, index):
"""
Get one sample from corpus consisting of two sentences. With prob. 50% these are two subsequent sentences
from one doc. With 50% the second sentence will be a random one from another doc.
:param index: int, index of sample.
:return: (str, str, int), sentence 1, sentence 2, isNextSentence Label
"""
t1, t2 = self.get_corpus_line(index)
if random.random() > 0.5:
label = 0
else:
t2 = self.get_random_line()
label = 1
assert len(t1) > 0
assert len(t2) > 0
return t1, t2, label
def get_corpus_line(self, item):
"""
Get one sample from corpus consisting of a pair of two subsequent lines from the same doc.
:param item: int, index of sample.
:return: (str, str), two subsequent sentences from corpus
"""
t1 = ""
t2 = ""
assert item < self.corpus_lines
if self.on_memory:
sample = self.sample_to_doc[item]
t1 = self.all_docs[sample["doc_id"]][sample["line"]]
t2 = self.all_docs[sample["doc_id"]][sample["line"] + 1]
# used later to avoid random nextSentence from same doc
self.current_doc = sample["doc_id"]
return t1, t2
else:
if self.line_buffer is None:
# read first non-empty line of file
while t1 == "":
t1 = self.file.__next__().strip()
t2 = self.file.__next__().strip()
else:
# use t2 from previous iteration as new t1
t1 = self.line_buffer
t2 = self.file.__next__().strip()
# skip empty rows that are used for separating documents and keep track of current doc id
while t2 == "" or t1 == "":
t1 = self.file.__next__().strip()
t2 = self.file.__next__().strip()
self.current_doc = self.current_doc + 1
self.line_buffer = t2
assert t1 != ""
assert t2 != ""
return t1, t2
def get_random_line(self):
"""
Get random line from another document for nextSentence task.
:return: str, content of one line
"""
# Similar to original tf repo: This outer loop should rarely go for more than one iteration for large
# corpora. However, just to be careful, we try to make sure that
# the random document is not the same as the document we're processing.
for _ in range(10):
if self.on_memory:
rand_doc_idx = random.randint(0, len(self.all_docs) - 1)
rand_doc = self.all_docs[rand_doc_idx]
line = rand_doc[random.randrange(len(rand_doc))]
else:
rand_index = random.randint(1, self.corpus_lines if self.corpus_lines < 1000 else 1000)
# pick random line
for _ in range(rand_index):
line = self.get_next_line()
# check if our picked random line is really from another doc like we want it to be
if self.current_random_doc != self.current_doc:
break
return line
def get_next_line(self):
""" Gets next line of random_file and starts over when reaching end of file"""
try:
line = self.random_file.__next__().strip()
# keep track of which document we are currently looking at to later avoid having the same doc as t1
if line == "":
self.current_random_doc = self.current_random_doc + 1
line = self.random_file.__next__().strip()
except StopIteration:
self.random_file.close()
self.random_file = open(self.corpus_path, "r", encoding=self.encoding)
line = self.random_file.__next__().strip()
return line
class InputExample(object):
"""A single training/test example for the language model."""
def __init__(self, guid, tokens_a, tokens_b=None, is_next=None, lm_labels=None):
"""Constructs a InputExample.
Args:
guid: Unique id for the example.
tokens_a: string. The untokenized text of the first sequence. For single
sequence tasks, only this sequence must be specified.
tokens_b: (Optional) string. The untokenized text of the second sequence.
Only must be specified for sequence pair tasks.
label: (Optional) string. The label of the example. This should be
specified for train and dev examples, but not for test examples.
"""
self.guid = guid
self.tokens_a = tokens_a
self.tokens_b = tokens_b
self.is_next = is_next # nextSentence
self.lm_labels = lm_labels # masked words for language model
class InputFeatures(object):
"""A single set of features of data."""
def __init__(self, input_ids, input_mask, segment_ids, is_next, lm_label_ids):
self.input_ids = input_ids
self.input_mask = input_mask
self.segment_ids = segment_ids
self.is_next = is_next
self.lm_label_ids = lm_label_ids
def random_word(tokens, tokenizer):
"""
Masking some random tokens for Language Model task with probabilities as in the original BERT paper.
:param tokens: list of str, tokenized sentence.
:param tokenizer: Tokenizer, object used for tokenization (we need it's vocab here)
:return: (list of str, list of int), masked tokens and related labels for LM prediction
"""
output_label = []
for i, token in enumerate(tokens):
prob = random.random()
# mask token with 15% probability
if prob < 0.15:
prob /= 0.15
# 80% randomly change token to mask token
if prob < 0.8:
tokens[i] = "[MASK]"
# 10% randomly change token to random token
elif prob < 0.9:
tokens[i] = random.choice(list(tokenizer.get_vocab()))
# -> rest 10% randomly keep current token
# append current token to output (we will predict these later)
try:
output_label.append(tokenizer.convert_tokens_to_ids(token))
except KeyError:
# For unknown words (should not occur with BPE vocab)
output_label.append(tokenizer.convert_tokens_to_ids("[UNK]"))
logger.warning("Cannot find token '{}' in vocab. Using [UNK] insetad".format(token))
else:
# no masking token (will be ignored by loss function later)
output_label.append(-100)
return tokens, output_label
def convert_example_to_features(example, max_seq_length, tokenizer):
"""
Convert a raw sample (pair of sentences as tokenized strings) into a proper training sample with
IDs, LM labels, input_mask, CLS and SEP tokens etc.
:param example: InputExample, containing sentence input as strings and is_next label
:param max_seq_length: int, maximum length of sequence.
:param tokenizer: Tokenizer
:return: InputFeatures, containing all inputs and labels of one sample as IDs (as used for model training)
"""
tokens_a = example.tokens_a
tokens_b = example.tokens_b
# Modifies `tokens_a` and `tokens_b` in place so that the total
# length is less than the specified length.
# Account for [CLS], [SEP], [SEP] with "- 3"
_truncate_seq_pair(tokens_a, tokens_b, max_seq_length - 3)
t1_random, t1_label = random_word(tokens_a, tokenizer)
t2_random, t2_label = random_word(tokens_b, tokenizer)
# concatenate lm labels and account for CLS, SEP, SEP
cls_id = tokenizer.convert_tokens_to_ids(["[CLS]"])[0]
sep_id = tokenizer.convert_tokens_to_ids(["[SEP]"])[0]
pad_id = tokenizer.convert_tokens_to_ids(["[PAD]"])[0]
lm_label_ids = ([cls_id] + t1_label + [sep_id] + t2_label + [sep_id])
# The convention in BERT is:
# (a) For sequence pairs:
# tokens: [CLS] is this jack ##son ##ville ? [SEP] no it is not . [SEP]
# type_ids: 0 0 0 0 0 0 0 0 1 1 1 1 1 1
# (b) For single sequences:
# tokens: [CLS] the dog is hairy . [SEP]
# type_ids: 0 0 0 0 0 0 0
#
# Where "type_ids" are used to indicate whether this is the first
# sequence or the second sequence. The embedding vectors for `type=0` and
# `type=1` were learned during pre-training and are added to the wordpiece
# embedding vector (and position vector). This is not *strictly* necessary
    # since the [SEP] token unambiguously separates the sequences, but it makes
# it easier for the model to learn the concept of sequences.
#
# For classification tasks, the first vector (corresponding to [CLS]) is
    # used as the "sentence vector". Note that this only makes sense because
# the entire model is fine-tuned.
tokens = []
segment_ids = []
tokens.append("[CLS]")
segment_ids.append(0)
for token in tokens_a:
tokens.append(token)
segment_ids.append(0)
tokens.append("[SEP]")
segment_ids.append(0)
assert len(tokens_b) > 0
for token in tokens_b:
tokens.append(token)
segment_ids.append(1)
tokens.append("[SEP]")
segment_ids.append(1)
input_ids = tokenizer.convert_tokens_to_ids(tokens)
# The mask has 1 for real tokens and 0 for padding tokens. Only real
# tokens are attended to.
input_mask = [1] * len(input_ids)
# Zero-pad up to the sequence length.
while len(input_ids) < max_seq_length:
input_ids.append(0)
input_mask.append(0)
segment_ids.append(0)
lm_label_ids.append(-100)
assert len(input_ids) == max_seq_length
assert len(input_mask) == max_seq_length
assert len(segment_ids) == max_seq_length
assert len(lm_label_ids) == max_seq_length
if example.guid < 5:
logger.info("*** Example ***")
logger.info("guid: %s" % (example.guid))
logger.info("tokens: %s" % " ".join(
[str(x) for x in tokens]))
logger.info("input_ids: %s" % " ".join([str(x) for x in input_ids]))
logger.info("input_mask: %s" % " ".join([str(x) for x in input_mask]))
logger.info(
"segment_ids: %s" % " ".join([str(x) for x in segment_ids]))
logger.info("LM label: %s " % (lm_label_ids))
logger.info("Is next sentence label: %s " % (example.is_next))
features = InputFeatures(input_ids=input_ids,
input_mask=input_mask,
segment_ids=segment_ids,
lm_label_ids=lm_label_ids,
is_next=example.is_next)
return features
def main():
parser = argparse.ArgumentParser()
## Required parameters
parser.add_argument("--train_file",
default=None,
type=str,
required=True,
help="The input train corpus.")
parser.add_argument("--test_file",
default=None,
type=str,
required=True,
help="The input test corpus.")
parser.add_argument("--tokenizer_model", default=None, type=str, required=True,
help="tokenizer pre-trained model selected in the list: bert-base-uncased, "
"bert-large-uncased, bert-base-cased, bert-base-multilingual, bert-base-chinese.")
parser.add_argument("--bert_model", default=None, type=str, required=True,
help="Bert pre-trained model selected in the list: bert-base-uncased, "
"bert-large-uncased, bert-base-cased, bert-base-multilingual, bert-base-chinese.")
parser.add_argument("--config_file", default=None, type=str, required=True,
help="Bert pre-trained model selected in the list: bert-base-uncased, "
"bert-large-uncased, bert-base-cased, bert-base-multilingual, bert-base-chinese.")
parser.add_argument("--output_dir",
default=None,
type=str,
required=True,
help="The output directory where the model checkpoints will be written.")
## Other parameters
parser.add_argument("--max_seq_length",
default=128,
type=int,
help="The maximum total input sequence length after WordPiece tokenization. \n"
"Sequences longer than this will be truncated, and sequences shorter \n"
"than this will be padded.")
parser.add_argument("--train_batch_size",
default=32,
type=int,
help="Total batch size for training.")
parser.add_argument("--eval_batch_size",
default=32,
type=int,
help="Total batch size for eval.")
parser.add_argument("--learning_rate",
default=5e-5,
type=float,
help="The initial learning rate for Adam.")
parser.add_argument("--num_train_epochs",
default=4,
type=float,
help="Total number of training epochs to perform.")
parser.add_argument("--adam_epsilon",
default=1e-8,
type=float,
help="Proportion of training to perform linear learning rate warmup for. "
"E.g., 0.1 = 10%% of training.")
parser.add_argument("--no_cuda",
action='store_true',
help="Whether not to use CUDA when available")
parser.add_argument("--on_memory",
action='store_true',
help="Whether to load train samples into memory or use disk")
parser.add_argument("--do_lower_case",
action='store_true',
help="Whether to lower case the input text. True for uncased models, False for cased models.")
parser.add_argument("--local_rank",
type=int,
default=-1,
help="local_rank for distributed training on gpus")
parser.add_argument('--seed',
type=int,
default=42,
help="random seed for initialization")
parser.add_argument('--gradient_accumulation_steps',
type=int,
default=1,
help="Number of updates steps to accumualte before performing a backward/update pass.")
parser.add_argument('--fp16',
action='store_true',
help="Whether to use 16-bit float precision instead of 32-bit")
parser.add_argument('--loss_scale',
type=float, default=0,
help="Loss scaling to improve fp16 numeric stability. Only used when fp16 set to True.\n"
"0 (default value): dynamic loss scaling.\n"
"Positive power of 2: static loss scaling value.\n")
args = parser.parse_args()
if args.local_rank == -1 or args.no_cuda:
device = torch.device("cuda" if torch.cuda.is_available() and not args.no_cuda else "cpu")
n_gpu = torch.cuda.device_count()
else:
torch.cuda.set_device(args.local_rank)
device = torch.device("cuda", args.local_rank)
n_gpu = 1
        # Initializes the distributed backend which will take care of synchronizing nodes/GPUs
torch.distributed.init_process_group(backend='nccl')
logger.info("device: {} n_gpu: {}, distributed training: {}, 16-bits training: {}".format(
device, n_gpu, bool(args.local_rank != -1), args.fp16))
if args.gradient_accumulation_steps < 1:
raise ValueError("Invalid gradient_accumulation_steps parameter: {}, should be >= 1".format(
args.gradient_accumulation_steps))
args.train_batch_size = int(args.train_batch_size / args.gradient_accumulation_steps)
random.seed(args.seed)
np.random.seed(args.seed)
torch.manual_seed(args.seed)
if n_gpu > 0:
torch.cuda.manual_seed_all(args.seed)
#if not args.do_train and not args.do_eval:
# raise ValueError("At least one of `do_train` or `do_eval` must be True.")
if os.path.exists(args.output_dir) and os.listdir(args.output_dir):
raise ValueError("Output directory ({}) already exists and is not empty.".format(args.output_dir))
os.makedirs(args.output_dir, exist_ok=True)
# tokenizer = BertTokenizer.from_pretrained(args.bert_model, do_lower_case=args.do_lower_case)
tokenizer = XLNetTokenizer.from_pretrained(args.tokenizer_model)
# train_examples = None
num_train_steps = None
print("Loading Train Dataset", args.train_file)
train_dataset = BERTDataset(args.train_file, tokenizer, seq_len=args.max_seq_length,
corpus_lines=None, on_memory=args.on_memory)
print("Loading eval Dataset", args.test_file)
eval_dataset = BERTDataset(args.test_file, tokenizer, seq_len=args.max_seq_length,
corpus_lines=None, on_memory=args.on_memory)
num_train_steps = int(
len(train_dataset) / args.train_batch_size / args.gradient_accumulation_steps * args.num_train_epochs)
# Prepare model
model = BertForPreTraining.from_pretrained(
args.bert_model,
config=args.config_file,
output_attentions=False, # Whether the model returns attentions weights.
output_hidden_states=False, # Whether the model returns all hidden-states.
)
# Tell pytorch to run this model on the GPU.
model.to(device)
if args.fp16:
model.half()
if args.local_rank != -1:
try:
from apex.parallel import DistributedDataParallel as DDP
except ImportError:
raise ImportError(
"Please install apex from https://www.github.com/nvidia/apex to use distributed and fp16 training.")
model = DDP(model)
elif n_gpu > 1:
model = torch.nn.DataParallel(model)
# Prepare optimizer
'''
param_optimizer = list(model.named_parameters())
no_decay = ['bias', 'LayerNorm.bias', 'LayerNorm.weight']
optimizer_grouped_parameters = [
{'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)], 'weight_decay': 0.01},
{'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], 'weight_decay': 0.0}
]
if args.fp16:
try:
from apex.optimizers import FP16_Optimizer
from apex.optimizers import FusedAdam
except ImportError:
raise ImportError(
"Please install apex from https://www.github.com/nvidia/apex to use distributed and fp16 training.")
optimizer = FusedAdam(optimizer_grouped_parameters,
lr=args.learning_rate,
bias_correction=False,
max_grad_norm=1.0)
if args.loss_scale == 0:
optimizer = FP16_Optimizer(optimizer, dynamic_loss_scale=True)
else:
optimizer = FP16_Optimizer(optimizer, static_loss_scale=args.loss_scale)
else:
optimizer = AdamW(optimizer_grouped_parameters,
lr=args.learning_rate,
warmup=args.warmup_proportion,
t_total=num_train_steps)
'''
logger.info("***** Running training *****")
logger.info(" Num examples = %d", len(train_dataset))
logger.info(" Batch size = %d", args.train_batch_size)
logger.info(" Num steps = %d", num_train_steps)
if args.local_rank == -1:
train_sampler = SequentialSampler(train_dataset)
eval_sampler = SequentialSampler(eval_dataset)
else:
# TODO: check if this works with current data generator from disk that relies on file.__next__
# (it doesn't return item back by index)
train_sampler = DistributedSampler(train_dataset)
eval_sampler = DistributedSampler(eval_dataset)
train_dataloader = DataLoader(train_dataset, sampler=train_sampler, batch_size=args.train_batch_size)
eval_dataloader = DataLoader(eval_dataset, sampler=eval_sampler, batch_size=args.train_batch_size)
# optimizer
t_total = len(train_dataloader) // args.train_batch_size
no_decay = ['bias', 'LayerNorm.weight']
optimizer_grouped_parameters = [
{'params': [p for n, p in model.named_parameters() if
not any(nd in n for nd in no_decay)],
'weight_decay': 0.01},
{'params': [p for n, p in model.named_parameters() if any(
nd in n for nd in no_decay)], 'weight_decay': 0.0}
]
optimizer = AdamW(optimizer_grouped_parameters, lr=args.learning_rate, eps=args.adam_epsilon)
scheduler = get_linear_schedule_with_warmup(
optimizer, 0, t_total)
model.train()
tr_loss = 0
global_step = 0
acc = 0
train_loss = 0.0
nb_tr_examples, nb_tr_steps = 0, 0
for _ in trange(int(args.num_train_epochs), desc="Epoch"):
for batch in notebook.tqdm(train_dataloader, desc="Train Iteration"):
batch = tuple(t.to(device) for t in batch)
input_ids, input_mask, segment_ids, lm_label_ids, is_next = batch
outputs = model(input_ids=input_ids, attention_mask=input_mask, token_type_ids=segment_ids,
labels=lm_label_ids, next_sentence_label=is_next)
loss = outputs.loss
'''
if n_gpu > 1:
loss = loss.mean() # mean() to average on multi-gpu.
if args.gradient_accumulation_steps > 1:
loss = loss / args.gradient_accumulation_steps
if args.fp16:
optimizer.backward(outputs.loss)
else:
loss.backward()
'''
print(loss)
loss.backward()
tr_loss += loss.item()
nb_tr_examples += input_ids.size(0)
nb_tr_steps += 1
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1)
optimizer.step()
scheduler.step()
model.zero_grad()
global_step += 1
'''
if (step + 1) % args.gradient_accumulation_steps == 0:
# modify learning rate with special warm up BERT uses
lr_this_step = args.learning_rate * warmup_linear(global_step / num_train_steps, args.warmup_proportion)
for param_group in optimizer.param_groups:
param_group['lr'] = lr_this_step
optimizer.step()
scheduler.step()
optimizer.zero_grad()
global_step += 1
'''
train_loss = tr_loss / global_step
perplexity = torch.exp(torch.tensor(train_loss)).item()
print("Training loss {} ".format("{:.3f}".format(train_loss)))
print("Training perplexity {}".format("{:.3f}".format(perplexity)))
logger.info("***** Running evaluation *****")
logger.info(" Num examples = %d", len(eval_dataset))
logger.info(" Batch size = %d", batch_size)
eval_loss = 0.0
acc = 0
nb_eval_steps = 0
for batch in notebook.tqdm(eval_dataloader, desc='Evaluating'):
batch = tuple(t.to(device) for t in batch)
input_ids, input_mask, segment_ids, lm_label_ids, is_next = batch
with torch.no_grad():
            # pass arguments by keyword so they line up with BertForPreTraining's forward signature
            outputs = model(input_ids=input_ids, attention_mask=input_mask, token_type_ids=segment_ids,
                            labels=lm_label_ids, next_sentence_label=is_next)
loss = outputs.loss
eval_loss += loss.mean().item()
nb_eval_steps += 1
eval_loss = eval_loss / nb_eval_steps
perplexity = torch.exp(torch.tensor(eval_loss)).item()
print("Evalution loss {} ".format("{:.3f}".format(eval_loss)))
print("Evalution perplexity {}".format("{:.3f}".format(perplexity)))
if not os.path.exists(output_dir):
os.makedirs(output_dir)
print("Saving model to %s" % args.output_dir)
# Save a trained model, configuration and tokenizer using `save_pretrained()`.
# They can then be reloaded using `from_pretrained()`
model_to_save = model.module if hasattr(model, 'module') else model # Take care of distributed/parallel training
model_to_save.save_pretrained(args.output_dir)
tokenizer.save_pretrained(args.output_dir)
# Save a trained model
#logger.info("** ** * Saving fine - tuned model ** ** * ")
model_to_save = model.module if hasattr(model, 'module') else model # Only save the model it-self
#if args.do_train:
# model_to_save.save_pretrained(self.output_dir)
# tokenizer.save_pretrained(self.output_dir)
def _truncate_seq_pair(tokens_a, tokens_b, max_length):
"""Truncates a sequence pair in place to the maximum length."""
# This is a simple heuristic which will always truncate the longer sequence
# one token at a time. This makes more sense than truncating an equal percent
# of tokens from each, since if one sequence is very short then each token
# that's truncated likely contains more information than a longer sequence.
while True:
total_length = len(tokens_a) + len(tokens_b)
if total_length <= max_length:
break
if len(tokens_a) > len(tokens_b):
tokens_a.pop()
else:
tokens_b.pop()
if __name__ == "__main__":
main()
| 01-18-2021 21:47:07 | 01-18-2021 21:47:07 | Hi @ahmedkotb98
We would love to help, but it would be better if you could only post the relevant minimal code snippet to reproduce the issue, rather than a bunch of scripts.
Also, it's better to ask such type of questions on the [forum](https://discuss.huggingface.co/) first. Here's our guide on [how to request support](https://discuss.huggingface.co/t/how-to-request-support/3128).
Thanks.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,665 | closed | IndexError: index out of bounds when running run_mlm.py |
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.3.0.dev0
- Platform: Linux-4.15.0-46-generic-x86_64-with-Ubuntu-16.04-xenial
- Python version: 3.6.7
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): 2.4.0 (False)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
?
## Information
Model I am using (Bert, XLNet ...): neuralmind/bert-base-portuguese-cased
## To reproduce
Steps to reproduce the behavior:
I want to fine-tune a pretrained language model using [run_mlm.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). I have a corpus file (full_corpus.csv) that contains one doc (raw text) per line. When I run the following command:
`python run_mlm.py --model_name_or_path "neuralmind/bert-base-portuguese-cased" --train_file ../data/full_corpus.csv --cache_dir /home/mwon/data-mwon/paperChega/src_classificador/data/hugingface --output models/ --do_train`
it results in the error:
```
Traceback (most recent call last):
File "run_mlm.py", line 449, in <module>
main()
File "run_mlm.py", line 384, in main
load_from_cache_file=not data_args.overwrite_cache,
File "/mnt/sdb/data-mwon/paperChega/env2/lib/python3.6/site-packages/datasets/dataset_dict.py", line 303, in map
for k, dataset in self.items()
File "/mnt/sdb/data-mwon/paperChega/env2/lib/python3.6/site-packages/datasets/dataset_dict.py", line 303, in <dictcomp>
for k, dataset in self.items()
File "/mnt/sdb/data-mwon/paperChega/env2/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1260, in map
update_data=update_data,
File "/mnt/sdb/data-mwon/paperChega/env2/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 157, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/mnt/sdb/data-mwon/paperChega/env2/lib/python3.6/site-packages/datasets/fingerprint.py", line 163, in wrapper
out = func(self, *args, **kwargs)
File "/mnt/sdb/data-mwon/paperChega/env2/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1529, in _map_single
writer.write_batch(batch)
File "/mnt/sdb/data-mwon/paperChega/env2/lib/python3.6/site-packages/datasets/arrow_writer.py", line 278, in write_batch
pa_table = pa.Table.from_pydict(typed_sequence_examples)
File "pyarrow/table.pxi", line 1474, in pyarrow.lib.Table.from_pydict
File "pyarrow/array.pxi", line 322, in pyarrow.lib.asarray
File "pyarrow/array.pxi", line 222, in pyarrow.lib.array
File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
File "/mnt/sdb/data-mwon/paperChega/env2/lib/python3.6/site-packages/datasets/arrow_writer.py", line 100, in __arrow_array__
if trying_type and out[0].as_py() != self.data[0]:
File "pyarrow/array.pxi", line 1058, in pyarrow.lib.Array.__getitem__
File "pyarrow/array.pxi", line 540, in pyarrow.lib._normalize_index
IndexError: index out of bounds
```
| 01-18-2021 21:35:24 | 01-18-2021 21:35:24 | @sgugger <|||||>It's very hard to help you without being able to reproduce the bug. Could you share a small version of your csv file that reproduces it?<|||||>Yes, no problem. I just tried with a sample created from the `head`of my `full_corpus.csv` file and got the same error. This is the head:
```
A tomada de posse já está marcada para esta quarta feira ao fim da tarde...
Lobo Xavier está infetado com Covid-19. Esteve no Conselho de Estado na terça-feira.
"Porque estΓ‘ descida Γ© temporΓ‘ria. Se descessem agora, depois nΓ£o poderiam explicar a necessidade de uma nova subida."
Em acumulação com o Banco de Portugal.
"EUA: HΓ‘ muitas maneiras de isto acabar mal. A newsletter Novo Normal do β¦β© no ECO. Um guia do que pode suceder nas eleiΓ§Γ΅es americanas (sentem-se, Γ© melhor)"
Costa vai substituir presidente do Tribunal de Contas via
Como criar filhos felizes?
Uma economia a 90 por cento via
Apoio à Retoma Progressiva vai permitir suspender contratos via Falta saber qual o valor do salário e quem o paga.
O perigo de esperar que o Estado nos salve
```<|||||>The problem is that you are not passing a `max_seq_length` so the script uses the tokenizer `model_lax_length`, which is in turn excessively large (1000000000000000019884624838656). So this results in all your texts not even being able to produce one batch.
Just pass `--max_seq_length 512` or something else and you should be good.<|||||>Ok, thanks. It's working now. |
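For context, a small sketch of why the missing `--max_seq_length` leads to zero usable examples (this is not part of the script; the token-count stand-in below is an assumption):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("neuralmind/bert-base-portuguese-cased")
block_size = tokenizer.model_max_length
print(block_size)  # the huge placeholder value quoted above, not a usable sequence length

token_ids = list(range(5_000))  # stand-in for the concatenated token ids of the corpus
total_length = (len(token_ids) // block_size) * block_size
print(total_length)  # 0 -> the grouping step produces no training examples at all
```
As suggested above, passing `--max_seq_length 512` replaces that placeholder with a realistic block size.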
transformers | 9,664 | closed | Missing `return_dict` in Doc example | # What does this PR do?
Fixes a crash in [Summary of the tasks](https://huggingface.co/transformers/task_summary.html) documentation, by adding `return_dict=True` to the model() function, as we need `start_logits` and `end_logits` afterwards.
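For reference, a hedged sketch of the pattern the doc example relies on once `return_dict=True` is passed (the checkpoint is the one used on that page; the question/context strings here are made up):
```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

name = "bert-large-uncased-whole-word-masking-finetuned-squad"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForQuestionAnswering.from_pretrained(name)

inputs = tokenizer("Who maintains Transformers?", "Transformers is maintained by Hugging Face.", return_tensors="pt")
outputs = model(**inputs, return_dict=True)  # the argument this PR adds to the snippet

answer_start = torch.argmax(outputs.start_logits)  # these attributes need the dict-style output
answer_end = torch.argmax(outputs.end_logits) + 1
```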
## Fixes
[Issue 9043](https://github.com/huggingface/transformers/issues/9043) (reproduced in 4.2.1)
## Who can review?
@LysandreJik @sgugger
| 01-18-2021 20:54:19 | 01-18-2021 20:54:19 | Hello! Are you sure you checked in v4.2.1? I just checked on both `master` and v4.2.1 and the code executes (as it should!).
The `return_dict` was set to be `True` by default in v4.0.0.<|||||>My bad, I was in the wrong pyenv. Closing the PR. |
transformers | 9,663 | closed | Fix DPRReaderTokenizer's attention_mask | # What does this PR do?
This PR fixes an issue with the `attention_mask` not being properly generated by the DPRReaderTokenizer. Please see issue #9555 for more details.
I added an integration test that checks the DPRReader following a similar example in the test file.
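For illustration, a hedged sketch of the behaviour the integration test exercises (the checkpoint and passages are assumptions, not taken from this PR): with the fix, the returned `attention_mask` has the same shape as `input_ids` and zeroes out the padded positions.
```python
from transformers import DPRReaderTokenizer

tokenizer = DPRReaderTokenizer.from_pretrained("facebook/dpr-reader-single-nq-base")
encoded = tokenizer(
    questions="What is love?",
    titles=["Haddaway", "What Is Love"],
    texts=[
        "'What Is Love' is a song recorded by the artist Haddaway.",
        "It was released in 1993.",
    ],
    padding=True,
    return_tensors="pt",
)
assert encoded["attention_mask"].shape == encoded["input_ids"].shape
print(encoded["attention_mask"])  # zeros only where input_ids holds padding
```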
I have some test failures due to
```
"AttributeError: Can't pickle local object 'measure_peak_memory_cpu.<locals>.MemoryMeasureProcess'"
```
and
```
"AttributeError: module 'wandb' has no attribute 'ensure_configured'"
```
which seem to be unrelated to my code changes.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@LysandreJik and @lhoestq would probably be best positioned to review.
| 01-18-2021 20:44:00 | 01-18-2021 20:44:00 | Thank you both. Should I close the corresponding issue?<|||||>Just did! Thanks! |
transformers | 9,662 | closed | Fix TFTrainer prediction output | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
This PR fixes two issues:
1) The prediction output (specifically prediction_loop) of TFTrainer does not match the dataset cardinality. If the number of examples is divisible by eval_batch_size, the first batch is predicted twice. Else, the first n examples, where n = eval_batch_size - num_examples % eval_batch_size, are predicted twice.
This results in an output shape that is different from dataset cardinality. This also causes the output of evaluate(), including eval_loss, to be incorrect (e.g. loss is computed twice for the first few examples).
2) The evaluation loss only works the first time it is computed. Subsequent computations result in 0. Below is a sample output when an evaluation strategy is set during training:
[INFO|trainer_tf.py:398] 2021-01-18 01:34:58,856 >> {**'eval_loss': 0.6290212145038679**, 'eval_acc': 0.6875, ... 'step': 10}
[INFO|trainer_tf.py:398] 2021-01-18 01:35:10,852 >> {**'eval_loss': 0.0**, 'eval_acc': 0.6875, ..., 'step': 20}
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
tensorflow: @jplu
Trainer: @sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| 01-18-2021 15:57:32 | 01-18-2021 15:57:32 | Sorry for the confusion. Here is a detailed description of the first issue:
If the number of examples is divisible by eval_batch_size, the first batch is predicted twice. Say we have the following examples (n=8): A, B, C, D, E, F, G, H and eval_batch_size = 4. This will create 2 batches: (A,B,C,D) and (E,F,G,H). The current implementation of prediction_loop() will run batch (A,B,C,D), (E,F,G,H) and (A,B,C,D) again. This produces an output with a shape of 12 instead of 8.
This causes eval_loss to be computed as (loss(A,B,C,D) + loss(E,F,G,H) + loss(A,B,C,D)) / 2. Other metrics are also computed on A, B, C, D, E, F, G, H, A, B, C, D instead of only on A, B, C, D, E, F, G, H.
If the number of examples is not divisible by eval_batch_size, the first batch and part of the second batch are predicted twice. Say we have examples (n=8): A, B, C, D, E, F, G, H and eval_batch_size = 5. The current implementation of prediction_loop() will run batch (A,B,C,D,E), (F,G,H,A,B) and (C,D,E,F,G). This produces an output with a shape of 15 instead of 8. Again, the eval_loss and other metrics are computed incorrectly.
[This](https://colab.research.google.com/drive/1JH-269TcWWzowDngCmtpEwnmsFbVtL7K) is an example on an actual dataset (code is taken from the run_tf_glue.py example). Notice the assertion error when comparing the number of predicted results to the number of examples in the dataset.
I did realize that I made a complicated solution to the problem. The main change needed was to not call repeat(). I have committed the simpler solution.<|||||>Ok, from what I understand of the problem, what you have done is still not acceptable sorry, the build of a dataset must stay as it is because, `repeat` is very important and is mandatory when training on TPU, and `drop_remainder` must stay accessible through the training argument `dataloader_drop_last`, what if someone want it to be `False`?
If the problem is in the `prediction_loop` and goes one step too far, you can just stop the loop one step before by replacing:
```
if step == steps:
break
```
by
```
if step == steps - 1:
break
```
This should work.<|||||>` if step == steps: - 1` only fixes the problem when `num_examples % batch_size`.
I am sorry if I missed this, but it seems like `prediction_loop` is only called during evaluation/prediction. I am not quite sure how it affects the training process. I did not change `get_train_tfdataset()`; `repeat()` is still in it. At the very least, `repeat()` should be removed in `get_test_tfdataset()` (i.e.during prediction), although I argue that it should be removed in
`get_test_tfdataset()` too because the evaluation loss being reported is incorrect.
<|||||>After giving a deeper look at the issue, I can see three things to fix:
1. Replace `if step == steps:` by `if step == steps - 1:` line 348
2. Replace `metrics["eval_loss"] = self.eval_loss.result().numpy() / steps` by `metrics["eval_loss"] = self.eval_loss.result().numpy() / (steps - 1)` line 356
3. Move `self.eval_loss = tf.keras.metrics.Sum()` from the `prediction_loop` method inside the `__init__` method.
After having done those changes, run a training with the `--dataloader_drop_last` argument. Now you should not see the `0.0` loss value anymore.
The argument `--dataloader_drop_last` removes the last batch of the dataset. In order to be sure of that you can know the real size of your dataset by doing `(dataset.cardinality().numpy()// eval_batch_size) * eval_batch_size`. In the case of the MRPC dataset, `dataset.cardinality().numpy() == 408`, while the effective number of examples on which you will evaluate your model is `400`.<|||||>Thank you for fixing the 0.0 loss value!
Unfortunately, I think the first issue I mentioned still persists even with your first two fixes. If my math is correct, with your current solution, you use an `eval_batch_size` of 10 when evaluating the MRPC dataset, your model will be evaluated on 410 examples (the loss from the first two examples will be added twice to `eval_loss `). One way to check this is to log/print the shape of `preds`.
I am sorry if my explanations are confusing. I think my main point is that if I want to evaluate/predict on X examples, I expect it to evaluate/predict on exactly X examples, i.e. `preds.shape = (X, n_tags)` . I actually found this issue when I was using my trained model to predict 929 examples. I was getting 944 predictions, i.e. `predictions.shape = (944, n_tags)`. With your solution, I will be getting 936 examples.
On a related note, may I ask if there are unit tests for TFTrainer and if so, where these are located? I was only able to locate the Trainer tests.<|||||>Also, it seems like `drop_remainder` does not have an effect if you are using `repeat()`.
```python
import tensorflow as tf
dummy_data = tf.data.Dataset.range(8)
```
```python
test_batches = (
dummy_data.repeat()
.batch(3, drop_remainder=True)
.prefetch(tf.data.experimental.AUTOTUNE)
)
steps = 5
for step, batch in enumerate(test_batches):
print(batch)
if step == steps:
break
```
```python
test_batches_nodrop = (
dummy_data.repeat()
.batch(3, drop_remainder=False)
.prefetch(tf.data.experimental.AUTOTUNE)
)
print('Batches when drop_remainder=False')
steps = 5
for step, batch in enumerate(test_batches_nodrop):
print(batch)
if step == steps:
break
```
Both code blocks produce the same output:
```python
tf.Tensor([0 1 2], shape=(3,), dtype=int64)
tf.Tensor([3 4 5], shape=(3,), dtype=int64)
tf.Tensor([6 7 0], shape=(3,), dtype=int64)
tf.Tensor([1 2 3], shape=(3,), dtype=int64)
tf.Tensor([4 5 6], shape=(3,), dtype=int64)
tf.Tensor([7 0 1], shape=(3,), dtype=int64)
```
It does have effect on the behavior of `prediction_loop()` because it changes the `steps` to `steps-1`, but that is due to this line:
`approx = math.floor if self.args.dataloader_drop_last else math.ceil`<|||||>> I am sorry if my explanations are confusing. I think my main point is that if I want to evaluate/predict on X examples, I expect it to evaluate/predict on exactly X examples, i.e. preds.shape = (X, n_tags) .
Yes, but for this you have to use a compliant batch size, the requirements are to drop the last batch if its size is lower than the required batch size. This is the wanted and expected behavior. See my explanation on MRPC, with a batch size of 16 only 400 examples over 408 will be evaluated. If you want to evaluate over all the examples, you can use a batch size of 8.
> Also, it seems like drop_remainder does not have an effect if you are using repeat().
Yes, this is normal, as detailed in the documentation. This is one of the reason why we use the approx variable.
<|||||>I see. Thank you for the explanation, and I am sorry for overlooking the comment about `drop_remainder` not having an effect. In this case, may I ask if it is ok to log the actual number of examples the model is evaluated on, e.g. adding something like: "Number of examples used for evaluation = 400"?
Does `predict()` require a similar behavior, e.g. is `repeat()` required in `get_test_tfdataset()`? Unlike `evaluate()` it is never used in training. If I set `dataloader_drop_last=True` during training then perform prediction on unlabeled examples after, only 928 of my 929 examples are given a prediction.<|||||>> I see. Thank you for the explanation, and I am sorry for overlooking the comment about drop_remainder not having an effect.
No worries, that's ok :)
> In this case, may I ask if it is ok to log the actual number of examples the model is evaluated on, e.g. adding something like: "Number of examples used for evaluation = 400"?
Sure! That would be a good idea!!
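For reference, a quick back-of-the-envelope sketch of where that logged `400` comes from in the MRPC example discussed above:
```python
num_examples = 408      # MRPC validation set, as quoted above
eval_batch_size = 16
evaluated = (num_examples // eval_batch_size) * eval_batch_size
print(evaluated)                 # 400
print(num_examples - evaluated)  # 8 examples dropped with the last incomplete batch
```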
> Does predict() require a similar behavior, e.g. is repeat() required in get_test_tfdataset()? Unlike evaluate() it is never used in training. If I set dataloader_drop_last=True during training then perform prediction on unlabeled examples after, only 928 of my 929 examples are given a prediction.
I agree, no need to use `repeat` for predict that uses the test dataset.<|||||>I have pushed the discussed changes. A couple of final things:
> 2. Replace `metrics["eval_loss"] = self.eval_loss.result().numpy() / steps` by `metrics["eval_loss"] = self.eval_loss.result().numpy() / (steps - 1)` line 356
This does not need to be replaced. Since the loop terminates when `step== steps - 1`, it actually runs for n=steps times.
> 3. Move `self.eval_loss = tf.keras.metrics.Sum()` from the `prediction_loop` method inside the `__init__` method.
This works but I had to call `eval_loss.reset_states()` inside `prediction_loop()` so the sum from the previous calculations is not added.
|
transformers | 9,661 | closed | Fix TF Flaubert and XLM | # What does this PR do?
By doing some experiments on Flaubert and XLM I realized that building a model with a `None` argument forces this value to be `None` when served. Then to fix this issue, the build takes a proper input without a `None` value.
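For illustration only, a hedged sketch of the general idea (the checkpoint, shapes and values below are assumptions, not the PR's actual code): building with concrete dummy tensors instead of `None` keeps those inputs usable once the model is served.
```python
import tensorflow as tf
from transformers import TFFlaubertModel

model = TFFlaubertModel.from_pretrained("flaubert/flaubert_base_cased")

dummy_inputs = {
    "input_ids": tf.constant([[7, 6, 0, 0, 1], [1, 2, 3, 0, 0]], dtype=tf.int32),
    "attention_mask": tf.constant([[1, 1, 0, 0, 1], [1, 1, 1, 0, 0]], dtype=tf.int32),
}
_ = model(dummy_inputs, training=False)  # build/trace with real values rather than None placeholders
```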
| 01-18-2021 14:48:41 | 01-18-2021 14:48:41 | I'm checking what is going wrong as the tests of equivalence should not fail.<|||||>Ok, now it works π |
transformers | 9,660 | closed | run_ner.py crashes when dev or test contain previously unseen labels | ## Environment info
- `transformers` version: 4.2.0dev0
- Platform: Linux-3.10.0-1127.19.1.el7.x86_64-x86_64-with-centos-7.8.2003-Core
- Python version: 3.6.8
- PyTorch version (GPU?): 1.7.1+cu110 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: don't know
### Who can help
@stefan-it
## Information
Model I am using (Bert, XLNet ...): KB/bert-base-swedish-cased
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
I am using run_ner.py to POS-tag a Swedish corpus (TalbankenSBX). If dev and/or test files contain a label that is not present in the train file, the script crashes. The same issue arises with any other corpus I try.
## To reproduce
Steps to reproduce the behavior:
1. Create two toy files with the following contents:
sbx-1-train.json:
```
{"words": ["Vem", "fΓ₯r", "rΓΆsta", "?"], "pos": ["HP.UTR.SIN.IND", "VB.PRS.AKT", "VB.INF.AKT", "MAD"]}
```
sbx-1-dev.json
```
{"words": ["Γ€r", "fΓΆdd", "1950", "eller", "tidigare", ","], "pos": ["VB.PRS.AKT", "PC.PRF.UTR.SIN.IND.NOM", "RG.NOM", "KN", "AB.KOM", "MID"]}
```
2. Run python run_ner.py --model_name_or_path KB/bert-base-swedish-cased --train_file sbx-1-train.json --validation_file sbx-1-dev.json --output_dir sbx1 --do_train --do_eval
This results in:
```
Traceback (most recent call last):
File "run_ner.py", line 412, in <module>
main()
File "run_ner.py", line 303, in main
load_from_cache_file=not data_args.overwrite_cache,
File "/home/sasha/venvs/hugtrans/lib64/python3.6/site-packages/datasets/dataset_dict.py", line 303, in map
for k, dataset in self.items()
File "/home/sasha/venvs/hugtrans/lib64/python3.6/site-packages/datasets/dataset_dict.py", line 303, in <dictcomp>
for k, dataset in self.items()
File "/home/sasha/venvs/hugtrans/lib64/python3.6/site-packages/datasets/arrow_dataset.py", line 1240, in map
update_data = does_function_return_dict(test_inputs, test_indices)
File "/home/sasha/venvs/hugtrans/lib64/python3.6/site-packages/datasets/arrow_dataset.py", line 1211, in does_function_return_dict
function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
File "run_ner.py", line 288, in tokenize_and_align_labels
label_ids.append(label_to_id[label[word_idx]])
KeyError: 'PC.PRF.UTR.SIN.IND.NOM'
```
...where PC.PRF.UTR.SIN.IND.NOM is the tag which is not present in the train set.
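A hedged sketch of one possible workaround (not part of the shipped run_ner.py; `raw_datasets` and the "pos" column name are assumptions for illustration): build the label list from every split so tags that only appear in dev/test still get an id.
```python
def get_all_labels(raw_datasets, label_column="pos"):
    labels = set()
    for split in raw_datasets.values():  # train / validation / test
        for tag_sequence in split[label_column]:
            labels.update(tag_sequence)
    return sorted(labels)

# label_list = get_all_labels(raw_datasets)
# label_to_id = {label: i for i, label in enumerate(label_list)}
```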
## Expected behavior
The script should not crash when encountering unseen tags. | 01-18-2021 14:35:43 | 01-18-2021 14:35:43 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,659 | closed | Wav2Vec2 | # What does this PR do?
Adds Wav2Vec2 from https://github.com/pytorch/fairseq/blob/master/examples/wav2vec/README.md
This PR adds the wav2vec2 Acoustic model to Transformers. The model is different from "conventional" transformer
models since it classifies a raw waveform input (float array) into logits. Therefore the `Wav2Vec2Tokenizer` behaves quite differently from usual tokenizers in that it only pads an input instead of encoding it to token ids.
The fully functional model should be added in three steps:
1) Add the Acoustic model ready to be used for inference (This PR)
2) Add fine-tuning + pretraining functionality to the model (Next PR)
3) Add an example script showing how Wav2Vec2 can be used with a language model
4) Add an Encoder/Decoder version of Wav2Vec2.
# Usage
One can play around with a quick demo here: https://colab.research.google.com/drive/1xaVKQ739-ue0v8IuMZoMzOFc4-NlGQyd?usp=sharing and some usage examples as described on the model cards:
https://huggingface.co/models?filter=wav2vec2
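For convenience, a short inference sketch along the lines of those model cards (the checkpoint name and the local 16 kHz audio file are assumptions, not taken from this PR text):
```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Tokenizer

tokenizer = Wav2Vec2Tokenizer.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

speech, _ = sf.read("sample.flac")  # raw waveform as a float array
input_values = tokenizer(speech, return_tensors="pt").input_values  # padding only, no token ids

logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = tokenizer.batch_decode(predicted_ids)[0]
print(transcription)
```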
# Review
In this PR, no training functionality is added to the model. Because this is quite complex for Wav2Vec2 this will be done in a follow up PR.
It would be great, if we can however already merge this first part which allows to use Wav2Vec2 for inference.
The tokenizer is quite different, so it would be great if @thomwolf @n1t0 can also take a look here.
Since the model expects the raw waveform signal as an input, the name `input_ids` is changed to `input_values` standing for "a tensor of float values" - would be great if you can check this @LysandreJik @sgugger @thomwolf.
# Done:
- [x] load pretrained weight into model
- [x] make sure forward pass yields equal outputs
- [x] successful transcription
- [x] add tokenizer
- [x] Think about how to add the two different architectures `Wav2Vec 2.0 Large (LV-60)`/`Wav2Vec 2.0 Large (LV-60) + Self Training` is different from `Wav2Vec 2.0 Large`/`Wav2Vec 2.0 Base` (layer_norm is switched and no group norm is used)
- [x] add model tests
- [x] add tokenizer tests
- [x] add docstring
- [x] clean config
# Future TODO:
- [ ] Add PreTraining & Fine-Tuning to model
- [ ] Add Encoder Decoder model / CTC decoding
| 01-18-2021 13:28:42 | 01-18-2021 13:28:42 | hi @patrickvonplaten thank you for creating this PR and converting some model from original to test it
I want to test it by converting the XLSR [model](https://github.com/pytorch/fairseq/tree/master/examples/wav2vec) using your `convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py`
with the following command: `python convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py --checkpoint_path /content/xlsr_53_56k.pt --pytorch_dump_folder_path huggingface_model`
I got this error:
```
File "convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py", line 147, in convert_wav2vec2_checkpoint
[checkpoint_path], arg_overrides={"data": dict_path}
File "/usr/local/lib/python3.6/dist-packages/fairseq/checkpoint_utils.py", line 279, in load_model_ensemble_and_task
state = load_checkpoint_to_cpu(filename, arg_overrides)
File "/usr/local/lib/python3.6/dist-packages/fairseq/checkpoint_utils.py", line 231, in load_checkpoint_to_cpu
setattr(args, arg_name, arg_val)
AttributeError: 'NoneType' object has no attribute 'data'
```
Do I need to specify the `--dict_path` argument? If so, where can I get it? Thanks.<|||||>[This issue](https://github.com/pytorch/fairseq/issues/3050) seems to fix my previous problem, but a new error comes up:
```
Feat extract conv layer 0 was initialized from feature_extractor.conv_layers.0.0.weight.
Traceback (most recent call last):
File "convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py", line 162, in <module>
convert_wav2vec2_checkpoint(args.checkpoint_path, args.pytorch_dump_folder_path, args.dict_path)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/grad_mode.py", line 26, in decorate_context
return func(*args, **kwargs)
File "convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py", line 151, in convert_wav2vec2_checkpoint
recursively_load_weights(model, hf_wav2vec)
File "convert.py", line 77, in recursively_load_weights
hf_model.config.feat_extract_norm == "group",
File "convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py", line 112, in load_conv_layer
value.shape == feature_extractor.conv_layers[layer_id].conv.bias.data.shape
AttributeError: 'NoneType' object has no attribute 'data'
```<|||||>> ```
> AttributeError: 'NoneType' object has no attribute 'data'
> ```
Hey @acul3,
thanks for trying out the `XLSR` model! I haven't added it to the conversion script yet - will do it this week!<|||||>thank you for the great idea and work to merge wav2vec2 to transformers. I am wondering:
1. how to use a transformer LM for decoding as Fairseq uses wav2letter's decoder for better accuracy.
2. it seems to be much convenient if the output has a confidence score too<|||||>> thank you for the great idea and work to merge wav2vec2 to transformers. I am wondering:
>
> 1. how to use a transformer LM for decoding as Fairseq uses wav2letter's decoder for better accuracy.
> 2. it seems to be much convenient if the output has a confidence score too
1. I'm also still working on figuring out the best way to do this!
2. Yeah that will be a nice-to-have, but it will require some time to be added.<|||||>@patrickvonplaten Another great PR!
I am wondering whether this current implementation supports self-supervise training of user's custom dataset ?
<|||||>> supervise
Not yet :-) Working on it right now!<|||||>Inspiring feat Patrick! I remember this model was like a puzzle for me the first time I tried to make it work. You've made it incredibly easy to use. Can't wait for the decoders and finetuning<|||||>
> > ```
> > AttributeError: 'NoneType' object has no attribute 'data'
> > ```
>
> Hey @acul3,
>
> thanks for trying out the `XLSR` model! I haven't added it to the conversion script yet - will do it this week!
Hi @patrickvonplaten, is this available yet? Thank you.<|||||>@patrickvonplaten
Strange bug: if I use the self-trained lv60 960h version on CPU, the results are very good.
Using it on CUDA, the results are pretty strange.
I am using the code provided on model card<|||||>> > ```
> > AttributeError: 'NoneType' object has no attribute 'data'
> > ```
>
>
> Hey @acul3,
>
>
> thanks for trying out the `XLSR` model! I haven't added it to the conversion script yet - will do it this week!
I am trying to convert the `XLSR` models too by modifying the config below; it seems that all the weights are used, just as for wav2vec_small.
```
{
"activation_dropout": 0.0,
"apply_spec_augment": true,
"architectures": [
"Wav2Vec2Model"
],
"attention_dropout": 0.0,
"bos_token_id": 1,
"conv_bias": true,
"conv_dim": [
512,
512,
512,
512,
512,
512,
512
],
"conv_kernel": [
10,
3,
3,
3,
3,
2,
2
],
"conv_stride": [
5,
2,
2,
2,
2,
2,
2
],
"do_stable_layer_norm": false,
"eos_token_id": 2,
"feat_extract_activation": "gelu",
"feat_extract_norm": "layer",
"feat_proj_dropout": 0.1,
"final_dropout": 0.0,
"freeze_feat_extract_train": true,
"hidden_act": "gelu",
"hidden_dropout": 0.1,
"hidden_size": 1024,
"initializer_range": 0.02,
"intermediate_size": 4096,
"gradient_checkpointing": true,
"layer_norm_eps": 1e-05,
"layerdrop": 0.0,
"mask_channel_length": 10,
"mask_channel_min_space": 1,
"mask_channel_other": 0.0,
"mask_channel_prob": 0.0,
"mask_channel_selection": "static",
"mask_time_length": 10,
"mask_time_min_space": 1,
"mask_time_other": 0.0,
"mask_time_prob": 0.05,
"mask_time_selection": "static",
"model_type": "wav2vec2",
"no_mask_channel_overlap": false,
"no_mask_time_overlap": false,
"num_attention_heads": 16,
"num_conv_pos_embedding_groups": 16,
"num_conv_pos_embeddings": 128,
"num_feat_extract_layers": 7,
"num_hidden_layers": 24,
"pad_token_id": 0
}
```
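For reference, a minimal sketch (my own, not from the thread) of how a config like the one above is typically loaded when building the model for conversion; `./xlsr` is an assumed local directory containing this `config.json`:

```python
from transformers import Wav2Vec2Config, Wav2Vec2Model

# Reads the config.json shown above from an assumed local directory.
config = Wav2Vec2Config.from_pretrained("./xlsr")

# Randomly initialised Wav2Vec2 model with the XLSR-sized architecture;
# the conversion step then copies the fairseq weights into it.
model = Wav2Vec2Model(config)

print(config.hidden_size, config.num_hidden_layers)  # 1024, 24 per the config above
```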
After converting:
> Unused weights ['quantizer.vars', 'quantizer.weight_proj.weight', 'quantizer.weight_proj.bias', 'project_q.weight', 'project_q.bias', 'layer_norm.weight', 'layer_norm.bias', 'final_proj.weight', 'final_proj.bias']
Nevertheless, the logits are all identical when testing; it seems that I still left something unconverted. Do you have any idea what's going on?
```
import soundfile as sf
from datasets import load_dataset
from transformers import Wav2Vec2Tokenizer, Wav2Vec2Model

tokenizer = Wav2Vec2Tokenizer.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2Model.from_pretrained("./xlsr")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)
input_values = tokenizer(ds["speech"][0], return_tensors="pt").input_values # Batch size 1
logits = model(input_values).last_hidden_state
```
Logits:
```
tensor([[[ 0.2285, 0.6376, 0.0164, ..., 0.0275, 0.1215, -0.2684],
[ 0.2285, 0.6376, 0.0164, ..., 0.0275, 0.1215, -0.2684],
[ 0.2285, 0.6376, 0.0164, ..., 0.0275, 0.1215, -0.2684],
...,
[ 0.2285, 0.6376, 0.0164, ..., 0.0275, 0.1215, -0.2684],
[ 0.2285, 0.6376, 0.0164, ..., 0.0275, 0.1215, -0.2684],
[ 0.2285, 0.6376, 0.0164, ..., 0.0275, 0.1215, -0.2684]]],
grad_fn=<NativeLayerNormBackward>)
```
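As a quick diagnostic (a sketch of my own, reusing `model` and `input_values` from the snippet above, not part of the original report): hidden states that are constant across time steps usually mean some weights were not converted.

```python
import torch

with torch.no_grad():
    hidden = model(input_values).last_hidden_state  # shape: (1, time_steps, hidden_size)

# Standard deviation over the time axis; a value near zero means every time step
# produced the same vector, which points at a conversion problem rather than the input.
print(hidden.std(dim=1).max())
```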
|
transformers | 9,658 | closed | Tokenization | Hi all, help me please. I'm trying to solve the multilingual WiC problem using XLM-R.
I have a word and a sentence that contains this word, possibly in a different form. I want to find the position of the word's token ids in the encoded sentence (see the sketch after this thread).
There are problems with Chinese, because there are often no spaces between symbols, so the tokenizer can encode a pair of symbols differently. | 01-18-2021 12:49:55 | 01-18-2021 12:49:55 | So in short:
- in public open-source projects like this you can say "Hi all" or "Hi folks"; this way you'll address contributors of all genders
- also, do you mind closing this issue and opening a thread on the forum at https://discuss.huggingface.co? We keep the issues for bug reports and feature requests, which this issue is not.
- last note for your future issues on open-source projects: when there is an issue template, as here, you should fill it in; that's typically the first thing the maintainers will ask you for if you didn't do it.
This looks like a nice project; good luck with it and I hope you'll succeed! |
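A rough sketch (my own illustration, not from the thread) of one way to locate a word's tokens via a fast tokenizer's character offsets; it assumes the word appears verbatim in the sentence, so inflected forms and unsegmented Chinese text still need an extra step to find the character span first.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base", use_fast=True)

sentence = "The bank raised interest rates."  # toy example
word = "bank"

start = sentence.index(word)  # character span of the word in the sentence
end = start + len(word)

encoding = tokenizer(sentence, return_offsets_mapping=True)
token_positions = [
    i
    for i, (tok_start, tok_end) in enumerate(encoding["offset_mapping"])
    if tok_start < end and tok_end > start and tok_end > tok_start  # overlap; skips special tokens
]
print(token_positions, [encoding["input_ids"][i] for i in token_positions])
```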
transformers | 9,657 | closed | ModuleAttributeError occurs during Converting TensorFlow Checkpoints (BERT) | ## Environment info
- `transformers` version: 4.1.1
- Platform: Linux-4.15.0-129-generic-x86_64-with-glibc2.10
- Python version: 3.8.3
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): 2.3.1 (True)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
albert, bert, GPT2, XLM: @LysandreJik
## Information
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The task I am working on is:
* Convert TF v1 ckpt to PyTorch
## To reproduce
I tried to convert a TensorFlow checkpoint, but `ModuleAttributeError` occurred.
What I run:
```
****@**** $ transformers-cli convert --model_type bert \
> --tf_checkpoint $MODEL_DIR/model.ckpt \
> --config ****/bert_config.json \
> --pytorch_dump_output $MODEL_DIR/pytorch_model.bin
```
(In this case, `bert_config.json` is in a separate folder, but it corresponds to the `ckpt`.)
Output is:
```
Traceback (most recent call last):
File "/****/.pyenv/versions/anaconda3-2020.07/bin/transformers-cli", line 8, in <module>
sys.exit(main())
File "/****/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/transformers/commands/transformers_cli.py", line 51, in main
service.run()
File "/****/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/transformers/commands/convert.py", line 105, in run
convert_tf_checkpoint_to_pytorch(self._tf_checkpoint, self._config, self._pytorch_dump_output)
File "/****/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/transformers/models/bert/convert_bert_original_tf_checkpoint_to_pytorch.py", line 36, in convert_tf_checkpoint_to_pytorch
load_tf_weights_in_bert(model, config, tf_checkpoint_path)
File "/****/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py", line 155, in load_tf_weights_in_bert
pointer.shape == array.shape
File "/****/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/torch/nn/modules/module.py", line 778, in __getattr__
raise ModuleAttributeError("'{}' object has no attribute '{}'".format(
torch.nn.modules.module.ModuleAttributeError: 'BertEmbeddings' object has no attribute 'shape'
```
## Expected behavior
I think it is not strange that `BertEmbeddings` (an `nn.Module`) doesn't have `shape`.
Is it possible to get such an error depending on the original TensorFlow checkpoint?
In such a case, are there any tips to deal with it?
I really appreciate any help you can provide.
| 01-18-2021 12:37:11 | 01-18-2021 12:37:11 | Hi, how did you obtain your TensorFlow checkpoint? Was it trained with http://github.com/google-research/bert?<|||||>Hi @LysandreJik,
Thank you for the comment.
The TensorFlow checkpoint is not my own but was provided by a researcher. I may be misreading the related paper, but in it the researcher says:
- They fine-tune all the parameters including the BERT and the two additional linear layers.
- They directly used public pretrained parameters of BERT from https://github.com/google-research/bert
From the information in the paper, I think the checkpoint was trained with the repository you linked.<|||||>I'm having a hard time reproducing the issue; the following works:
```
transformers-cli convert --model_type bert \
--tf_checkpoint bert_model.ckpt \
--config bert_config.json \
--pytorch_dump_output pytorch_model.bin
```
on both of these:
```
BERT-Base, Uncased: 12-layer, 768-hidden, 12-heads, 110M parameters
BERT-Large, Uncased: 24-layer, 1024-hidden, 16-heads, 340M parameters
```
Do you know of any difference between those architectures and the one you have?<|||||>Thank you for taking your time to reproduce this issue.
The checkpoint is using `uncased_L-12_H-768_A-12/bert_model.ckpt` as an initial checkpoint for fine-tuning.
Hence, the checkpoint seems to be an `Uncased` model.
The `BertConfig` of the checkpoint says the architecture has the following parameters:
```
12-layer, 768-hidden, 12-heads
```
For your reference, these are the items in the fine-tuned checkpoint folder (which I referred to as `$MODEL_DIR`).
``` sh
****@**** $ ls
checkpoint model.ckpt.data-00000-of-00001 model.ckpt.index model.ckpt.meta
```
I would like to try converting on another checkpoint as well and see if I get the same problem.
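One way to see what the fine-tuned checkpoint actually contains (a sketch of my own, not from the thread) is to list its variables and diff them against the original pre-trained checkpoint; extra task-specific variables are a common reason the conversion script trips up. The paths below are placeholders.

```python
import tensorflow as tf

finetuned = "/path/to/finetuned/model.ckpt"                    # placeholder path
original = "/path/to/uncased_L-12_H-768_A-12/bert_model.ckpt"  # placeholder path

def variable_names(ckpt):
    # tf.train.list_variables returns (name, shape) pairs; optimizer state is
    # ignored here because the conversion script skips it anyway.
    return {
        name
        for name, _ in tf.train.list_variables(ckpt)
        if not name.endswith(("adam_m", "adam_v")) and "global_step" not in name
    }

extra = variable_names(finetuned) - variable_names(original)
print("Variables only present in the fine-tuned checkpoint:")
for name in sorted(extra):
    print(" ", name)
```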
<|||||>```
****@**** $ pwd
/****/uncased_L-12_H-768_A-12
****@**** $ ls
bert_config.json bert_model.ckpt.data-00000-of-00001 bert_model.ckpt.index bert_model.ckpt.meta vocab.txt
****@**** $ transformers-cli convert --model_type bert \
> --tf_checkpoint bert_model.ckpt \
> --config bert_config.json \
> --pytorch_dump_output pytorch_model.bin
```
It worked without any error, and showed `Save PyTorch model to pytorch_model.bin`.
Is it possible that `vocab.txt` needs to be in the same folder as `ckpt`, not in the same folder as `bert_config.json`?
I'm sorry if I'm missing the point.<|||||>So it worked with the second BERT model but not with the first? Do you know of any difference between the first and second?
The `vocab.txt` shouldn't have an impact; this is for the tokenizer and it can be automatically loaded by the `BertTokenizer`<|||||>Yes, it worked with the second one but not with the first one.
- the first BERT model: fine-tuned and provided by a third-party, using the official pre-trained model as an initial point.
- the second BERT model: official pre-trained model provided in https://github.com/google-research/bert
Thank you for telling me that `vocab.txt` is not the cause.
Before I saw your comment, I had tried putting vocab.txt in the same folder, but I still got the same error.
The output during the conversion of the first model is as below.
It seems `bert/embeddings` is skipped.
``` sh
2021-01-19 15:00:49.880931: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
Building PyTorch model from configuration: BertConfig {
"attention_probs_dropout_prob": 0.1,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 0,
"position_embedding_type": "absolute",
"type_vocab_size": 2,
"vocab_size": 30522
}
Converting TensorFlow checkpoint from /****/model.ckpt
Loading TF weight bert/embeddings/LayerNorm/beta with shape [768]
Loading TF weight bert/embeddings/LayerNorm/beta/adam_m with shape [768]
Loading TF weight bert/embeddings/LayerNorm/beta/adam_v with shape [768]
Loading TF weight bert/embeddings/LayerNorm/gamma with shape [768]
Loading TF weight bert/embeddings/LayerNorm/gamma/adam_m with shape [768]
Loading TF weight bert/embeddings/LayerNorm/gamma/adam_v with shape [768]
Loading TF weight bert/embeddings/position_embeddings with shape [512, 768]
Loading TF weight bert/embeddings/position_embeddings/adam_m with shape [512, 768]
Loading TF weight bert/embeddings/position_embeddings/adam_v with shape [512, 768]
Loading TF weight bert/embeddings/relation_embedding with shape [47, 768]
Loading TF weight bert/embeddings/token_type_embeddings with shape [2, 768]
Loading TF weight bert/embeddings/token_type_embeddings/adam_m with shape [2, 768]
Loading TF weight bert/embeddings/token_type_embeddings/adam_v with shape [2, 768]
Loading TF weight bert/embeddings/word_embeddings with shape [30522, 768]
2021-01-19 15:00:58.056232: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 93763584 exceeds 10% of free system memory.
Loading TF weight bert/embeddings/word_embeddings/adam_m with shape [30522, 768]
2021-01-19 15:00:58.912772: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 93763584 exceeds 10% of free system memory.
Loading TF weight bert/embeddings/word_embeddings/adam_v with shape [30522, 768]
2021-01-19 15:00:59.755334: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 93763584 exceeds 10% of free system memory.
Loading TF weight bert/encoder/layer_0/attention/output/LayerNorm/beta with shape [768]
Loading TF weight bert/encoder/layer_0/attention/output/LayerNorm/beta/adam_m with shape [768]
Loading TF weight bert/encoder/layer_0/attention/output/LayerNorm/beta/adam_v with shape [768]
Loading TF weight bert/encoder/layer_0/attention/output/dense/bias with shape [768]
Loading TF weight bert/encoder/layer_0/attention/output/dense/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_0/attention/output/dense/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_0/attention/output/dense/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_0/attention/output/dense/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_0/attention/output/dense/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_0/attention/self/key/bias with shape [768]
Loading TF weight bert/encoder/layer_0/attention/self/key/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_0/attention/self/key/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_0/attention/self/key/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_0/attention/self/key/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_0/attention/self/key/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_0/attention/self/query/bias with shape [768]
Loading TF weight bert/encoder/layer_0/attention/self/query/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_0/attention/self/query/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_0/attention/self/query/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_0/attention/self/query/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_0/attention/self/query/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_0/attention/self/value/bias with shape [768]
Loading TF weight bert/encoder/layer_0/attention/self/value/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_0/attention/self/value/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_0/attention/self/value/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_0/attention/self/value/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_0/attention/self/value/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_0/intermediate/dense/bias with shape [3072]
Loading TF weight bert/encoder/layer_0/intermediate/dense/bias/adam_m with shape [3072]
Loading TF weight bert/encoder/layer_0/intermediate/dense/bias/adam_v with shape [3072]
Loading TF weight bert/encoder/layer_0/intermediate/dense/kernel with shape [768, 3072]
Loading TF weight bert/encoder/layer_0/intermediate/dense/kernel/adam_m with shape [768, 3072]
Loading TF weight bert/encoder/layer_0/intermediate/dense/kernel/adam_v with shape [768, 3072]
Loading TF weight bert/encoder/layer_0/output/LayerNorm/beta with shape [768]
Loading TF weight bert/encoder/layer_0/output/LayerNorm/beta/adam_m with shape [768]
Loading TF weight bert/encoder/layer_0/output/LayerNorm/beta/adam_v with shape [768]
Loading TF weight bert/encoder/layer_0/output/LayerNorm/gamma with shape [768]
Loading TF weight bert/encoder/layer_0/output/LayerNorm/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/layer_0/output/LayerNorm/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/layer_0/output/dense/bias with shape [768]
Loading TF weight bert/encoder/layer_0/output/dense/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_0/output/dense/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_0/output/dense/kernel with shape [3072, 768]
Loading TF weight bert/encoder/layer_0/output/dense/kernel/adam_m with shape [3072, 768]
Loading TF weight bert/encoder/layer_0/output/dense/kernel/adam_v with shape [3072, 768]
Loading TF weight bert/encoder/layer_1/attention/output/LayerNorm/beta with shape [768]
Loading TF weight bert/encoder/layer_1/attention/output/LayerNorm/beta/adam_m with shape [768]
Loading TF weight bert/encoder/layer_1/attention/output/LayerNorm/beta/adam_v with shape [768]
Loading TF weight bert/encoder/layer_1/attention/output/LayerNorm/gamma with shape [768]
Loading TF weight bert/encoder/layer_1/attention/output/LayerNorm/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/layer_1/attention/output/LayerNorm/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/layer_1/attention/output/dense/bias with shape [768]
Loading TF weight bert/encoder/layer_1/attention/output/dense/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_1/attention/output/dense/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_1/attention/output/dense/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_1/attention/output/dense/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_1/attention/output/dense/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_1/attention/self/key/bias with shape [768]
Loading TF weight bert/encoder/layer_1/attention/self/key/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_1/attention/self/key/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_1/attention/self/key/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_1/attention/self/key/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_1/attention/self/key/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_1/attention/self/query/bias with shape [768]
Loading TF weight bert/encoder/layer_1/attention/self/query/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_1/attention/self/query/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_1/attention/self/query/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_1/attention/self/query/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_1/attention/self/query/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_1/attention/self/value/bias with shape [768]
Loading TF weight bert/encoder/layer_1/attention/self/value/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_1/attention/self/value/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_1/attention/self/value/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_1/attention/self/value/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_1/attention/self/value/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_1/intermediate/dense/bias with shape [3072]
Loading TF weight bert/encoder/layer_1/intermediate/dense/bias/adam_m with shape [3072]
Loading TF weight bert/encoder/layer_1/intermediate/dense/bias/adam_v with shape [3072]
Loading TF weight bert/encoder/layer_1/intermediate/dense/kernel with shape [768, 3072]
Loading TF weight bert/encoder/layer_1/intermediate/dense/kernel/adam_m with shape [768, 3072]
Loading TF weight bert/encoder/layer_1/intermediate/dense/kernel/adam_v with shape [768, 3072]
Loading TF weight bert/encoder/layer_1/output/LayerNorm/beta with shape [768]
Loading TF weight bert/encoder/layer_1/output/LayerNorm/beta/adam_m with shape [768]
Loading TF weight bert/encoder/layer_1/output/LayerNorm/beta/adam_v with shape [768]
Loading TF weight bert/encoder/layer_1/output/LayerNorm/gamma with shape [768]
Loading TF weight bert/encoder/layer_1/output/LayerNorm/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/layer_1/output/LayerNorm/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/layer_1/output/dense/bias with shape [768]
Loading TF weight bert/encoder/layer_1/output/dense/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_1/output/dense/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_1/output/dense/kernel with shape [3072, 768]
Loading TF weight bert/encoder/layer_1/output/dense/kernel/adam_m with shape [3072, 768]
Loading TF weight bert/encoder/layer_1/output/dense/kernel/adam_v with shape [3072, 768]
Loading TF weight bert/encoder/layer_10/attention/output/LayerNorm/beta with shape [768]
Loading TF weight bert/encoder/layer_10/attention/output/LayerNorm/beta/adam_m with shape [768]
Loading TF weight bert/encoder/layer_10/attention/output/LayerNorm/beta/adam_v with shape [768]
Loading TF weight bert/encoder/layer_10/attention/output/LayerNorm/gamma with shape [768]
Loading TF weight bert/encoder/layer_10/attention/output/LayerNorm/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/layer_10/attention/output/LayerNorm/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/layer_10/attention/output/dense/bias with shape [768]
Loading TF weight bert/encoder/layer_10/attention/output/dense/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_10/attention/output/dense/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_10/attention/output/dense/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_10/attention/output/dense/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_10/attention/output/dense/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_10/attention/self/key/bias with shape [768]
Loading TF weight bert/encoder/layer_10/attention/self/key/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_10/attention/self/key/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_10/attention/self/key/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_10/attention/self/key/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_10/attention/self/key/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_10/attention/self/query/bias with shape [768]
Loading TF weight bert/encoder/layer_10/attention/self/query/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_10/attention/self/query/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_10/attention/self/query/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_10/attention/self/query/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_10/attention/self/query/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_10/attention/self/value/bias with shape [768]
Loading TF weight bert/encoder/layer_10/attention/self/value/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_10/attention/self/value/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_10/attention/self/value/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_10/attention/self/value/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_10/attention/self/value/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_10/intermediate/dense/bias with shape [3072]
Loading TF weight bert/encoder/layer_10/intermediate/dense/bias/adam_m with shape [3072]
Loading TF weight bert/encoder/layer_10/intermediate/dense/bias/adam_v with shape [3072]
Loading TF weight bert/encoder/layer_10/intermediate/dense/kernel with shape [768, 3072]
Loading TF weight bert/encoder/layer_10/intermediate/dense/kernel/adam_m with shape [768, 3072]
Loading TF weight bert/encoder/layer_10/intermediate/dense/kernel/adam_v with shape [768, 3072]
Loading TF weight bert/encoder/layer_10/output/LayerNorm/beta with shape [768]
Loading TF weight bert/encoder/layer_10/output/LayerNorm/beta/adam_m with shape [768]
Loading TF weight bert/encoder/layer_10/output/LayerNorm/beta/adam_v with shape [768]
Loading TF weight bert/encoder/layer_10/output/LayerNorm/gamma with shape [768]
Loading TF weight bert/encoder/layer_10/output/LayerNorm/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/layer_10/output/LayerNorm/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/layer_10/output/dense/bias with shape [768]
Loading TF weight bert/encoder/layer_10/output/dense/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_10/output/dense/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_10/output/dense/kernel with shape [3072, 768]
Loading TF weight bert/encoder/layer_10/output/dense/kernel/adam_m with shape [3072, 768]
Loading TF weight bert/encoder/layer_10/output/dense/kernel/adam_v with shape [3072, 768]
Loading TF weight bert/encoder/layer_11/attention/output/LayerNorm/beta with shape [768]
Loading TF weight bert/encoder/layer_11/attention/output/LayerNorm/beta/adam_m with shape [768]
Loading TF weight bert/encoder/layer_11/attention/output/LayerNorm/beta/adam_v with shape [768]
Loading TF weight bert/encoder/layer_11/attention/output/LayerNorm/gamma with shape [768]
Loading TF weight bert/encoder/layer_11/attention/output/LayerNorm/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/layer_11/attention/output/LayerNorm/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/layer_11/attention/output/dense/bias with shape [768]
Loading TF weight bert/encoder/layer_11/attention/output/dense/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_11/attention/output/dense/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_11/attention/output/dense/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_11/attention/output/dense/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_11/attention/output/dense/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_11/attention/self/key/bias with shape [768]
Loading TF weight bert/encoder/layer_11/attention/self/key/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_11/attention/self/key/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_11/attention/self/key/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_11/attention/self/key/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_11/attention/self/key/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_11/attention/self/query/bias with shape [768]
Loading TF weight bert/encoder/layer_11/attention/self/query/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_11/attention/self/query/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_11/attention/self/query/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_11/attention/self/query/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_11/attention/self/query/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_11/attention/self/value/bias with shape [768]
Loading TF weight bert/encoder/layer_11/attention/self/value/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_11/attention/self/value/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_11/attention/self/value/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_11/attention/self/value/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_11/attention/self/value/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_11/intermediate/dense/bias with shape [3072]
Loading TF weight bert/encoder/layer_11/intermediate/dense/bias/adam_m with shape [3072]
Loading TF weight bert/encoder/layer_11/intermediate/dense/bias/adam_v with shape [3072]
Loading TF weight bert/encoder/layer_11/intermediate/dense/kernel with shape [768, 3072]
Loading TF weight bert/encoder/layer_11/intermediate/dense/kernel/adam_m with shape [768, 3072]
Loading TF weight bert/encoder/layer_11/intermediate/dense/kernel/adam_v with shape [768, 3072]
Loading TF weight bert/encoder/layer_11/output/LayerNorm/beta with shape [768]
Loading TF weight bert/encoder/layer_11/output/LayerNorm/beta/adam_m with shape [768]
Loading TF weight bert/encoder/layer_11/output/LayerNorm/beta/adam_v with shape [768]
Loading TF weight bert/encoder/layer_11/output/LayerNorm/gamma with shape [768]
Loading TF weight bert/encoder/layer_11/output/LayerNorm/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/layer_11/output/LayerNorm/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/layer_11/output/dense/bias with shape [768]
Loading TF weight bert/encoder/layer_11/output/dense/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_11/output/dense/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_11/output/dense/kernel with shape [3072, 768]
Loading TF weight bert/encoder/layer_11/output/dense/kernel/adam_m with shape [3072, 768]
Loading TF weight bert/encoder/layer_11/output/dense/kernel/adam_v with shape [3072, 768]
Loading TF weight bert/encoder/layer_2/attention/output/LayerNorm/beta with shape [768]
Loading TF weight bert/encoder/layer_2/attention/output/LayerNorm/beta/adam_m with shape [768]
Loading TF weight bert/encoder/layer_2/attention/output/LayerNorm/beta/adam_v with shape [768]
Loading TF weight bert/encoder/layer_2/attention/output/LayerNorm/gamma with shape [768]
Loading TF weight bert/encoder/layer_2/attention/output/LayerNorm/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/layer_2/attention/output/LayerNorm/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/layer_2/attention/output/dense/bias with shape [768]
Loading TF weight bert/encoder/layer_2/attention/output/dense/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_2/attention/output/dense/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_2/attention/output/dense/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_2/attention/output/dense/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_2/attention/output/dense/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_2/attention/self/key/bias with shape [768]
Loading TF weight bert/encoder/layer_2/attention/self/key/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_2/attention/self/key/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_2/attention/self/key/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_2/attention/self/key/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_2/attention/self/key/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_2/attention/self/query/bias with shape [768]
Loading TF weight bert/encoder/layer_2/attention/self/query/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_2/attention/self/query/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_2/attention/self/query/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_2/attention/self/query/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_2/attention/self/query/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_2/attention/self/value/bias with shape [768]
Loading TF weight bert/encoder/layer_2/attention/self/value/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_2/attention/self/value/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_2/attention/self/value/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_2/attention/self/value/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_2/attention/self/value/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_2/intermediate/dense/bias with shape [3072]
Loading TF weight bert/encoder/layer_2/intermediate/dense/bias/adam_m with shape [3072]
Loading TF weight bert/encoder/layer_2/intermediate/dense/bias/adam_v with shape [3072]
Loading TF weight bert/encoder/layer_2/intermediate/dense/kernel with shape [768, 3072]
Loading TF weight bert/encoder/layer_2/intermediate/dense/kernel/adam_m with shape [768, 3072]
Loading TF weight bert/encoder/layer_2/intermediate/dense/kernel/adam_v with shape [768, 3072]
Loading TF weight bert/encoder/layer_2/output/LayerNorm/beta with shape [768]
Loading TF weight bert/encoder/layer_2/output/LayerNorm/beta/adam_m with shape [768]
Loading TF weight bert/encoder/layer_2/output/LayerNorm/beta/adam_v with shape [768]
Loading TF weight bert/encoder/layer_2/output/LayerNorm/gamma with shape [768]
Loading TF weight bert/encoder/layer_2/output/LayerNorm/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/layer_2/output/LayerNorm/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/layer_2/output/dense/bias with shape [768]
Loading TF weight bert/encoder/layer_2/output/dense/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_2/output/dense/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_2/output/dense/kernel with shape [3072, 768]
Loading TF weight bert/encoder/layer_2/output/dense/kernel/adam_m with shape [3072, 768]
Loading TF weight bert/encoder/layer_2/output/dense/kernel/adam_v with shape [3072, 768]
Loading TF weight bert/encoder/layer_3/attention/output/LayerNorm/beta with shape [768]
Loading TF weight bert/encoder/layer_3/attention/output/LayerNorm/beta/adam_m with shape [768]
Loading TF weight bert/encoder/layer_3/attention/output/LayerNorm/beta/adam_v with shape [768]
Loading TF weight bert/encoder/layer_3/attention/output/LayerNorm/gamma with shape [768]
Loading TF weight bert/encoder/layer_3/attention/output/LayerNorm/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/layer_3/attention/output/LayerNorm/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/layer_3/attention/output/dense/bias with shape [768]
Loading TF weight bert/encoder/layer_3/attention/output/dense/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_3/attention/output/dense/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_3/attention/output/dense/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_3/attention/output/dense/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_3/attention/output/dense/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_3/attention/self/key/bias with shape [768]
Loading TF weight bert/encoder/layer_3/attention/self/key/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_3/attention/self/key/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_3/attention/self/key/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_3/attention/self/key/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_3/attention/self/key/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_3/attention/self/query/bias with shape [768]
Loading TF weight bert/encoder/layer_3/attention/self/query/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_3/attention/self/query/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_3/attention/self/query/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_3/attention/self/query/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_3/attention/self/query/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_3/attention/self/value/bias with shape [768]
Loading TF weight bert/encoder/layer_3/attention/self/value/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_3/attention/self/value/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_3/attention/self/value/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_3/attention/self/value/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_3/attention/self/value/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_3/intermediate/dense/bias with shape [3072]
Loading TF weight bert/encoder/layer_3/intermediate/dense/bias/adam_m with shape [3072]
Loading TF weight bert/encoder/layer_3/intermediate/dense/bias/adam_v with shape [3072]
Loading TF weight bert/encoder/layer_3/intermediate/dense/kernel with shape [768, 3072]
Loading TF weight bert/encoder/layer_3/intermediate/dense/kernel/adam_m with shape [768, 3072]
Loading TF weight bert/encoder/layer_3/intermediate/dense/kernel/adam_v with shape [768, 3072]
Loading TF weight bert/encoder/layer_3/output/LayerNorm/beta with shape [768]
Loading TF weight bert/encoder/layer_3/output/LayerNorm/beta/adam_m with shape [768]
Loading TF weight bert/encoder/layer_3/output/LayerNorm/beta/adam_v with shape [768]
Loading TF weight bert/encoder/layer_3/output/LayerNorm/gamma with shape [768]
Loading TF weight bert/encoder/layer_3/output/LayerNorm/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/layer_3/output/LayerNorm/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/layer_3/output/dense/bias with shape [768]
Loading TF weight bert/encoder/layer_3/output/dense/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_3/output/dense/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_3/output/dense/kernel with shape [3072, 768]
Loading TF weight bert/encoder/layer_3/output/dense/kernel/adam_m with shape [3072, 768]
Loading TF weight bert/encoder/layer_3/output/dense/kernel/adam_v with shape [3072, 768]
Loading TF weight bert/encoder/layer_4/attention/output/LayerNorm/beta with shape [768]
Loading TF weight bert/encoder/layer_4/attention/output/LayerNorm/beta/adam_m with shape [768]
Loading TF weight bert/encoder/layer_4/attention/output/LayerNorm/beta/adam_v with shape [768]
Loading TF weight bert/encoder/layer_4/attention/output/LayerNorm/gamma with shape [768]
Loading TF weight bert/encoder/layer_4/attention/output/LayerNorm/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/layer_4/attention/output/LayerNorm/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/layer_4/attention/output/dense/bias with shape [768]
Loading TF weight bert/encoder/layer_4/attention/output/dense/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_4/attention/output/dense/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_4/attention/output/dense/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_4/attention/output/dense/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_4/attention/output/dense/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_4/attention/self/key/bias with shape [768]
Loading TF weight bert/encoder/layer_4/attention/self/key/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_4/attention/self/key/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_4/attention/self/key/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_4/attention/self/key/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_4/attention/self/key/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_4/attention/self/query/bias with shape [768]
Loading TF weight bert/encoder/layer_4/attention/self/query/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_4/attention/self/query/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_4/attention/self/query/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_4/attention/self/query/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_4/attention/self/query/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_4/attention/self/value/bias with shape [768]
Loading TF weight bert/encoder/layer_4/attention/self/value/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_4/attention/self/value/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_4/attention/self/value/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_4/attention/self/value/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_4/attention/self/value/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_4/intermediate/dense/bias with shape [3072]
Loading TF weight bert/encoder/layer_4/intermediate/dense/bias/adam_m with shape [3072]
Loading TF weight bert/encoder/layer_4/intermediate/dense/bias/adam_v with shape [3072]
Loading TF weight bert/encoder/layer_4/intermediate/dense/kernel with shape [768, 3072]
Loading TF weight bert/encoder/layer_4/intermediate/dense/kernel/adam_m with shape [768, 3072]
Loading TF weight bert/encoder/layer_4/intermediate/dense/kernel/adam_v with shape [768, 3072]
Loading TF weight bert/encoder/layer_4/output/LayerNorm/beta with shape [768]
Loading TF weight bert/encoder/layer_4/output/LayerNorm/beta/adam_m with shape [768]
Loading TF weight bert/encoder/layer_4/output/LayerNorm/beta/adam_v with shape [768]
Loading TF weight bert/encoder/layer_4/output/LayerNorm/gamma with shape [768]
Loading TF weight bert/encoder/layer_4/output/LayerNorm/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/layer_4/output/LayerNorm/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/layer_4/output/dense/bias with shape [768]
Loading TF weight bert/encoder/layer_4/output/dense/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_4/output/dense/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_4/output/dense/kernel with shape [3072, 768]
Loading TF weight bert/encoder/layer_4/output/dense/kernel/adam_m with shape [3072, 768]
Loading TF weight bert/encoder/layer_4/output/dense/kernel/adam_v with shape [3072, 768]
Loading TF weight bert/encoder/layer_5/attention/output/LayerNorm/beta with shape [768]
Loading TF weight bert/encoder/layer_5/attention/output/LayerNorm/beta/adam_m with shape [768]
Loading TF weight bert/encoder/layer_5/attention/output/LayerNorm/beta/adam_v with shape [768]
Loading TF weight bert/encoder/layer_5/attention/output/LayerNorm/gamma with shape [768]
Loading TF weight bert/encoder/layer_5/attention/output/LayerNorm/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/layer_5/attention/output/LayerNorm/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/layer_5/attention/output/dense/bias with shape [768]
Loading TF weight bert/encoder/layer_5/attention/output/dense/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_5/attention/output/dense/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_5/attention/output/dense/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_5/attention/output/dense/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_5/attention/output/dense/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_5/attention/self/key/bias with shape [768]
Loading TF weight bert/encoder/layer_5/attention/self/key/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_5/attention/self/key/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_5/attention/self/key/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_5/attention/self/key/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_5/attention/self/key/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_5/attention/self/query/bias with shape [768]
Loading TF weight bert/encoder/layer_5/attention/self/query/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_5/attention/self/query/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_5/attention/self/query/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_5/attention/self/query/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_5/attention/self/query/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_5/attention/self/value/bias with shape [768]
Loading TF weight bert/encoder/layer_5/attention/self/value/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_5/attention/self/value/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_5/attention/self/value/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_5/attention/self/value/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_5/attention/self/value/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_5/intermediate/dense/bias with shape [3072]
Loading TF weight bert/encoder/layer_5/intermediate/dense/bias/adam_m with shape [3072]
Loading TF weight bert/encoder/layer_5/intermediate/dense/bias/adam_v with shape [3072]
Loading TF weight bert/encoder/layer_5/intermediate/dense/kernel with shape [768, 3072]
Loading TF weight bert/encoder/layer_5/intermediate/dense/kernel/adam_m with shape [768, 3072]
Loading TF weight bert/encoder/layer_5/intermediate/dense/kernel/adam_v with shape [768, 3072]
Loading TF weight bert/encoder/layer_5/output/LayerNorm/beta with shape [768]
Loading TF weight bert/encoder/layer_5/output/LayerNorm/beta/adam_m with shape [768]
Loading TF weight bert/encoder/layer_5/output/LayerNorm/beta/adam_v with shape [768]
Loading TF weight bert/encoder/layer_5/output/LayerNorm/gamma with shape [768]
Loading TF weight bert/encoder/layer_5/output/LayerNorm/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/layer_5/output/LayerNorm/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/layer_5/output/dense/bias with shape [768]
Loading TF weight bert/encoder/layer_5/output/dense/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_5/output/dense/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_5/output/dense/kernel with shape [3072, 768]
Loading TF weight bert/encoder/layer_5/output/dense/kernel/adam_m with shape [3072, 768]
Loading TF weight bert/encoder/layer_5/output/dense/kernel/adam_v with shape [3072, 768]
Loading TF weight bert/encoder/layer_6/attention/output/LayerNorm/beta with shape [768]
Loading TF weight bert/encoder/layer_6/attention/output/LayerNorm/beta/adam_m with shape [768]
Loading TF weight bert/encoder/layer_6/attention/output/LayerNorm/beta/adam_v with shape [768]
Loading TF weight bert/encoder/layer_6/attention/output/LayerNorm/gamma with shape [768]
Loading TF weight bert/encoder/layer_6/attention/output/LayerNorm/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/layer_6/attention/output/LayerNorm/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/layer_6/attention/output/dense/bias with shape [768]
Loading TF weight bert/encoder/layer_6/attention/output/dense/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_6/attention/output/dense/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_6/attention/output/dense/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_6/attention/output/dense/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_6/attention/output/dense/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_6/attention/self/key/bias with shape [768]
Loading TF weight bert/encoder/layer_6/attention/self/key/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_6/attention/self/key/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_6/attention/self/key/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_6/attention/self/key/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_6/attention/self/key/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_6/attention/self/query/bias with shape [768]
Loading TF weight bert/encoder/layer_6/attention/self/query/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_6/attention/self/query/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_6/attention/self/query/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_6/attention/self/query/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_6/attention/self/query/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_6/attention/self/value/bias with shape [768]
Loading TF weight bert/encoder/layer_6/attention/self/value/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_6/attention/self/value/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_6/attention/self/value/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_6/attention/self/value/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_6/attention/self/value/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_6/intermediate/dense/bias with shape [3072]
Loading TF weight bert/encoder/layer_6/intermediate/dense/bias/adam_m with shape [3072]
Loading TF weight bert/encoder/layer_6/intermediate/dense/bias/adam_v with shape [3072]
Loading TF weight bert/encoder/layer_6/intermediate/dense/kernel with shape [768, 3072]
Loading TF weight bert/encoder/layer_6/intermediate/dense/kernel/adam_m with shape [768, 3072]
Loading TF weight bert/encoder/layer_6/intermediate/dense/kernel/adam_v with shape [768, 3072]
Loading TF weight bert/encoder/layer_6/output/LayerNorm/beta with shape [768]
Loading TF weight bert/encoder/layer_6/output/LayerNorm/beta/adam_m with shape [768]
Loading TF weight bert/encoder/layer_6/output/LayerNorm/beta/adam_v with shape [768]
Loading TF weight bert/encoder/layer_6/output/LayerNorm/gamma with shape [768]
Loading TF weight bert/encoder/layer_6/output/LayerNorm/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/layer_6/output/LayerNorm/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/layer_6/output/dense/bias with shape [768]
Loading TF weight bert/encoder/layer_6/output/dense/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_6/output/dense/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_6/output/dense/kernel with shape [3072, 768]
Loading TF weight bert/encoder/layer_6/output/dense/kernel/adam_m with shape [3072, 768]
Loading TF weight bert/encoder/layer_6/output/dense/kernel/adam_v with shape [3072, 768]
Loading TF weight bert/encoder/layer_7/attention/output/LayerNorm/beta with shape [768]
Loading TF weight bert/encoder/layer_7/attention/output/LayerNorm/beta/adam_m with shape [768]
Loading TF weight bert/encoder/layer_7/attention/output/LayerNorm/beta/adam_v with shape [768]
Loading TF weight bert/encoder/layer_7/attention/output/LayerNorm/gamma with shape [768]
Loading TF weight bert/encoder/layer_7/attention/output/LayerNorm/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/layer_7/attention/output/LayerNorm/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/layer_7/attention/output/dense/bias with shape [768]
Loading TF weight bert/encoder/layer_7/attention/output/dense/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_7/attention/output/dense/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_7/attention/output/dense/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_7/attention/output/dense/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_7/attention/output/dense/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_7/attention/self/key/bias with shape [768]
Loading TF weight bert/encoder/layer_7/attention/self/key/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_7/attention/self/key/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_7/attention/self/key/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_7/attention/self/key/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_7/attention/self/key/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_7/attention/self/query/bias with shape [768]
Loading TF weight bert/encoder/layer_7/attention/self/query/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_7/attention/self/query/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_7/attention/self/query/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_7/attention/self/query/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_7/attention/self/query/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_7/attention/self/value/bias with shape [768]
Loading TF weight bert/encoder/layer_7/attention/self/value/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_7/attention/self/value/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_7/attention/self/value/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_7/attention/self/value/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_7/attention/self/value/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_7/intermediate/dense/bias with shape [3072]
Loading TF weight bert/encoder/layer_7/intermediate/dense/bias/adam_m with shape [3072]
Loading TF weight bert/encoder/layer_7/intermediate/dense/bias/adam_v with shape [3072]
Loading TF weight bert/encoder/layer_7/intermediate/dense/kernel with shape [768, 3072]
Loading TF weight bert/encoder/layer_7/intermediate/dense/kernel/adam_m with shape [768, 3072]
Loading TF weight bert/encoder/layer_7/intermediate/dense/kernel/adam_v with shape [768, 3072]
Loading TF weight bert/encoder/layer_7/output/LayerNorm/beta with shape [768]
Loading TF weight bert/encoder/layer_7/output/LayerNorm/beta/adam_m with shape [768]
Loading TF weight bert/encoder/layer_7/output/LayerNorm/beta/adam_v with shape [768]
Loading TF weight bert/encoder/layer_7/output/LayerNorm/gamma with shape [768]
Loading TF weight bert/encoder/layer_7/output/LayerNorm/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/layer_7/output/LayerNorm/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/layer_7/output/dense/bias with shape [768]
Loading TF weight bert/encoder/layer_7/output/dense/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_7/output/dense/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_7/output/dense/kernel with shape [3072, 768]
Loading TF weight bert/encoder/layer_7/output/dense/kernel/adam_m with shape [3072, 768]
Loading TF weight bert/encoder/layer_7/output/dense/kernel/adam_v with shape [3072, 768]
Loading TF weight bert/encoder/layer_8/attention/output/LayerNorm/beta with shape [768]
Loading TF weight bert/encoder/layer_8/attention/output/LayerNorm/beta/adam_m with shape [768]
Loading TF weight bert/encoder/layer_8/attention/output/LayerNorm/beta/adam_v with shape [768]
Loading TF weight bert/encoder/layer_8/attention/output/LayerNorm/gamma with shape [768]
Loading TF weight bert/encoder/layer_8/attention/output/LayerNorm/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/layer_8/attention/output/LayerNorm/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/layer_8/attention/output/dense/bias with shape [768]
Loading TF weight bert/encoder/layer_8/attention/output/dense/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_8/attention/output/dense/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_8/attention/output/dense/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_8/attention/output/dense/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_8/attention/output/dense/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_8/attention/self/key/bias with shape [768]
Loading TF weight bert/encoder/layer_8/attention/self/key/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_8/attention/self/key/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_8/attention/self/key/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_8/attention/self/key/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_8/attention/self/key/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_8/attention/self/query/bias with shape [768]
Loading TF weight bert/encoder/layer_8/attention/self/query/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_8/attention/self/query/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_8/attention/self/query/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_8/attention/self/query/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_8/attention/self/query/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_8/attention/self/value/bias with shape [768]
Loading TF weight bert/encoder/layer_8/attention/self/value/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_8/attention/self/value/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_8/attention/self/value/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_8/attention/self/value/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_8/attention/self/value/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_8/intermediate/dense/bias with shape [3072]
Loading TF weight bert/encoder/layer_8/intermediate/dense/bias/adam_m with shape [3072]
Loading TF weight bert/encoder/layer_8/intermediate/dense/bias/adam_v with shape [3072]
Loading TF weight bert/encoder/layer_8/intermediate/dense/kernel with shape [768, 3072]
Loading TF weight bert/encoder/layer_8/intermediate/dense/kernel/adam_m with shape [768, 3072]
Loading TF weight bert/encoder/layer_8/intermediate/dense/kernel/adam_v with shape [768, 3072]
Loading TF weight bert/encoder/layer_8/output/LayerNorm/beta with shape [768]
Loading TF weight bert/encoder/layer_8/output/LayerNorm/beta/adam_m with shape [768]
Loading TF weight bert/encoder/layer_8/output/LayerNorm/beta/adam_v with shape [768]
Loading TF weight bert/encoder/layer_8/output/LayerNorm/gamma with shape [768]
Loading TF weight bert/encoder/layer_8/output/LayerNorm/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/layer_8/output/LayerNorm/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/layer_8/output/dense/bias with shape [768]
Loading TF weight bert/encoder/layer_8/output/dense/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_8/output/dense/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_8/output/dense/kernel with shape [3072, 768]
Loading TF weight bert/encoder/layer_8/output/dense/kernel/adam_m with shape [3072, 768]
Loading TF weight bert/encoder/layer_8/output/dense/kernel/adam_v with shape [3072, 768]
Loading TF weight bert/encoder/layer_9/attention/output/LayerNorm/beta with shape [768]
Loading TF weight bert/encoder/layer_9/attention/output/LayerNorm/beta/adam_m with shape [768]
Loading TF weight bert/encoder/layer_9/attention/output/LayerNorm/beta/adam_v with shape [768]
Loading TF weight bert/encoder/layer_9/attention/output/LayerNorm/gamma with shape [768]
Loading TF weight bert/encoder/layer_9/attention/output/LayerNorm/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/layer_9/attention/output/LayerNorm/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/layer_9/attention/output/dense/bias with shape [768]
Loading TF weight bert/encoder/layer_9/attention/output/dense/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_9/attention/output/dense/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_9/attention/output/dense/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_9/attention/output/dense/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_9/attention/output/dense/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_9/attention/self/key/bias with shape [768]
Loading TF weight bert/encoder/layer_9/attention/self/key/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_9/attention/self/key/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_9/attention/self/key/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_9/attention/self/key/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_9/attention/self/key/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_9/attention/self/query/bias with shape [768]
Loading TF weight bert/encoder/layer_9/attention/self/query/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_9/attention/self/query/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_9/attention/self/query/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_9/attention/self/query/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_9/attention/self/query/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_9/attention/self/value/bias with shape [768]
Loading TF weight bert/encoder/layer_9/attention/self/value/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_9/attention/self/value/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_9/attention/self/value/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_9/attention/self/value/kernel/adam_m with shape [768, 768]
Loading TF weight bert/encoder/layer_9/attention/self/value/kernel/adam_v with shape [768, 768]
Loading TF weight bert/encoder/layer_9/intermediate/dense/bias with shape [3072]
Loading TF weight bert/encoder/layer_9/intermediate/dense/bias/adam_m with shape [3072]
Loading TF weight bert/encoder/layer_9/intermediate/dense/bias/adam_v with shape [3072]
Loading TF weight bert/encoder/layer_9/intermediate/dense/kernel with shape [768, 3072]
Loading TF weight bert/encoder/layer_9/intermediate/dense/kernel/adam_m with shape [768, 3072]
Loading TF weight bert/encoder/layer_9/intermediate/dense/kernel/adam_v with shape [768, 3072]
Loading TF weight bert/encoder/layer_9/output/LayerNorm/beta with shape [768]
Loading TF weight bert/encoder/layer_9/output/LayerNorm/beta/adam_m with shape [768]
Loading TF weight bert/encoder/layer_9/output/LayerNorm/beta/adam_v with shape [768]
Loading TF weight bert/encoder/layer_9/output/LayerNorm/gamma with shape [768]
Loading TF weight bert/encoder/layer_9/output/LayerNorm/gamma/adam_m with shape [768]
Loading TF weight bert/encoder/layer_9/output/LayerNorm/gamma/adam_v with shape [768]
Loading TF weight bert/encoder/layer_9/output/dense/bias with shape [768]
Loading TF weight bert/encoder/layer_9/output/dense/bias/adam_m with shape [768]
Loading TF weight bert/encoder/layer_9/output/dense/bias/adam_v with shape [768]
Loading TF weight bert/encoder/layer_9/output/dense/kernel with shape [3072, 768]
Loading TF weight bert/encoder/layer_9/output/dense/kernel/adam_m with shape [3072, 768]
Loading TF weight bert/encoder/layer_9/output/dense/kernel/adam_v with shape [3072, 768]
Loading TF weight bert/final_output/token_score/bias with shape [3]
Loading TF weight bert/final_output/token_score/kernel with shape [768, 3]
Loading TF weight bert/pooler/dense/bias with shape [768]
Loading TF weight bert/pooler/dense/bias/adam_m with shape [768]
Loading TF weight bert/pooler/dense/bias/adam_v with shape [768]
Loading TF weight bert/pooler/dense/kernel with shape [768, 768]
Loading TF weight bert/pooler/dense/kernel/adam_m with shape [768, 768]
Loading TF weight bert/pooler/dense/kernel/adam_v with shape [768, 768]
Loading TF weight bert/relation/bias with shape [768]
Loading TF weight bert/relation/kernel with shape [768, 768]
Loading TF weight global_step with shape []
Loading TF weight loss/cls/predictions/output_bias with shape [30522]
Loading TF weight loss/cls/predictions/output_bias/adam_m with shape [30522]
Loading TF weight loss/cls/predictions/output_bias/adam_v with shape [30522]
Loading TF weight loss/cls/predictions/transform/LayerNorm/beta with shape [768]
Loading TF weight loss/cls/predictions/transform/LayerNorm/beta/adam_m with shape [768]
Loading TF weight loss/cls/predictions/transform/LayerNorm/beta/adam_v with shape [768]
Loading TF weight loss/cls/predictions/transform/LayerNorm/gamma with shape [768]
Loading TF weight loss/cls/predictions/transform/LayerNorm/gamma/adam_m with shape [768]
Loading TF weight loss/cls/predictions/transform/LayerNorm/gamma/adam_v with shape [768]
Loading TF weight loss/cls/predictions/transform/dense/bias with shape [768]
Loading TF weight loss/cls/predictions/transform/dense/bias/adam_m with shape [768]
Loading TF weight loss/cls/predictions/transform/dense/bias/adam_v with shape [768]
Loading TF weight loss/cls/predictions/transform/dense/kernel with shape [768, 768]
Loading TF weight loss/cls/predictions/transform/dense/kernel/adam_m with shape [768, 768]
Loading TF weight loss/cls/predictions/transform/dense/kernel/adam_v with shape [768, 768]
Loading TF weight output_bias with shape [2]
Loading TF weight output_bias/adam_m with shape [2]
Loading TF weight output_bias/adam_v with shape [2]
Loading TF weight output_weights with shape [2, 768]
Loading TF weight output_weights/adam_m with shape [2, 768]
Loading TF weight output_weights/adam_v with shape [2, 768]
Initialize PyTorch weight ['bert', 'embeddings', 'LayerNorm', 'beta']
Skipping bert/embeddings/LayerNorm/beta/adam_m
Skipping bert/embeddings/LayerNorm/beta/adam_v
Initialize PyTorch weight ['bert', 'embeddings', 'LayerNorm', 'gamma']
Skipping bert/embeddings/LayerNorm/gamma/adam_m
Skipping bert/embeddings/LayerNorm/gamma/adam_v
Initialize PyTorch weight ['bert', 'embeddings', 'position_embeddings']
Skipping bert/embeddings/position_embeddings/adam_m
Skipping bert/embeddings/position_embeddings/adam_v
Skipping bert/embeddings/relation_embedding
Traceback (most recent call last):
File "/****/.pyenv/versions/anaconda3-2020.07/bin/transformers-cli", line 8, in <module>
sys.exit(main())
File "/****/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/transformers/commands/transformers_cli.py", line 51, in main
service.run()
File "/****/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/transformers/commands/convert.py", line 105, in run
convert_tf_checkpoint_to_pytorch(self._tf_checkpoint, self._config, self._pytorch_dump_output)
File "/****/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/transformers/models/bert/convert_bert_original_tf_checkpoint_to_pytorch.py", line 36, in convert_tf_checkpoint_to_pytorch
load_tf_weights_in_bert(model, config, tf_checkpoint_path)
File "/****/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py", line 155, in load_tf_weights_in_bert
pointer.shape == array.shape
File "/****/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/torch/nn/modules/module.py", line 778, in __getattr__
raise ModuleAttributeError("'{}' object has no attribute '{}'".format(
torch.nn.modules.module.ModuleAttributeError: 'BertEmbeddings' object has no attribute 'shape'
```
<|||||>Hmmm it seems your model contains an additional weight? `bert/embeddings/relation_embedding` is not in the PyTorch model.<|||||>I'd like to say thank you for considering this matter together.
Referring to your comment, I've just now checked the open-sourced code of the model I'd like to convert to PyTorch.
However, I cannot find `relation_embedding` there.
Maybe the version of the fine-tuned model provided by the author is different from the published implementation.
As a test, I tried to convert the model which I had fine-tuned by myself using the author's published implementation.
In this case, the `relation_embedding` error did not occur, but the `Skipping global_step` step produced the error shown below:
``` sh
ModuleAttributeError: 'BertForPreTraining' object has no attribute 'bias'.
```
This `global_step` is included in the author's published implementation, and I think it is defined by the author.
I think I was able to load the author-provided model with the author's published code, but I may be misunderstanding something.
I would like to verify that point as well.
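As a quick sanity check on my side, I plan to simply list what each checkpoint actually contains and compare the variable names; a small sketch, with a placeholder path:

```python
import tensorflow as tf

# Print every variable stored in the checkpoint, e.g. to see whether relation_embedding is really there.
for name, shape in tf.train.list_variables("/path/to/bert_model.ckpt"):
    print(name, shape)
```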
<|||||>Hmmm I understand.
I don't think it's the `global_step`, as this gets skipped here:
https://github.com/huggingface/transformers/blob/b020a736c374460af1b34267283f957988350630/src/transformers/models/bert/modeling_bert.py#L120-L125
As a way to debug what's happening here, could you add the following log statement:
```py
logger.info(f"Trying to assign {name}")
```
right after the following line:
https://github.com/huggingface/transformers/blob/b020a736c374460af1b34267283f957988350630/src/transformers/models/bert/modeling_bert.py#L116
It would then look like:
```py
for name, array in zip(names, arrays):
logger.info(f"Trying to assign {name}")
name = name.split("/")
# adam_v and adam_m are variables used in AdamWeightDecayOptimizer to calculated m and v
# which are not required for using pretrained model
if any(
n in ["adam_v", "adam_m", "AdamWeightDecayOptimizer", "AdamWeightDecayOptimizer_1", "global_step"]
for n in name
):
```
we can then try to identify what's happening with the checkpoint.<|||||>Thanks! I was also just checking modeling_bert.py#L116-L125 now.
It seems the author's code simply skips a variable when loading if its name is not found in the model.
I've just read the `load_tf_weights_in_bert` used in `transformers-cli convert --model_type bert`, and understood how items such as `adam_v` and `adam_m` (which are not required for using the pretrained model) are skipped.
https://github.com/huggingface/transformers/blob/b020a736c374460af1b34267283f957988350630/src/transformers/models/bert/modeling_bert.py#L116-L125
At this time, I think I can skip the `relation_embedding` for my usage.
Hence, I'll try to modify the `convert` code to skip it for now.
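Something along these lines is what I have in mind; just a sketch that mirrors the existing skip condition, with `relation_embedding` added by me:

```python
if any(
    n in ["adam_v", "adam_m", "AdamWeightDecayOptimizer", "AdamWeightDecayOptimizer_1", "global_step", "relation_embedding"]
    for n in name
):
    logger.info("Skipping {}".format("/".join(name)))
    continue
```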
Also, I'll try the snippet you kindly wrote!<|||||>I inserted `logger.info(f"Trying to assign {name}")` and got the following outputs.
When trying to convert the author-provided fine-tuned model, the output is as below:
```
Trying to assign bert/embeddings/LayerNorm/beta
Initialize PyTorch weight ['bert', 'embeddings', 'LayerNorm', 'beta']
Trying to assign bert/embeddings/LayerNorm/beta/adam_m
Skipping bert/embeddings/LayerNorm/beta/adam_m
Trying to assign bert/embeddings/LayerNorm/beta/adam_v
Skipping bert/embeddings/LayerNorm/beta/adam_v
Trying to assign bert/embeddings/LayerNorm/gamma
Initialize PyTorch weight ['bert', 'embeddings', 'LayerNorm', 'gamma']
Trying to assign bert/embeddings/LayerNorm/gamma/adam_m
Skipping bert/embeddings/LayerNorm/gamma/adam_m
Trying to assign bert/embeddings/LayerNorm/gamma/adam_v
Skipping bert/embeddings/LayerNorm/gamma/adam_v
Trying to assign bert/embeddings/position_embeddings
Initialize PyTorch weight ['bert', 'embeddings', 'position_embeddings']
Trying to assign bert/embeddings/position_embeddings/adam_m
Skipping bert/embeddings/position_embeddings/adam_m
Trying to assign bert/embeddings/position_embeddings/adam_v
Skipping bert/embeddings/position_embeddings/adam_v
Trying to assign bert/embeddings/relation_embedding
Skipping bert/embeddings/relation_embedding
Traceback (most recent call last):
```
When trying to convert my own fine-tuned model (trained with the author's code), the output is as below:
```
...
Trying to assign bert/encoder/layer_9/output/dense/bias/adam_v
Skipping bert/encoder/layer_9/output/dense/bias/adam_v
Trying to assign bert/encoder/layer_9/output/dense/kernel
Initialize PyTorch weight ['bert', 'encoder', 'layer_9', 'output', 'dense', 'kernel']
Trying to assign bert/encoder/layer_9/output/dense/kernel/adam_m
Skipping bert/encoder/layer_9/output/dense/kernel/adam_m
Trying to assign bert/encoder/layer_9/output/dense/kernel/adam_v
Skipping bert/encoder/layer_9/output/dense/kernel/adam_v
Trying to assign bert/pooler/dense/bias
Initialize PyTorch weight ['bert', 'pooler', 'dense', 'bias']
Trying to assign bert/pooler/dense/bias/adam_m
Skipping bert/pooler/dense/bias/adam_m
Trying to assign bert/pooler/dense/bias/adam_v
Skipping bert/pooler/dense/bias/adam_v
Trying to assign bert/pooler/dense/kernel
Initialize PyTorch weight ['bert', 'pooler', 'dense', 'kernel']
Trying to assign bert/pooler/dense/kernel/adam_m
Skipping bert/pooler/dense/kernel/adam_m
Trying to assign bert/pooler/dense/kernel/adam_v
Skipping bert/pooler/dense/kernel/adam_v
Trying to assign global_step
Skipping global_step
Trying to assign output_bias
Traceback (most recent call last):
```
As you have pointed out, what caused the error is not `global_step`, but `output_bias`.<|||||>It seems that `output_bias` is not part of BERT but of a linear layer, as the related paper says the authors fine-tune all the parameters, including BERT and the two additional linear layers.
```
Loading TF weight output_bias with shape [2]
Loading TF weight output_bias/adam_m with shape [2]
Loading TF weight output_bias/adam_v with shape [2]
Loading TF weight output_weights with shape [2, 768]
Loading TF weight output_weights/adam_m with shape [2, 768]
Loading TF weight output_weights/adam_v with shape [2, 768]
```<|||||>Hmmm indeed it seems that the model doesn't fit one-to-one to our architecture. You might need to slightly tweak the architecture and conversion script to load it, but you're probably the most expert on the matter. If you want me to take a deeper look, feel free to send me the weights/config so I can take a look locally.<|||||>Thank you for your kind and encouraging comment!
Thanks to your advice, the problem I need to solve has become clear.
I'll do my best to solve it!<|||||>Hi,
Sorry it's been a few days because I had another issue, but I am working on this issue again.
I would like to ask one question about the relationship between `m_name` and `name`.
I'm assuming that `name` is split into the parts of the name hierarchy and that `m_name` handles each part.
`m_name` is referred to after the `for` statement; is it safe to consider it the same as `name[-1]`?
It seems the code checks whether `m_name` ends with `_embeddings` or equals `kernel`; is that correct?
Is there any reason why `m_name` is used instead of `name[-1]` (after the end of the `for` statement)?
https://github.com/huggingface/transformers/blob/9152f16023b59d262b51573714b40325c8e49370/src/transformers/models/bert/modeling_bert.py#L116-L152
I added two log statements to check whether `m_name` (after the `for` statement) and `name[-1]` ever differ, but could not find any differences.
`added`
```python
logger.info(f"name: {name}")
logger.info(f"m_name: {m_name}")
if m_name[-11:] == "_embeddings":
    pointer = getattr(pointer, "weight")
elif m_name == "kernel":
    array = np.transpose(array)
```
`output`
```
2021-01-24 08:49:34,227 | INFO : Skipping bert/embeddings/LayerNorm/gamma/adam_m
2021-01-24 08:49:34,227 | INFO : Skipping bert/embeddings/LayerNorm/gamma/adam_v
2021-01-24 08:49:34,228 | INFO : name: ['bert', 'embeddings', 'position_embeddings']
2021-01-24 08:49:34,229 | INFO : m_name: position_embeddings
2021-01-24 08:49:34,230 | INFO : Initialize PyTorch weight ['bert', 'embeddings', 'position_embeddings']
2021-01-24 08:49:34,231 | INFO : Skipping bert/embeddings/position_embeddings/adam_m
2021-01-24 08:49:34,232 | INFO : Skipping bert/embeddings/position_embeddings/adam_v
2021-01-24 08:49:34,233 | INFO : Skipping bert/embeddings/relation_embedding
2021-01-24 08:49:34,233 | INFO : name: ['bert', 'embeddings', 'relation_embedding']
2021-01-24 08:49:34,234 | INFO : m_name: relation_embedding
```
<|||||>Thanks to your advice, I think I've almost achieved the conversion I'm aiming for.
I added a `force` option to forcibly skip the unrelated items and save them separately as `npy` files.
I split the loop part out of `load_tf_weights_in_bert` and defined a new function, `getpointer`.
If `force` is `True`, items that cannot be found in the BERT model are skipped and saved separately.
Here is my code snippet.
```python
def getpointer(pointer, m_name, name):
    if re.fullmatch(r"[A-Za-z]+_\d+", m_name):
        scope_names = re.split(r"_(\d+)", m_name)
    else:
        scope_names = [m_name]
    if scope_names[0] == "kernel" or scope_names[0] == "gamma":
        pointer = getattr(pointer, "weight")
    elif scope_names[0] == "output_bias" or scope_names[0] == "beta":
        pointer = getattr(pointer, "bias")
    elif scope_names[0] == "output_weights":
        pointer = getattr(pointer, "weight")
    elif scope_names[0] == "squad":
        pointer = getattr(pointer, "classifier")
    else:
        try:
            pointer = getattr(pointer, scope_names[0])
        except AttributeError:
            logger.info("Skipping {}".format("/".join(name)))
            # continue
            return pointer
    if len(scope_names) >= 2:
        num = int(scope_names[1])
        pointer = pointer[num]
    return pointer


def load_tf_weights_in_bert(model, config, tf_checkpoint_path, force=True, skipped_save_path="./skipped"):
    """Load tf checkpoints in a pytorch model."""
    if force:
        logger.warning("The 'force' option is set to be True. It will force conversion even if the model types do not match.")
        os.makedirs(os.path.join(skipped_save_path, "skipped"), exist_ok=True)
        skipped_names = []
        skipped_arrays = []
    try:
        import tensorflow as tf
    except ImportError:
        logger.error(
            "Loading a TensorFlow model in PyTorch, requires TensorFlow to be installed. Please see "
            "https://www.tensorflow.org/install/ for installation instructions."
        )
        raise
    tf_path = os.path.abspath(tf_checkpoint_path)
    logger.info("Converting TensorFlow checkpoint from {}".format(tf_path))
    # Load weights from TF model
    init_vars = tf.train.list_variables(tf_path)
    names = []
    arrays = []
    for name, shape in init_vars:
        logger.info("Loading TF weight {} with shape {}".format(name, shape))
        array = tf.train.load_variable(tf_path, name)
        names.append(name)
        arrays.append(array)
    for name, array in zip(names, arrays):
        # logger.info(f"Trying to assign {name}")
        name = name.split("/")
        # adam_v and adam_m are variables used in AdamWeightDecayOptimizer to calculated m and v
        # which are not required for using pretrained model
        if any(
            n in ["adam_v", "adam_m", "AdamWeightDecayOptimizer", "AdamWeightDecayOptimizer_1", "global_step"]
            for n in name
        ):
            logger.info("Skipping {}".format("/".join(name)))
            continue
        pointer = model
        for m_name in name:
            if force:
                # Skip unrelated items and save them separately.
                try:
                    pointer = getpointer(pointer, m_name, name)
                except AttributeError:
                    logger.info("Skipping {}".format("/".join(name)))
                    skipped_names.append(name)
                    skipped_arrays.append(array)
            else:
                pointer = getpointer(pointer, m_name, name)
        if m_name[-11:] == "_embeddings":
            pointer = getattr(pointer, "weight")
        elif m_name == "kernel":
            array = np.transpose(array)
        try:
            if force:
                try:
                    pointer.shape
                except AttributeError:
                    logger.info("Skipping {}".format("/".join(name)))
                    skipped_names.append(name)
                    skipped_arrays.append(array)
            else:
                assert (
                    pointer.shape == array.shape
                ), f"Pointer shape {pointer.shape} and array shape {array.shape} mismatched"
        except AssertionError as e:
            e.args += (pointer.shape, array.shape)
            raise
        logger.info("Initialize PyTorch weight {}".format(name))
        pointer.data = torch.from_numpy(array)
    if force:
        logger.info("Save force skipped files")
        for name, array in zip(skipped_names, skipped_arrays):
            skipped_to_save = os.path.join(skipped_save_path, "skipped", "-".join(name) + ".npy")
            logger.info("Save force skipped {} to {}".format("/".join(name), skipped_to_save))
            np.save(skipped_to_save, array)
    return model
```
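For reference, this is roughly how I call the modified loader (a sketch; the paths are placeholders):

```python
from transformers import BertConfig, BertForPreTraining

config = BertConfig.from_json_file("/path/to/bert_config.json")
model = BertForPreTraining(config)
load_tf_weights_in_bert(model, config, "/path/to/bert_model.ckpt", force=True, skipped_save_path="./converted")
```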
The conversion is done!
In my case, the skipped items are saved as follows.
``` sh
$ ls converted/skipped/
bert-embeddings-relation_embedding.npy bert-final_output-token_score-kernel.npy bert-relation-kernel.npy output_weights.npy
bert-final_output-token_score-bias.npy bert-relation-bias.npy output_bias.npy
```
I should handle these "unrelated" `np.array` files by creating the appropriate layers and writing the arrays in as those layers' weights and biases.
Moreover, I have trouble loading the generated `pytorch_model.bin`: the error says there is no `config.json` file. I'd like to work out how to generate the correct one to go with the converted `pytorch_model.bin` (just copying the TF `bert_config.json` doesn't work).
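One workaround I am considering, assuming the TF `bert_config.json` is compatible with `BertConfig`, is to write the config out in the format `from_pretrained()` expects; a rough sketch:

```python
from transformers import BertConfig

config = BertConfig.from_json_file("/path/to/bert_config.json")  # the config used for the conversion
config.save_pretrained("./converted")  # writes ./converted/config.json next to pytorch_model.bin
```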
Thank you very much for your help!<|||||>Excuse me for my frequent posting.
To get the appropriate `config.json`, I've changed the last part of the conversion function, where the model is saved, as follows (switching from `torch.save()` to `model.save_pretrained()`):
```python
# Save pytorch-model
print("Save the PyTorch model and the config file to {}".format(pytorch_dump_dir))
# torch.save(model.state_dict(), pytorch_dump_path)
model.save_pretrained(pytorch_dump_dir)
```
(The original code is in
https://github.com/huggingface/transformers/blob/9152f16023b59d262b51573714b40325c8e49370/src/transformers/models/bert/convert_bert_original_tf_checkpoint_to_pytorch.py#L38-L40)
Then, the output of my modified script is:
```python
...
2021-01-25 04:27:00,442 | INFO : Initialize PyTorch weight ['output_bias']
2021-01-25 04:27:00,442 | INFO : Skipping output_bias/adam_m
2021-01-25 04:27:00,442 | INFO : Skipping output_bias/adam_v
2021-01-25 04:27:00,443 | INFO : Skipping output_weights
2021-01-25 04:27:00,443 | INFO : Skipping output_weights
2021-01-25 04:27:00,444 | INFO : Initialize PyTorch weight ['output_weights']
2021-01-25 04:27:00,444 | INFO : Skipping output_weights/adam_m
2021-01-25 04:27:00,444 | INFO : Skipping output_weights/adam_v
2021-01-25 04:27:00,445 | INFO : Save force skipped files
2021-01-25 04:27:00,445 | INFO : Save force skipped bert/embeddings/relation_embedding to ./converted/skipped/bert-embeddings-relation_embedding.npy
2021-01-25 04:27:00,456 | INFO : Save force skipped bert/final_output/token_score/bias to ./converted/skipped/bert-final_output-token_score-bias.npy
2021-01-25 04:27:00,458 | INFO : Save force skipped bert/final_output/token_score/kernel to ./converted/skipped/bert-final_output-token_score-kernel.npy
2021-01-25 04:27:00,461 | INFO : Save force skipped bert/final_output/token_score/kernel to ./converted/skipped/bert-final_output-token_score-kernel.npy
2021-01-25 04:27:00,463 | INFO : Save force skipped bert/relation/bias to ./converted/skipped/bert-relation-bias.npy
2021-01-25 04:27:00,466 | INFO : Save force skipped bert/relation/kernel to ./converted/skipped/bert-relation-kernel.npy
2021-01-25 04:27:00,492 | INFO : Save force skipped bert/relation/kernel to ./converted/skipped/bert-relation-kernel.npy
2021-01-25 04:27:00,519 | INFO : Save force skipped output_bias to ./converted/skipped/output_bias.npy
2021-01-25 04:27:00,521 | INFO : Save force skipped output_bias to ./converted/skipped/output_bias.npy
2021-01-25 04:27:00,523 | INFO : Save force skipped output_weights to ./converted/skipped/output_weights.npy
2021-01-25 04:27:00,525 | INFO : Save force skipped output_weights to ./converted/skipped/output_weights.npy
Save the PyTorch model and the config file to ./converted/
Configuration saved in ./converted/config.json
Model weights saved in ./converted/pytorch_model.bin
```
Now I can load the converted model!
About the skipped items (saved as `npy`), I think I can convert them to `nn.Module` by referring to:
https://github.com/huggingface/transformers/blob/9152f16023b59d262b51573714b40325c8e49370/src/transformers/models/bert/modeling_bert.py#L161
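A rough sketch of what I mean, taking the saved relation kernel/bias as an example; the shapes follow the log above, and TF stores dense kernels as [in, out] while `nn.Linear.weight` is [out, in]:

```python
import numpy as np
import torch
import torch.nn as nn

kernel = np.load("converted/skipped/bert-relation-kernel.npy")  # [768, 768]
bias = np.load("converted/skipped/bert-relation-bias.npy")      # [768]

relation = nn.Linear(kernel.shape[0], kernel.shape[1])
relation.weight.data = torch.from_numpy(np.transpose(kernel))   # transpose because nn.Linear.weight is [out, in]
relation.bias.data = torch.from_numpy(bias)
```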
Thank you again!<|||||>Fantastic! Great job, thank you for sharing your progress!<|||||>I greatly appreciate your help on this issue.
I hope that anyone who comes across a similar problem can find this issue and use it to solve their problem.
I think my `force` convert script is not simple enough and would be a bit hard to apply to all models,
but changing from `torch.save()` to `model.save_pretrained()` may help some users.
If you don't mind, could you please tell me what you think about this change?
```python
# Save pytorch-model
print("Save the PyTorch model and the config file to {}".format(pytorch_dump_dir))
# torch.save(model.state_dict(), pytorch_dump_path)
model.save_pretrained(pytorch_dump_dir)
```
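With `config.json` saved this way, the converted folder can then be loaded back directly; a minimal check (the directory name is just my example) would be:

```python
from transformers import BertForPreTraining

model = BertForPreTraining.from_pretrained("./converted")  # picks up config.json and pytorch_model.bin
```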
It seems to me that generating `config.json` together with the converted `pytorch_model.bin` would be useful; or, for models where the convert command works correctly, is the `config.json` generated elsewhere?
If this point can be changed, the main code changes I assume are as follows:
- The save statement shown above.
- The option `--pytorch_dump_output` of convert command will be changed to have `/path/to/directory/` instead of `/path/to/directory/pytorch_model.bin`. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,656 | closed | "Converting Tensorflow Checkpoints" document has wrong link in v4.2.0+ | ## Environment info
- `transformers` version: v4.2.0+
### Who can help
documentation: @sgugger
## Information
I'd like to convert BERT ckpt to PyTorch, and read [Converting Tensorflow Checkpoints](https://huggingface.co/transformers/converting_tensorflow_models.html) document.
It seems that the link to `convert_bert_original_tf_checkpoint_to_pytorch.py` is outdated.
It is linked to https://huggingface.co/transformers/converting_tensorflow_models.html, but `convert_bert_original_tf_checkpoint_to_pytorch.py` is now placed in https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/convert_bert_original_tf_checkpoint_to_pytorch.py (I found the information in https://github.com/huggingface/transformers/issues/9556).
It seems that in https://github.com/huggingface/transformers/pull/9217 the document is updated to use a prefix to get the `release` variable.
However, perhaps the document is not yet changed to match the folder structure in v4?
Sorry if I misunderstand something. | 01-18-2021 12:21:00 | 01-18-2021 12:21:00 | Hi! Indeed this link needs to be updated. Do you want to open a PR to fix it?<|||||>@LysandreJik
Thank you for your comment! I'd love to open a PR to fix it.
I would like to open a PR by the end of this week.
Should I devise a way to change the link destination from the document before and after the change in the folder structure (version 3 to 4)?<|||||>Excuse me my opening a PR is delayed even though I said: "by the end of this week".
I haven't been able to find the time to do this, but your advice on another issue has helped me understand `convert` better, so I'm going to work on it.
I'll try to update the documentation on how to explain the differences between version 3 and 4, and I'd be happy to receive your comments in the PR (of course, any advice in advance would be greatly appreciated).
<|||||>I apologize for the delay in getting the work done later than I said it would be.
I opened PR #9791.
If you have time, I would appreciate it if you could take a look.<|||||>Merged, thanks! No worries for the delay!<|||||>@LysandreJik
Thank you for merging and giving me your kind words! |
transformers | 9,655 | closed | BertTokenizer and encode_plus() | I see that from version 2.4.0 I was able to use `encode_plus()` with `BertTokenizer`
However it seems like that is not the case anymore.
`AttributeError: 'BertTokenizer' object has no attribute 'encoder_plus'`
Is there a replacement to `encode_plus`? | 01-18-2021 12:09:45 | 01-18-2021 12:09:45 | No itβs still there and still identical. Itβs just that you made a typo and typed `encoder_plus` instead of `encode_plus` for what I can tell.
Though we recommand using just the `__call__` method now which is a shortcut wrapping all the encode method in a single API. You can read more details on the additional features that have been added in v3 and v4 in the doc if you want to simplify your preprocessing.<|||||>Here: https://huggingface.co/transformers/preprocessing.html<|||||>> No itβs still there and still identical. Itβs just that you made a typo and typed `encoder_plus` instead of `encode_plus` for what I can tell.
>
> Though we recommand using just the `__call__` method now which is a shortcut wrapping all the encode method in a single API. You can read more details on the additional features that have been added in v3 and v4 in the doc if you want to simplify your preprocessing.
Oops sorry I completely missed that. Thank you!<|||||>long_text = "This is a very very long text. " * 300
tokenizer = BertTokenizer.from_pretrained("bert-large-uncased")
# tokenize without truncation
inputs_no_trunc = tokenizer.encode_plus(long_text, add_special_tokens=False, return_tensors='pt')
I get the following error:
AttributeError: 'BertTokenizer' object has no attribute 'encode_plus'
Is there a substitute for this? |
transformers | 9,654 | closed | Add t5 convert to transformers-cli | # What does this PR do?
add t5 model convert to transformers-cli
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten @sgugger @LysandreJik
and
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR. | 01-18-2021 11:56:41 | 01-18-2021 11:56:41 | Can we trigger the CI again to make sure all tests are passing? Think you can add an empty git commit with
```
git commit --allow-empty -m "Trigger notification"
```<|||||>@patrickvonplaten done it!..it seems failing 1 check only<|||||>@patrickvonplaten all seems good now...kindly check |
transformers | 9,653 | closed | AutoModelForMaskedLM not working when using MBartForConditionalGeneration architecture. | ## Environment info
- `transformers` version: 4.2.1
- Platform: Linux-4.4.0-197-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten @LysandreJik
## Information
Model I am using (Bert, XLNet ...): BARThez, MBART
## To reproduce
```python
from transformers import (
BarthezTokenizer,
AutoModelForMaskedLM,
MBartForConditionalGeneration
)
barthez_tokenizer = BarthezTokenizer.from_pretrained("moussaKam/barthez")
barthez_model = AutoModelForMaskedLM.from_pretrained("moussaKam/barthez")
```
error:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-11-cef569c55032> in <module>
9
10 barthez_tokenizer = BarthezTokenizer.from_pretrained("moussaKam/barthez")
---> 11 barthez_model = AutoModelForMaskedLM.from_pretrained("moussaKam/barthez")
12
13 input_ids = torch.tensor(
~/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/models/auto/modeling_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
1123 pretrained_model_name_or_path, *model_args, config=config, **kwargs
1124 )
-> 1125 raise ValueError(
1126 "Unrecognized configuration class {} for this kind of AutoModel: {}.\n"
1127 "Model type should be one of {}.".format(
ValueError: Unrecognized configuration class <class 'transformers.models.mbart.configuration_mbart.MBartConfig'> for this kind of AutoModel: AutoModelForMaskedLM.
Model type should be one of LayoutLMConfig, DistilBertConfig, AlbertConfig, BartConfig, CamembertConfig, XLMRobertaConfig, LongformerConfig, RobertaConfig, SqueezeBertConfig, BertConfig, MobileBertConfig, FlaubertConfig, XLMConfig, ElectraConfig, ReformerConfig, FunnelConfig, MPNetConfig, TapasConfig.
```
The model works as expected when using `MBartForConditionalGeneration` instead of `AutoModelForMaskedLM`.
After checking I see that the public model [MBart](https://huggingface.co/facebook/mbart-large-cc25/blob/main/config.json) itself is using `BartForConditionalGeneration` as default architecture, is that normal? | 01-18-2021 10:52:40 | 01-18-2021 10:52:40 | Hey @moussaKam,
Thanks for your issue! My opinion here is the following:
- `MBartForConditionalGeneration` should not work with `AutoModelForMaskedLM`, but only with `AutoModelForSeq2SeqLM` -> it's not a Bert-like autoencoding model, but an encoder-decoder model.
- You are completely right in that `MBartForConditionalGeneration` should have `MBartForConditionalGeneration` in its config and not `Bart...`. This should however not make a difference when loading the model with `AutoModelForSeq2SeqLM.from_pretrained(...)` -> I'll change that!
@LysandreJik what do you think?<|||||>Hi @moussaKam, I agree with @patrickvonplaten that `MBartForConditionalGeneration` should not work with `AutoModelForMaskedLM` but only with `AutoModelForSeq2SeqLM`.
I can confirm that
```py
>>> from transformers import AutoModelForSeq2SeqLM
>>> model = AutoModelForSeq2SeqLM.from_pretrained("moussaKam/barthez")
```
works correctly.
Also agree with you that the configuration looks better thanks to [huggingface.co#88467f](https://huggingface.co/facebook/mbart-large-cc25/commit/88467fef84ba338740dc562dec3a105c2b14de9f)!<|||||>Yes @LysandreJik @patrickvonplaten you are completely right, however the [inference API](https://huggingface.co/moussaKam/barthez?text=Paris+est+la+%3Cmask%3E+de+la+France.) is using `AutoModelForMaskedLM` for some reason, and returning the following error:
```
β οΈ Unrecognized configuration class for this kind of AutoModel: AutoModelForMaskedLM. Model type should be one of LayoutLMConfig, DistilBertConfig, AlbertConfig, BartConfig, CamembertConfig, XLMRobertaConfig, LongformerConfig, RobertaConfig, SqueezeBertConfig, BertConfig, MobileBertConfig, FlaubertConfig, XLMConfig, ElectraConfig, ReformerConfig, FunnelConfig, MPNetConfig, TapasConfig.
```
Is there anyway we can fix this issue?<|||||>That is problematic, indeed. Let me check what's going on.<|||||>Yes, same problem when using `pipeline`.
```python
from transformers import pipeline
pbase = pipeline(task="fill-mask", model="moussaKam/barthez")
src_text = ["Paris est la capitale de la <mask>"]
results = [x["token_str"] for x in pbase(src_text)]
```
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-12-399795ed06d8> in <module>
1 from transformers import pipeline
2
----> 3 pbase = pipeline(task="fill-mask", model="moussaKam/barthez")
4 src_text = ["Paris est la capitale de la <mask>"]
5 results = [x["token_str"] for x in pbase(src_text)]
~/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/pipelines/__init__.py in pipeline(task, model, config, tokenizer, framework, revision, use_fast, **kwargs)
403 )
404
--> 405 model = model_class.from_pretrained(model, config=config, revision=revision, **model_kwargs)
406 if task == "translation" and model.config.task_specific_params:
407 for key in model.config.task_specific_params:
~/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/models/auto/modeling_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
1123 pretrained_model_name_or_path, *model_args, config=config, **kwargs
1124 )
-> 1125 raise ValueError(
1126 "Unrecognized configuration class {} for this kind of AutoModel: {}.\n"
1127 "Model type should be one of {}.".format(
ValueError: Unrecognized configuration class <class 'transformers.models.mbart.configuration_mbart.MBartConfig'> for this kind of AutoModel: AutoModelForMaskedLM.
Model type should be one of LayoutLMConfig, DistilBertConfig, AlbertConfig, BartConfig, CamembertConfig, XLMRobertaConfig, LongformerConfig, RobertaConfig, SqueezeBertConfig, BertConfig, MobileBertConfig, FlaubertConfig, XLMConfig, ElectraConfig, ReformerConfig, FunnelConfig, MPNetConfig, TapasConfig.
```<|||||>I see - thanks for clarifying @moussaKam . The PR attached above should solve the problem :-) |
transformers | 9,652 | closed | Update integrations.py | File "/share/apps/anaconda3/envs/my_env/lib/python3.7/site-packages/transformers/integrations.py", line 419, in __init__
self._SummaryWriter = SummaryWriter
UnboundLocalError: local variable 'SummaryWriter' referenced before assignment
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 01-18-2021 10:24:20 | 01-18-2021 10:24:20 | |
transformers | 9,651 | closed | RAG Fine Tuning | How do we train RAG mode with custom data set.
Can we have a detailed document on this?
Thanks | 01-18-2021 10:09:16 | 01-18-2021 10:09:16 | It is already [here](https://github.com/huggingface/transformers/tree/master/examples/research_projects/rag) .<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,650 | closed | Error w/Transformers 4.2.0 and TF Nightly | @jplu I am running into issues when running transformers w/tf-nightly.
I get the error when I am trying to load the TFDistilBERT model:
`model = TFDistilBertModel.from_pretrained('distilbert-base-uncased')`
this is the error message:
```
ImportError:
TFDistilBertModel requires the TensorFlow library but it was not found in your environment. Checkout the instructions on the
installation page: https://www.tensorflow.org/install and follow the ones that match your environment.
```
I came across this bug when running a CI test for Ludwig. I think many projects use tf-nightly in their CIs tests to make sure that the integrations are future proof! | 01-18-2021 04:42:56 | 01-18-2021 04:42:56 | Hello!
Please update to v4.2.1<|||||>This worked, thanks! |
transformers | 9,649 | closed | Does the latest huggingface-transformers version work with tokenizers==0.10.0? | ## Environment info
- `transformers` version: 4.3.0.dev0
- `tokenizers` version: 0.10.0

| 01-18-2021 01:58:57 | 01-18-2021 01:58:57 | Hello! We're [still pinned to 0.9.4](https://github.com/huggingface/transformers/blob/master/setup.py#L134). We'll pass to `0.10.0` soon.<|||||>@LysandreJik , if possible, could you provide your best guess as to when this requirement will be updated? I'm currently need to train a WordLevel Tokenizer, as well as use a transformer's model in the same process. I've broken the process up into two python files, which I run separately with different python environments, but it would be nice to have the full process in one file using one environment. Thanks!<|||||>@n1t0 tells me it should be ready sometimes next week! |
transformers | 9,648 | open | Easier perplexity computation | # π Feature request
The docs provide a method to evaluate perplexity for a GPT-2 model, one example at a time (https://huggingface.co/transformers/perplexity.html). However this can potentially be included in the library with the computation being done in a batched manner.
## Motivation
This would make it easier and faster for people to evaluate their language models in terms of perplexity.
If not a solution integrated in the library, the example given in the docs can be updated to do computation in a batched manner for speed.
| 01-17-2021 23:37:44 | 01-17-2021 23:37:44 | Hi @uditarora! That would be a nice addition, do you want to open a PR to add a batched computation to the documentation?<|||||>Sure! I can try to create one.
Might take me a couple of weeks before I can get started on it though, due to prior commitments.<|||||>Hi all. Is this issue still open? I like to contribute and collaborate.
Here's my take.
```python3
import torch
import torch.nn.functional as F
from tqdm import tqdm
from transformers import GPT2LMHeadModel, GPT2TokenizerFast
from datasets import load_dataset


def batched_perplexity(model, dataset, tokenizer, batch_size, stride):
    device = model.device
    encodings = tokenizer("\n\n".join(dataset["text"]), return_tensors="pt")
    text_len = encodings.input_ids.size(1)
    lls = []

    for i in tqdm(range(0, text_len, batch_size * stride)):
        begin_locs, end_locs, trg_lens = [], [], []
        for j in range(batch_size):
            j = i + j * stride
            if j >= text_len:
                break
            begin_loc = max(j + stride - max_len, 0)
            end_loc = min(j + stride, text_len)
            trg_len = end_loc - j  # may be different from stride on last loop

            begin_locs.append(begin_loc)
            end_locs.append(end_loc)
            trg_lens.append(trg_len)

        input_ids = [encodings.input_ids[:, b:e] for b, e in zip(begin_locs, end_locs)]
        target_end_locs = [sen.size(-1) for sen in input_ids]
        input_ids = [
            F.pad(sen, (0, max_len - sen.size(-1)), "constant", 0) for sen in input_ids
        ]  # we don't need an attention mask as long as the padded tokens are not involved in the loss calculation
        input_ids = torch.stack(input_ids, dim=1).squeeze(0).to(device)

        target_ids = torch.ones_like(input_ids) * -100  # -100 is the default ignore_index value in torch.nn.CrossEntropyLoss
        for i, (b, e) in enumerate(zip(trg_lens, target_end_locs)):
            labels = input_ids[i, -b:e].clone()
            target_ids[i, -b:e] = labels

        with torch.no_grad():
            outputs = model(input_ids, labels=target_ids)
            log_likelihood = outputs["loss"] * sum(trg_lens)

        lls.append(log_likelihood)

    ppl = torch.exp(sum(torch.stack(lls) / end_locs[-1]))
    return ppl


if __name__ == "__main__":
    device = "cuda"
    model_id = "distilgpt2"
    model = GPT2LMHeadModel.from_pretrained(model_id).to(device)
    tokenizer = GPT2TokenizerFast.from_pretrained(model_id)
    max_len = model.config.n_positions
    stride = 512
    batch_size = 16

    test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test[:100%]")
    ppl = batched_perplexity(model, test, tokenizer, batch_size, stride)
    print(f"--------------{ppl=}-------------")
```<|||||>Hello! is this issue still open? Did you test that the batched example above gives the same value for GPT2 as the documentation? |
transformers | 9,647 | closed | Training Bert2Bert with EncoderDecoderModel and Seq2SeqTrainer results with Cuda OOM | Hi,
I am trying to train a Bert2Bert model for text summarization. I followed the exact steps in [BERT2BERT for CNN/Dailymail](https://colab.research.google.com/drive/1Ekd5pUeCX7VOrMx94_czTkwNtLN32Uyu?usp=sharing#scrollTo=7zdm50ZotZqb). Only things that I changed are the training arguments and metrics. Additionally I have also tried to replace [seq2seq_trainer](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/seq2seq_trainer.py) with Seq2SeqTrainer from the package itself, the result was the same. I am using ``bert-base-uncased`` model for BERT and CNN/Dailymail as dataset (just like it was introduced in the [colab](https://colab.research.google.com/drive/1Ekd5pUeCX7VOrMx94_czTkwNtLN32Uyu?usp=sharing#scrollTo=7zdm50ZotZqb)).
```python
from transformers import Seq2SeqTrainingArguments, Seq2SeqTrainer

def compute_metrics(pred):
    labels_ids = pred.label_ids
    pred_ids = pred.predictions

    # all unnecessary tokens are removed
    pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)
    labels_ids[labels_ids == -100] = tokenizer.pad_token_id
    label_str = tokenizer.batch_decode(labels_ids, skip_special_tokens=True)

    rouge_output = rouge.compute(predictions=pred_str, references=label_str, rouge_types=["rouge1", "rouge2"])
    rouge1 = rouge_output["rouge1"].mid
    rouge2 = rouge_output["rouge2"].mid

    return {
        "rouge1_precision": round(rouge1.precision, 4),
        "rouge1_recall": round(rouge1.recall, 4),
        "rouge1_fmeasure": round(rouge1.fmeasure, 4),
        "rouge2_precision": round(rouge2.precision, 4),
        "rouge2_recall": round(rouge2.recall, 4),
        "rouge2_fmeasure": round(rouge2.fmeasure, 4),
    }

training_args = Seq2SeqTrainingArguments(
    output_dir=output_folder,
    logging_dir=log_folder,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    predict_with_generate=True,
    evaluation_strategy=EvaluationStrategy.STEPS,
    do_train=True,
    do_eval=True,
    logging_steps=1000,  # set to 1000 for full training
    load_best_model_at_end=True,
    metric_for_best_model='rouge1_fmeasure',
    eval_steps=8000,  # set to 8000 for full training
    warmup_steps=2000,  # set to 2000 for full training
    overwrite_output_dir=True,
    save_total_limit=2,
    fp16=True,
)

trainer = Seq2SeqTrainer(
    model=bert2bert,
    args=training_args,
    compute_metrics=compute_metrics,
    train_dataset=train_data,
    eval_dataset=val_data,
)
```
Even with ``batch_size=1`` I am getting an OOM. It seems like CUDA does not free any memory at all.
The versions of my ``transformers`` and ``torch`` are as follows: `transformers 4.2.0, torch 1.7.1+cu110`.
Can you help me with this issue? What do you think the issue might be? | 01-17-2021 22:31:07 | 01-17-2021 22:31:07 | Hello! What is your machine? When you run the script, at which point does it fail? Right off the bat, or after a few sequences have been processed?<|||||>I have tried it on my local GTX1650 and also on a 16gb T100. They both fail during processing the first sequence. It is not always at the same line but mostly during ``forward`` of ``SelfAttention`` module of the ``Bert``. I also decreased the input sizes while processing the data with the tokenizer. It manages to process one sequence but then it fails with OOM again while processing the second sequence. Additionally, I tried training it directly on [colab](https://colab.research.google.com/drive/1Ekd5pUeCX7VOrMx94_czTkwNtLN32Uyu?usp=sharing#scrollTo=7zdm50ZotZqb), it fails with a OOM there, too.<|||||>Not sure how and why but the training started working on T100, even though I haven't really changed anything. The GPU might be just overloaded back then. I will close this issue. |
transformers | 9,646 | closed | RAG : Adding end to end training for the retriever (both question encoder and doc encoder) | # π Feature request
As mentioned in this recent paper [End-to-End Training of Neural Retrievers for Open-Domain Question Answering](https://arxiv.org/abs/2101.00408), we can get better results for QA tasks if we fine-tune the retriever section in an end-to-end manner.
## Paper's method
Fine-tune both the doc encoder and the question encoder, and update the precomputed index embeddings every 500 steps.
| 01-17-2021 21:39:04 | 01-17-2021 21:39:04 | @lhoestq
<|||||>Interesting :) I don't think I'll be able to work on this in the short term but if anyone wants to give it a try maybe I can help with some indications<|||||>@lhoestq
I kind of figured out a way to do this. I need a little clarification from you regarding the distributed retriever. As mentioned in this [line](https://github.com/huggingface/transformers/blob/master/examples/research_projects/rag/finetune_rag.py#L84), we use **CustomAcc** class to load knowledgebase and load faiss index as a separate process.
I want to re-execute the above-mentioned process after several training steps. Let's say 1000.
with PyTorch Lightning,
def optimizer_step(self, epoch_nb, batch_nb, optimizer, optimizer_i, opt_closure):
    if self.trainer.global_step < 500:
        ****** run the init_ddp_connection function inside the CustomAccel class ******
        1. Reinitialize the knowledge-base dataset
        2. Reload the faiss index
<|||||>Hi @shamanez You can reload an updated index during train time this way:
1. recompute all the embeddings of your knowledge source using the context encoder (costly, depending on your knowledge source size)
2. recreate the FAISS index (which can also be costly)
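A rough sketch of steps 1 and 2 with the `datasets` library could look like the following (the `ctx_encoder`/`ctx_tokenizer` objects and the column names are assumptions about your setup):

```python
import torch

def embed(batch):
    inputs = ctx_tokenizer(batch["title"], batch["text"], truncation=True, padding="longest", return_tensors="pt")
    with torch.no_grad():
        embeddings = ctx_encoder(**inputs).pooler_output
    return {"embeddings": embeddings.numpy()}

dataset = dataset.map(embed, batched=True, batch_size=16)  # 1. recompute the embeddings with the updated context encoder
dataset.add_faiss_index(column="embeddings")               # 2. recreate the FAISS index
```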
The REALM model does this kind of back and forth between training and indexing, you may want to check out how they did that in the paper.
I think one approach would be to extend the [RayRetriever](https://github.com/huggingface/transformers/blob/master/examples/research_projects/rag/distributed_ray_retriever.py) since you can define several workers depending on what you want to do (query the index, compute the embeddings or update the index). It's something that I feel is more natural to do with Ray than with pytorch distributed.<|||||>Ok. I was thinking the same steps looking at the REALM paper. In their code implementation, they run three separate processes and communicate in between processes when they need to compute new embedding, load them and finally feed them to the reader model. It only works with a single GPU.
Anyways I get when using RAG with PyTorch distributed, the loading operation is done before the trainer. So I was thinking can we execute that within the training loop. Anyways I get what you say.
@amogkam can you help with this?
<|||||>@lhoestq
I kind of tried to implement this with PyTorch distributed retriever method. So ideally I want to re-load the knowledge-based and rea-load the indexes inside the training step (assuming I have an updated doc-encoder). Here is my implementation. Can you please let me know whether it is correct.
```
def training_step(self, batch, batch_idx) -> Dict:
if not batch_idx==0 and batch_idx%10000==0:
self.model.retriever.re_index()
```
The reindex is a simple method inside **distributed_pytorch_retriever.py** that only re-loads the dataset and the index in the main process.
```
def re_index(self):
# initialize retriever only on the main worker
if self._is_main():
logger.info("re initializing the index")
self.index.re_init_index()
```
Here my assumption is, we have already started a separate process with the custom accel. Now we are changing something inside it.
What do you think of it?
<|||||>It looks good :) Although I haven't tested things like that so let me know how it goes !
also one detail: maybe you wanted to write `batch_idx%10000 == 0` instead of `batch_idx%10000`<|||||>ah yeah, thanks for the update. So I am on it. I Will update you soon :) <|||||>@lhoestq I have another question regarding the **load_dataset** function:
Prior to starting the DDP process, the code loads the indexed dataset by accessing the file saved on disk with **load_from_disk** ([this line](https://github.com/huggingface/transformers/blob/master/src/transformers/models/rag/retrieval_rag.py#L397)).
During training, what if the data files (.arrow files) change? Here, the entire data structure is the same; it is just the values that change.
In this kind of scenario do we have to use the load_dataset function again or it will automatically access the updated file? <|||||>You would need to create a new arrow file and load the new arrow file.
If you overwrite the arrow file that is currently loaded I'm pretty sure things won't get updated properly.<|||||>yeah, that is what I actually observed. Btw I have implemented the end-to-end case with RAY. Currently doing the final testing. Will do a pull request if it is possible. <|||||>This is really cool thanks ! |
transformers | 9,645 | closed | Odd predictions of T5 models in recent versions | We are seeing odd predictions by T5 models ([UnifiedQA models](https://github.com/allenai/unifiedqa/)) when using the recent HF version (4.2.1). Here is the discussion: https://github.com/allenai/unifiedqa/issues/11
### Who can help
@TevenLeScao @patrickvonplaten
## To reproduce
Try running the following script:
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration
model_name = "allenai/unifiedqa-t5-small" # you can specify the model size here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
return [tokenizer.decode(x) for x in res]
run_model("which is best conductor? \\n (a) iron (b) feather")
```
- For `transformers==4.2.1`, I am getting `['<pad> iron</s>']`, which is not good.
- However, `transformers==3.5.1` and `transformers==3.1.0` give me `['iron']`, which is the expected response.
| 01-17-2021 20:19:52 | 01-17-2021 20:19:52 | Hello @danyaljj,
This is expected actually. Could you change your code as follows to get the previous results:
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration
model_name = "allenai/unifiedqa-t5-small" # you can specify the model size here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
return tokenizer.batch_decode(res, skip_special_tokens=True)
run_model("which is best conductor? \\n (a) iron (b) feather")
```<|||||>Thanks for the quick reply! The new code works! Thanks! |
transformers | 9,644 | closed | Fail to convert the Funnel Transformer tensorflow version to transformer one when use the official script | ## Environment info
- `transformers` version: 3.5.1
- Platform: CentOS
- Python version: 3.7
- PyTorch version (GPU?): 1.6.0
- Tensorflow version (GPU?): 2.3.2
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
## Information
Model I am using (Bert, XLNet ...):Funnel Transformer
## To reproduce
Steps to reproduce the behavior:
1. Use the script `convert_funnel_original_tf_checkpoint_to_pytorch.py` @sgugger @LysandreJik
It raises the following error:
```
Traceback (most recent call last):
File "run_pretraining.py", line 158, in <module>
convert_tf_checkpoint_to_pytorch(args.tf_checkpoint_path, args.config_file, args.pytorch_dump_path)
File "run_pretraining.py", line 40, in convert_tf_checkpoint_to_pytorch
load_tf_weights_in_funnel(model, config, tf_checkpoint_path)
File "run_pretraining.py", line 122, in load_tf_weights_in_funnel
pointer = getattr(pointer, _layer_map[m_name])
File "/root/miniconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/module.py", line 772, in __getattr__
type(self).__name__, name))
torch.nn.modules.module.ModuleAttributeError: 'FunnelForPreTraining' object has no attribute 'embeddings'
```
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| 01-17-2021 15:07:42 | 01-17-2021 15:07:42 | Hi! Could you explain the full procedure? Where did you obtain the Funnel transformer TensorFlow version? Is it a model you trained yourself using another framework? (like this one: https://github.com/laiguokun/Funnel-Transformer)<|||||>just use the official ones(like this one: https://github.com/laiguokun/Funnel-Transformer) @LysandreJik
the layer map "input" -> "embedding", raise error<|||||>Could you provide the configuration you used, as well as which Funnel Transformer (which identifier? Is it the TensorFlow or the TensorFlow-Full) you tried to convert? Thank you<|||||>@LysandreJik I was train my funnel with the official code, I think my pretrain tensorflow is Tensorflow-Full with the adam weight. May be I need to transform my pretrain model to the TensorFlow or the TensorFlow-Full one first, then use the convert script to change to the transformer one?<|||||>I see! Can you try the fix proposed in #9683 and let me know if it fixes your issue?
You can install it in your environment with:
```
pip install git+https://github.com/huggingface/transformers.git@convert_funnel
```
or if you have a clone of the repository, you can pull it and checkout the `convert_funnel` branch.<|||||>@LysandreJik Thanks for quickly reply. I will take a try.<|||||>@LysandreJik It has raise a new error, I cannot `convert_funnel ` branch, I found that it has merge to `master` branch, so I use the `master` branch
when set `base_model=False`
```
from transformers.models.funnel.convert_funnel_original_tf_checkpoint_to_pytorch import convert_tf_checkpoint_to_pytorch
tf_checkpoint_path = "xxxx/B6-6-6H768-ELEC-TF_model.ckpt"
config_file = "xxxxxx"
pytorch_dump_path = "xxxxxx/funnel-base"
convert_tf_checkpoint_to_pytorch(tf_checkpoint_path, config_file, pytorch_dump_path, False)
```
```
File "test.py", line 9, in <module>
convert_tf_checkpoint_to_pytorch(tf_checkpoint_path, config_file, pytorch_dump_path, False)
File "/root/miniconda3/envs/transformers/lib/python3.7/site-packages/transformers-4.3.0.dev0-py3.7.egg/transformers/models/funnel/convert_funnel_original_tf_checkpoint_to_pytorch.py", line 36, in convert_tf_checkpoint_to_pytorch
load_tf_weights_in_funnel(model, config, tf_checkpoint_path)
File "/root/miniconda3/envs/transformers/lib/python3.7/site-packages/transformers-4.3.0.dev0-py3.7.egg/transformers/models/funnel/modeling_funnel.py", line 136, in load_tf_weights_in_funnel
pointer = pointer.layers[layer_index]
File "/root/miniconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/container.py", line 164, in __getitem__
return self._modules[self._get_abs_string_index(idx)]
File "/root/miniconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/container.py", line 154, in _get_abs_string_index
raise IndexError('index {} is out of range'.format(idx))
IndexError: index 6 is out of range
```
when set `base_model=True`
```
from transformers.models.funnel.convert_funnel_original_tf_checkpoint_to_pytorch import convert_tf_checkpoint_to_pytorch
tf_checkpoint_path = "xxxx/B6-6-6H768-ELEC-TF_model.ckpt"
config_file = "xxxxxx"
pytorch_dump_path = "xxxxxx/funnel-base"
convert_tf_checkpoint_to_pytorch(tf_checkpoint_path, config_file, pytorch_dump_path, True)
```
```
Traceback (most recent call last):
File "test.py", line 9, in <module>
convert_tf_checkpoint_to_pytorch(tf_checkpoint_path, config_file, pytorch_dump_path, True)
File "/root/miniconda3/envs/transformers/lib/python3.7/site-packages/transformers-4.3.0.dev0-py3.7.egg/transformers/models/funnel/convert_funnel_original_tf_checkpoint_to_pytorch.py", line 36, in convert_tf_checkpoint_to_pytorch
load_tf_weights_in_funnel(model, config, tf_checkpoint_path)
File "/root/miniconda3/envs/transformers/lib/python3.7/site-packages/transformers-4.3.0.dev0-py3.7.egg/transformers/models/funnel/modeling_funnel.py", line 136, in load_tf_weights_in_funnel
pointer = pointer.layers[layer_index]
File "/root/miniconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/module.py", line 772, in __getattr__
type(self).__name__, name))
torch.nn.modules.module.ModuleAttributeError: 'FunnelEncoder' object has no attribute 'layers'
```<|||||>@sgugger do you have an idea of what might be going wrong?<|||||>@LysandreJik Which config file is using, I use the original full version tensorflow one `net_config.json`
```
{
"block_size": "6_6_6",
"d_embed": 768,
"d_head": 64,
"d_inner": 3072,
"d_model": 768,
"decoder_size": "2",
"dropact": 0.0,
"dropatt": 0.1,
"dropout": 0.1,
"ff_activation": "gelu",
"init": "truncated_normal",
"init_range": 0.1,
"init_std": 0.02,
"n_head": 12,
"pool_q_only": true,
"pooling_size": 2,
"pooling_type": "mean",
"rel_attn_type": "factorized",
"separate_cls": true,
"vocab_size": 21128
}
```<|||||>No you need to convert your configuration first to a proper `FunnelConfig`, that is what the conversion script is expecting.<|||||>@LysandreJik @sgugger Now the setting is the same, but still raise error, cannot convert the full tensorflow version to transformers ones<|||||>Like I said before, it works for me. So without more information about the environment, the command you launch and the stack trace, there is really nothing we can do to help.<|||||>> @LysandreJik Which config file is using, I use the original full version tensorflow one `net_config.json`
>
> ```
> {
> "block_size": "6_6_6",
> "d_embed": 768,
> "d_head": 64,
> "d_inner": 3072,
> "d_model": 768,
> "decoder_size": "2",
> "dropact": 0.0,
> "dropatt": 0.1,
> "dropout": 0.1,
> "ff_activation": "gelu",
> "init": "truncated_normal",
> "init_range": 0.1,
> "init_std": 0.02,
> "n_head": 12,
> "pool_q_only": true,
> "pooling_size": 2,
> "pooling_type": "mean",
> "rel_attn_type": "factorized",
> "separate_cls": true,
> "vocab_size": 21128
> }
> ```
I got the same problem as you and I managed to convert the checkpoint by using the config file from the Hugging Face model hub. If you use the 6-6-6 block layout, use this one https://huggingface.co/funnel-transformer/intermediate/raw/main/config.json and change the vocab size.<|||||>@sgugger @LysandreJik I think it is the config file problem; I tried @NLP33's advice and it fixed the problem.<|||||>@RyanHuangNLP I have asked you before to give us the command you launch, the environment you use and the content of the config file you are using. There is no point tagging me further on this issue with a vague message if you are not willing to share that information, as I cannot investigate a bug I cannot reproduce.
As I also said before and @NLP33 indicated, the script only supports config files corresponding to a config created by using `FunnelConfig` from transformers. It does not support the original config files from the original repo. |
transformers | 9,643 | closed | [Feature Request] Add 3D attention mask for T5Model | ## Environment info
- `transformers` version: 4.2.1
- Platform: Linux
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cuda11.0
### Who can help
T5: @patrickvonplaten
## Information
The `get_extended_attention_mask()` does not sufficiently address the case of a 3D attention mask. The problem emerges for T5Model as `input_ids` and `decoder_input_ids` are of different lengths and the `attention_mask` is of shape [Batch_size, Seq_length, Seq_length]. The decoder uses `attention_mask` directly as the `encoder_attention_mask` in cross-attention, which is of incorrect shape, and the error message does not give any information about why it happens.
## To reproduce
As described above. I can add code later if needed.
## Expected behavior
I propose to add a sanity check for attention masks or improve the `get_extended_attention_mask()` method.
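For illustration, a minimal sketch of the kind of sanity check being proposed; this is a hypothetical helper, not the actual `get_extended_attention_mask()` code:
```python
import torch

def check_attention_mask(attention_mask: torch.Tensor, input_shape: tuple):
    # input_shape is the (batch_size, seq_length) of the ids this mask belongs to
    batch_size, seq_length = input_shape
    if attention_mask.dim() == 2 and tuple(attention_mask.shape) != (batch_size, seq_length):
        raise ValueError(
            f"2D attention_mask has shape {tuple(attention_mask.shape)}, expected {(batch_size, seq_length)}."
        )
    if attention_mask.dim() == 3 and attention_mask.shape[-1] != seq_length:
        raise ValueError(
            f"3D attention_mask attends over {attention_mask.shape[-1]} positions, but the corresponding "
            f"sequence has length {seq_length}; for cross-attention the last dimension must match the "
            f"encoder sequence length."
        )
```
A check like this would turn the current opaque shape mismatch into an explicit error message.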
| 01-17-2021 14:35:04 | 01-17-2021 14:35:04 | Hello @yongyi-wu,
yes you're right. T5 is not yet fully compatible with 3D attention_mask input. Currently, I won't find enough time to work on adding this feature, but I'll post it under "Community projects" in case someone from the community is interested in giving it a shot.
Also feel free to open a PR yourself, if you want to try :-) <|||||>Hi, I am new here but would like to give this a shot. Because it is my first issue, I could use some direction on how to tackle this, if this is okay with you.
- Would you prefer the sanity check or an improved `get_extended_attention_mask()` method?
- Do you know of any already existing implementation with a 3D attention mask to use as a reference?
- Where would you like to see the solution implemented? <|||||>Hi, I'm a newbie in transformers and trying to make a customized 3D attention mask with T5ForConditionalGeneration.
I was googling 3D attention masks for the T5 model and found this issue.
From what I understood above, I should add one more `encoder_sequence_length` variable at the end of the line `encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length)` in the T5Stack forward function if I want to build a 3D attention mask?
Do I need to edit anything else?
@lexhuismans
Thanks! |
transformers | 9,642 | closed | Multi-GPU inference with Tensorflow backend | Is this already supported maybe? I know that multi-GPU TRAINING is supported with TF* models pretty well. But not inference. What is the recommended way when one wants to do inference for a large batch of text (tens of millions of rows)? Currently only one of the GPUs gets loaded. TensorFlow has a [guide](https://www.tensorflow.org/tutorials/distribute/save_and_load) on how to use a model saved in the native TF format to do distributed inference under a scope:
```python
another_strategy = tf.distribute.MirroredStrategy()
with another_strategy.scope():
loaded = tf.saved_model.load(saved_model_path)
inference_func = loaded.signatures[DEFAULT_FUNCTION_KEY]
dist_predict_dataset = another_strategy.experimental_distribute_dataset(
predict_dataset)
# Calling the function in a distributed manner
for batch in dist_predict_dataset:
another_strategy.run(inference_func,args=(batch,))
```
However, it seems that transformers do not support saving in this native format? At least TFDistilBertForSequenceClassification, when loaded back, has damaged input signatures (no attention_mask, wrong sequence length, fake None inputs) and can't process anything. And this very tracker is crowded with similar questions which are left unanswered. Can anyone shed some light on best approach to distributed inference please? Also adding a bullet on this to the documentation would be extremely helpful for many folks. | 01-17-2021 14:25:43 | 01-17-2021 14:25:43 | Hello!!
Can you please share the code you are using as for me it works as expected:
```python
from transformers import TFBertModel, BertTokenizer
import tensorflow as tf
first_strategy = tf.distribute.MirroredStrategy()
with first_strategy.scope():
model = TFBertModel.from_pretrained("bert-base-cased")
model.save_pretrained("my_model")
tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
another_strategy = tf.distribute.OneDeviceStrategy("/cpu:0")
with another_strategy.scope():
restored_keras_model_ds = TFBertModel.from_pretrained("my_model")
inputs = tokenizer("Hello world.", return_tensors="tf")
predict_dataset = tf.data.Dataset.from_tensor_slices(inputs).batch(1)
dist_predict_dataset = another_strategy.experimental_distribute_dataset(predict_dataset)
for batch in dist_predict_dataset:
another_strategy.run(restored_keras_model_ds, args=(batch,))
```<|||||>Thank you guys so much for the response! It was not obvious to use save_pretrained under the scope. Your example runs successfully; however, on an 8-GPU machine I observe (with a big enough input list, of course) a weird pattern where at most 2 GPUs are busy and the rest are simply stale. Then after some seconds a new pair of GPUs becomes active and the rest are [waiting.](https://pasteboard.co/JKmszWl.png) It happens no matter what strategy I try, MirroredStrategy or MultiWorkerMirroredStrategy. @jplu what strategy would you recommend to utilize all 8 GPUs?<|||||>This simply means that TF needs no more than 2 GPUs to run your inference.<|||||>But it's taking more than 40 seconds to run it. It definitely needs to utilize more ...
>
> 2021-01-19 12:37:55,915 - INFO - <ipython-input-108-8a0316d72a4a> - root - infer_natively - line: 4 - Tokenizing dataset of length 100000...
> 2021-01-19 12:38:07,630 - INFO - <ipython-input-108-8a0316d72a4a> - root - infer_natively - line: 9 - Converting dataset to tf dataset using batch size 244...
> 2021-01-19 12:38:07,634 - INFO - <ipython-input-108-8a0316d72a4a> - root - infer_natively - line: 12 - Distributing tf dataset across replicas...
> 2021-01-19 12:38:07,714 - INFO - <ipython-input-108-8a0316d72a4a> - root - infer_natively - line: 16 - Inferencing using 8 GPUs
> 2021-01-19 12:39:38,318 - INFO - <ipython-input-108-8a0316d72a4a> - root - infer_natively - line: 36 - Done. nbatches processed: 26<|||||>Tensorflow doesn't take the time as reference but the size. If your data can fit on 2 GPUs then it uses only 2. I suggest you to read this to better understand how it works. https://www.tensorflow.org/guide/gpu<|||||>> as reference but the size. If your data can fit on 2 GPUs then it uses only 2. I suggest you to
Following this link, I was not able to find any mentioning of when tf can select lower number of GPUs to run inference on, depending on data size. I tried with a million sentences and I'm still observing that pattern when only 2 GPUs are heavily loaded, and the rest has 0% utilization. and that pair of active GPUs changes randomly as the time goes. So something is definitely wrong with implementation. I was asking tf "please use all devices for this huge workload", and you are saying it just like "it can be done using 2 GPUs dude so i'm using 2, I don't care how long you gonna wait for the result" ? :-)<|||||>@jplu so if you know how to make it use all 8 GPUs in my particular case for 1 million of input sentences please advise, it would solve the issue completely.<|||||>Really sorry I don't know what to tell you more, if you have mostly TF related questions I suggest you to open an issue on the TF github repo.<|||||>Hi,
I'm having similar issues with inference when using multi-gpu: the `predict` function returns empty output despite actually processing the input.
```python
from transformers import BertTokenizerFast, TFBertForSequenceClassification
import tensorflow as tf
strategy = tf.distribute.MirroredStrategy()
#strategy = tf.distribute.OneDeviceStrategy("/gpu:0")
with strategy.scope():
tf_model = TFBertForSequenceClassification.from_pretrained('bert-base-cased')
tokenizer = BertTokenizerFast.from_pretrained('bert-base-cased')
inputs = tokenizer('This is a test', 'Esto es una prueba',
return_tensors='tf', max_length=200,
padding='max_length', truncation=True,
return_attention_mask=True,
return_token_type_ids=False)
print(tf_model.predict([inputs["input_ids"], inputs["attention_mask"]],
verbose=1))
print(tf_model([inputs["input_ids"], inputs["attention_mask"]]))
```
```
All model checkpoint layers were used when initializing TFBertForSequenceClassification.
Some layers of TFBertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['classifier']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
WARNING:tensorflow:From /venv/lib/python3.7/site-packages/tensorflow/python/data/ops/multi_device_iterator_ops.py:601: get_next_as_optional (from tensorflow.python.data.ops.iterator_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Iterator.get_next_as_optional()` instead.
WARNING:tensorflow:The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
WARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
WARNING:tensorflow:The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
WARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
WARNING:tensorflow:The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
WARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
WARNING:tensorflow:The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
WARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
1/1 [==============================] - 0s 241us/step
TFSequenceClassifierOutput(loss=None, logits=None, hidden_states=None, attentions=None)
TFSequenceClassifierOutput(loss=None, logits=<tf.Tensor: shape=(1, 2), dtype=float32, numpy=array([[-0.47814545, 0.35146457]], dtype=float32)>, hidden_states=None, attentions=None)
```
Is this expected to happen? It would be great to be able to use predict function for performance reasons.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,641 | closed | Conditional branching logic in modeling_tf_flaubert.py causing errors with TF Graph | Hi @jplu !
I am encountering an error when running the TFFlaubert model inside of a tensorflow graph.
Here is some code to reproduce the issue:
```
from transformers import FlaubertTokenizer, TFFlaubertModel, FlaubertConfig
import tensorflow as tf

config = FlaubertConfig.from_pretrained('jplu/tf-flaubert-small-cased', output_attentions=True, output_hidden_states=True, return_dict=True)
model = TFFlaubertModel.from_pretrained(config=config, pretrained_model_name_or_path='jplu/tf-flaubert-small-cased')
@tf.function
def train_step(inputs, mask, token_type_ids):
with tf.GradientTape() as tape:
a = model({
"input_ids": inputs,
"training": True,
"attention_mask": mask,
"token_type_ids": token_type_ids,
})
train_step(inputs, mask, token_type_ids)  # inputs, mask and token_type_ids come from the tokenizer output
```
The error seems to be caused by L611-624 in modeling_tf_flaubert.py [here](https://github.com/huggingface/transformers/blob/c60e0e1ee45f4bf1017736b146c51729f120bb83/src/transformers/models/flaubert/modeling_tf_flaubert.py#L611)
The error message is as follows:
> TypeError: in user code:
>
> python-input-5-4a1e131ff478:4 train_step *
> a = model({
> /Users/ludwig/venv/lib/python3.6/site-packages/transformers/models/flaubert/modeling_tf_flaubert.py:274 call *
> outputs = self.transformer(
> /Users/ludwig/venv/lib/python3.6/site-packages/transformers/models/flaubert/modeling_tf_flaubert.py:616 call *
> for i in range(self.n_layers):
> /Users/ludwig/venv/lib/python3.6/site-packages/tensorflow/python/autograph/operators/control_flow.py:1163 if_stmt
> _tf_if_stmt(cond, body, orelse, get_state, set_state, symbol_names, nouts)
> /Users/ludwig/venv/lib/python3.6/site-packages/tensorflow/python/autograph/operators/control_flow.py:1210 _tf_if_stmt
> cond, aug_body, aug_orelse, strict=True)
> /Users/udwig/venv/lib/python3.6/site-packages/tensorflow/python/util/dispatch.py:201 wrapper
> return target(*args, **kwargs)
> /Users/ludwig/venv/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py:538 new_func
> return func(*args, **kwargs)
> /Users/ludwig/venv/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py:1180 cond
> return cond_v2.cond_v2(pred, true_fn, false_fn, name)
> /Users/ludwig/venv/lib/python3.6/site-packages/tensorflow/python/ops/cond_v2.py:96 cond_v2
> op_return_value=pred)
> /Users/ludwig/venv/lib/python3.6/site-packages/tensorflow/python/framework/func_graph.py:990 func_graph_from_py_func
> func_outputs = python_func(*func_args, **func_kwargs)
> /Users/ludwig/venv/lib/python3.6/site-packages/tensorflow/python/autograph/operators/control_flow.py:1206 aug_orelse
> _verify_tf_cond_vars(new_body_vars_[0], new_orelse_vars, symbol_names)
> /Users/ludwig/venv/lib/python3.6/site-packages/tensorflow/python/autograph/operators/control_flow.py:365 _verify_tf_cond_vars
> ' branches:\n\n{}'.format(name, str(e)))
>
> TypeError: 'hidden_states' must have the same nested structure in the main and else branches:
>
> The two structures don't have the same nested structure.
>
> First structure: type=tuple str=(<tf.Tensor 'tf_flaubert_model/transformer/mul:0' shape=(18, 44, 512) dtype=float32>,)
>
> Second structure: type=tuple str=()
>
> More specifically: The two structures don't have the same number of elements. First structure: type=tuple str=(<tf.Tensor 'tf_flaubert_model/transformer/mul:0' shape=(18, 44, 512) dtype=float32>,). Second structure: type=tuple str=()
> Entire first structure:
> (.,)
> Entire second structure:
> ()
| 01-17-2021 02:09:37 | 01-17-2021 02:09:37 | Hello,
To help you we need to have more information such as your environement info, and a standalone piece of code to let us reproduce your error. Thanks!<|||||>As a first guess, I can say that the issue you get might come from a misformed input, can you try with:
```
@tf.function
def train_step(inputs, mask, token_type_ids):
with tf.GradientTape() as tape:
a = model({
"input_ids": inputs
"attention_mask": mask,
"token_type_ids": token_type_ids,
}, training=True)
```<|||||>That worked! Thank you!! |
transformers | 9,640 | closed | Renamed `nlp` variables #9455 | * Give better names to pipeline variables named nlp
* This was desired because nlp was not a descriptive variable name
Fixes # 9455
@Narsil , @sgugger
| 01-17-2021 01:03:02 | 01-17-2021 01:03:02 | I can change `unmask` to `unmasker`. And I'll go back and take care of the merge conflicts and `make style` . <|||||>I think your rebase went wrong as the diff as suddenly become unreadable. Could you close this PR and open a new one from your branch? Don't hesitate to tag me on it.<|||||>Hi @terrenceedmonds I don't think you ever opened a new clean PR from your branch (might need a new rebase first since it's been a while). You had done all the work for this issue so it would be great to merge it!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,639 | closed | Add head_mask/decoder_head_mask for TF BART models | This PR adds `head_mask` and `decoder_head_mask` input arguments for TF BART-based models. The full list of models is as follows:
* **TFBART**
* **TFMBart**
* **TFBlenderbot**
* **TFBlenderbotSmall**
* **TFMarian**
* **TFPegasus**
This PR can be deemed as a TF counterpart to the PR #9569.
<hr>
**Further information:**
* I've added `test_headmasking` functionality to `tests/test_modeling_tf_common.py`
* **_TODO_**: Add a test (as a part of `test_headmasking`) to verify that we can get a gradient back for importance score computation. I am not so familiar with TensorFlow; therefore, I am not fully sure about the TF equivalent of the following:
```
outputs = model(**inputs, return_dict=True)
output = sum(t.sum() for t in outputs[0])
output = output.sum()
output.backward()
```
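For reference, a rough TF counterpart of the snippet above could use `tf.GradientTape`; this is an untested sketch, with `model` and `inputs` being the same test objects as in the PyTorch version:
```python
import tensorflow as tf

with tf.GradientTape() as tape:
    outputs = model(**inputs, return_dict=True)
    # sum all elements of the first output so there is a scalar to differentiate
    loss = tf.reduce_sum(outputs[0])
grads = tape.gradient(loss, model.trainable_variables)
```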
<hr>
Reviewer: @patrickvonplaten
| 01-16-2021 22:55:31 | 01-16-2021 22:55:31 | @stancld, thanks so much for tackling this! I think it would be a great addition if we could add a `test_headmasking` method for TF as well.
I think it might be better if we don't try to have the exact test as in PyTorch. For now it should be enough to just leave out all gradient-related statements, like `.backward()`, `requires_grad(...)` in TF. The attentions output should still be 0 accordingly. <|||||>Hey @patrickvonplaten, I hope this PR is ready for review. There's newly implemented `test_headmasking` method which follows the method from PyTorch testing except for the gradient-related statements as you pointed above.
It seems all checks have passed after rebasing this PR.<|||||>Also, @jplu it would be great if you could take a quick look if this is all serving compatible (I don't see a reason why it wouldn't be though)<|||||>Just done further tests on your PR and the changes are not graph compliant and the following slow tests are failing:
- test_saved_model_creation
- test_saved_model_creation_extended
One of the reasons is what @sgugger raised.<|||||>@jplu @sgugger Thank you very much for your comments and suggested solution. I'll try to fix these issues and send a new commit!<|||||>Hi @jplu, could you, please, review the changes in the code I've done to say whether assertions are done more appropriately now? :)
I've been also struggling to run (on my local) those four slow tests you mentioned last time, but I'm gonna have a look at that at the weekend if we're afraid of not passing.<|||||>I confirm that the assertions are done more appropriately now! Those four tests are among the most important one for the TF code base (they are run in slow mode because unfortunately they take some time to be executed).
If you need some help to make them pass, I will be happy to.<|||||>> @jplu I removed `global_rng` and leave it as it was before changes. Hopefully, now this PR is ready for a final review
Do these tests finally pass?
* test_saved_model_with_hidden_states_output
* test_saved_model_with_attentions_output
* test_saved_model_creation
* test_saved_model_creation_extended
If yes, I will approve the PR :)<|||||>@jplu I ran these 4 aforementioned tests for BART and all those tests passed.<|||||>Merging, thanks a lot for your efforts @stancld!! |
transformers | 9,638 | closed | ValueError: Expected floating point type, got <dtype: 'int32'> for TFGPT2LMHeadModel | Hi,
I am trying to serve a gpt2 model online using Google cloud. But when creating model environment I get the error:
```
Create Version failed. Bad model detected with error: "Failed to load model: Unexpected error when loading the model: in user code:\n\n /tmp/custom_lib/transformers/modeling_tf_gpt2.py:551 call *\n transformer_outputs = self.transformer(inputs, **kwargs)\n /tmp/custom_lib/transformers/modeling_tf_gpt2.py:321 call *\n inputs_embeds = self.wte(input_ids, mode=\"embedding\")\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py:758 __call__ **\n self._maybe_build(inputs)\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py:2131 _maybe_build\n self.build(input_shapes)\n /tmp/custom_lib/transformers/modeling_tf_utils.py:1522 build\n \"weight\", shape=[self.vocab_size, self.hidden_size], initializer=get_initializer(self.initializer_range)\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py:447 add_weight\n caching_device=caching_device)\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/training/tracking/base.py:743 _add_variable_with_custom_getter\n **kwargs_for_getter)\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/base_layer_utils.py:141 make_variable\n shape=variable_shape if variable_shape else None)\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/variables.py:259 __call__\n return cls._variable_v1_call(*args, **kwargs)\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/variables.py:220 _variable_v1_call\n shape=shape)\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/variables.py:198 <lambda>\n previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/variable_scope.py:2598 default_variable_creator\n shape=shape)\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/variables.py:263 __call__\n return super(VariableMetaclass, cls).__call__(*args, **kwargs)\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/resource_variable_ops.py:1434 __init__\n distribute_strategy=distribute_strategy)\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/resource_variable_ops.py:1567 _init_from_args\n initial_value() if init_from_fn else initial_value,\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/base_layer_utils.py:121 <lambda>\n init_val = lambda: initializer(shape, dtype=dtype)\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/init_ops_v2.py:445 __call__\n dtype = _assert_float_dtype(dtype)\n /usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/init_ops_v2.py:1037 _assert_float_dtype\n raise ValueError(\"Expected floating point type, got %s.\" % dtype)\n\n ValueError: Expected floating point type, got <dtype: 'int32'>.\n (Error code: 0)"
```
I saved the model from a fine-tuned GPT2 model:
```python
tf_model = TFGPT2LMHeadModel.from_pretrained("checkpoint-8000", from_pt=True)
tf_model.save_pretrained("tensorflow-model")
model_class, tokenizer_class = TFGPT2LMHeadModel, GPT2Tokenizer
tokenizer = tokenizer_class.from_pretrained('tensorflow-model')
model = model_class.from_pretrained('tensorflow-model')
```
This model works on my local machine using ``model.generate()``. But I get the error above when creating model environment on GCP.
I don't know if this is a Google Cloud issue or a transformers issue. However, when looking at the model created by the line
```python
model = model_class.from_pretrained('tensorflow-model')
```
I can see that the ``model.dtype`` and ``model.variable_dtype`` is float32.
Can anyone help explain why Google Cloud thinks this model expects a ``float32`` input and not ``int32``? Can I change anything in the model to ensure the correct input dtype?
Thanks | 01-16-2021 20:58:03 | 01-16-2021 20:58:03 | @jplu do you have any experience with creating model environments on GCP with TensorFlow?<|||||>Hello!
Can you first share your local env and the env you are using on your GCP machine? <|||||>Hi @jplu
Thanks for the response.
For my local environment I'm on Python 3.6, tensorflow 2.4, transformers 2.8.0. Is this what you meant by local environment?
On GCP, here is my model version settings:
```
Model sentence_generator
Model location gs://gpt2-checkpoint/tensorflow-model/
Creation time Jan 16, 2021, 3:51:39 PM
Last use time
Python version 3.7
Runtime version 2.2
Custom code and dependencies gs://gpt2-checkpoint/staging-dist/generator_package-0.6.tar.gz
Prediction class generator_class_tf.GeneratorClass
Machine type Single core CPU
Auto scaling minimum nodes 1
```
Below is my ``setup.py`` file:
```python
from setuptools import setup
setup(
name="generator_package",
version="0.6",
include_package_data=True,
scripts=["generator_class_tf.py"],
install_requires=['transformers==2.8.0']
)
```
<|||||>Yes, this is what I meant. I see that you are using an old version of transformers, can you update to the last release please.<|||||>> Yes, this is what I meant. I see that you are using an old version of transformers, can you update to the last release please.
With the latest version of transformers I get this error:
> Create Version failed. Bad model detected with error: "Failed to load model: Unexpected error when loading the model: problem in generator_class_tf - DistributionNotFound: The 'tqdm>=4.27' distribution was not found and is required by this application, \nTry: pip install transformers -U or pip install -e '.[dev]' if you're working with git master (Error code: 0)"
And to the best of my knowledge, I don't think we can ``pip install`` anything in the Google Cloud prediction environment.
<|||||>You can just replace your `setup.py` file with
```
from setuptools import setup
setup(
name="generator_package",
version="0.6",
include_package_data=True,
scripts=["generator_class_tf.py"],
install_requires=['transformers==4.2.1']
)
```<|||||>> You can just replace your `setup.py` file with
>
> ```
> from setuptools import setup
>
>
> setup(
> name="generator_package",
> version="0.6",
> include_package_data=True,
> scripts=["generator_class_tf.py"],
> install_requires=['transformers==4.2.1']
> )
> ```
Thanks but this is how I had my ``setup.py`` when I got the error above relating to ``tqdm``. <|||||>Ok, then did you try:
```
from setuptools import setup
setup(
name="generator_package",
version="0.6",
include_package_data=True,
scripts=["generator_class_tf.py"],
install_requires=['transformers==4.2.1', 'tqdm>=4.27']
)
```<|||||>Yes, I have. still get the same error :(
> Create Version failed. Bad model detected with error: "Failed to load model: Unexpected error when loading the model: problem in generator_class_tf - DistributionNotFound: The 'tqdm>=4.27' distribution was not found and is required by this application, \nTry: pip install transformers -U or pip install -e '.[dev]' if you're working with git master (Error code: 0)"<|||||>According to this page, it should work, https://cloud.google.com/ai-platform/training/docs/packaging-trainer
So the problem might come from somewhere else. I suppose you can run your model as expected locally?<|||||>Yes, that's correct. It works without problems on my own machine.
This looks to be a problem with GCP. I'll lodge this as a bug on their issue tracker and update here on any progress made.
<|||||>I'm seeing the same issue for my deployment, using transformers 4.5.0. But it seems indeed to be a GCP issue.
Have you seen any comments from Google on this @farazk86 ?
Thanks in advance!<|||||>> I'm seeing the same issue for my deployment, using transformers 4.5.0. But it seems indeed to be a GCP issue.
> Have you seen any comments from Google on this @farazk86 ?
>
> Thanks in advance!
Yes, this is a GCP issue.
Unfortunately, I gave up in the end. As the issue I created on Google issue tracker also did not help, they were asking for me to provide information from methods within transformers library that I was not familiar with or knew about. It was too much of a hassle - I just gave up.<|||||>Alright, thanks for the quick reply! That is too bad, I will keep trying myself, and let you know if I find a solution.
Just for my curiosity, did you instead take any alternative approach (than using Custom Prediction Routines) in order to serve a Transformer model on Google?
I read some people had success with using Docker containers on the "Cloud Run" API.<|||||>No, I just gave up on cloud entirely. And you are right, I had also determined that Docker works as other people on stackoverflow had achieved to deploy using Docker. But I had no experience with docker and just moved on to other projects.
If you do manage to figure it out, then yes please do let me know even though now the $300 introductory credits are also expired :)<|||||>I will keep you informed on my progress for sure.
Could you provide the link to the issue tracker / bug report you submitted with GCP?
If that is a public page that is.
Thanks in advance!<|||||>sure, I'll try to find it. <|||||>> I will keep you informed on my progress for sure.
>
> Could you provide the link to the issue tracker / bug report you submitted with GCP?
> If that is a public page that is.
>
> Thanks in advance!
Here are both my submitted issues, based on my multiple tries at making this work: https://issuetracker.google.com/issues/177648341 and https://issuetracker.google.com/issues/178236762<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,637 | closed | XLMRobertaTokenizerFast producing wrong tokenized output | ## Environment info
- `transformers` version: 4.2.1
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101 (False)
- Tensorflow version (GPU?): 2.4.0 (False)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@mfuntowicz
@stefan-it
## Information
Model I am using is XLM-RoBERTa.
The problem arises when using XLMRobertaTokenizerFast tokenizer.
The task I am working on is token classification. In order to align the labels with the sub-word units, I have used the code snippet provided here: https://huggingface.co/transformers/custom_datasets.html [ Fine-tuning with custom datasets/Token Classification with W-NUT Emerging Entities ].
When trying to align the labels with the encodings, it throws: "ValueError: NumPy boolean array indexing assignment cannot assign X input values to the Y output values where the mask is true."
This behavior is due to tokenizing punctuation. Moreover, the comma ( ' , ' ) gets tokenized into '__' and ',' (having offset values (0,1)). Similar behavior happens with the dot. However, some other punctuation marks produce only one token (e.g. ' : ' -> '__:').
In addition, the offset_mapping value for ':' is different in different sentences, resulting in either a (0,0) or a (0,3) tuple. The problem is that padding tokens have an offset tuple with values (0,0), which are excluded from alignment, but in this case I have to preserve the punctuation since it is a POS tagging problem.
## To reproduce
```
print("Token: {} Offset_mapping: {}".format(train_encodings[338].tokens[67], train_encodings[338].offsets[67]))
# Token: β... Offset_mapping: (0, 0)
print("Token: {} Offset_mapping: {}".format(train_encodings[20].tokens[2], train_encodings[20].offsets[2]))
# Token: β... Offset_mapping: (0, 3)
```
Moreover, although I fixed this issue by writing my own masks, I found a new issue: the blank space which denotes the start of a word is tokenized as a separate token instead of being attached to the starting sub-token.
## To reproduce
```
tokenizer = XLMRobertaTokenizerFast.from_pretrained("xlm-roberta-base")
model= XLMRobertaForTokenClassification.from_pretrained("xlm-roberta-base")
s = "Je Δesto kritizirao vladu ."
print(tokenizer.tokenize(s))
# output: ['βJe', 'βΔesto', 'βkrit', 'izira', 'o', 'β', 'vlad', 'u', 'β', '.']
```
## Expected behavior
1. Punctuation marks should be consistently tokenized and have offset values different from those of padding tokens.
2. The first sub-word token should include the preceding blank space everywhere.
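As a possible workaround for the alignment problem described above, the offsets can be avoided entirely by aligning labels through `word_ids()` of the fast tokenizer; a minimal sketch with made-up POS tag ids:
```python
from transformers import XLMRobertaTokenizerFast

tokenizer = XLMRobertaTokenizerFast.from_pretrained("xlm-roberta-base")

words = ["Je", "Δesto", "kritizirao", "vladu", "."]
word_labels = [0, 1, 2, 3, 4]  # made-up POS tag ids, one per word

encoding = tokenizer(words, is_split_into_words=True, return_tensors="pt")
aligned_labels = [
    -100 if word_id is None else word_labels[word_id]  # ignore special tokens, sub-tokens inherit the word label
    for word_id in encoding.word_ids(batch_index=0)
]
print(aligned_labels)
```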
| 01-16-2021 20:40:59 | 01-16-2021 20:40:59 | There are two different subjects being discussed here:
- The tokenization behavior: how punctuation is tokenized, or how the blank spaces are separated from the next token. This is expected behavior and just describes the way this tokenizer (XLMRoberta) works.
- The offset mappings, which as described here are wrong in some cases. These need to be fixed, and I am going to describe a bit more the problem and how we are going to solve it below.
### Cause
This bug in offset mapping actually affects **all** the fast tokenizers converted from sentencepiece. During the pre-tokenization step, we first split everything on whitespaces (`WhitespaceSplit` pre-tokenizer), and in a second step, we add the `β` character in front of each word (`Metaspace` pre-tokenizer). This process is accurate in terms of tokenization, but it makes the offset tracking very difficult:
- All the whitespaces get removed, so we won't have any token pointing back to them.
- We add a "new" `β` in front of each word, so these tokens actually point back to the beginning of each word: the first character.
### How to fix it
The initial idea of using the `WhitespaceSplit` in a first step was simply to deduplicate the whitespaces but since it leads to loss of information we'll replace it with the following process:
- Normalization step that replaces groups of whitespaces with a single one, effectively mapping the single whitespace to the group in the original input.
- Pretokenization step: we just keep the `Metaspace` pre-tokenizer.
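In `tokenizers` terms, the replacement process described in the two bullets above might look roughly like the following sketch (parameter names can differ between `tokenizers` versions):
```python
from tokenizers import Regex, normalizers, pre_tokenizers

# Collapse whitespace runs during normalization, so offsets still map back to the original text
normalizer = normalizers.Sequence([normalizers.Replace(Regex(r"\s+"), " ")])
# Keep only the Metaspace pre-tokenizer; no WhitespaceSplit step anymore
pre_tokenizer = pre_tokenizers.Metaspace(replacement="β", add_prefix_space=True)
```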
In order to fix this we need to:
1. Update all the `tokenizer.json` files on the hub, and it will be compatible with any version of `transformers` since we introduced these fast tokenizers (3.5.0+).
2. Update all the conversion steps in `transformers` to avoid creating more buggy tokenizers.<|||||>### List of updated tokenizers:
- https://huggingface.co/google/pegasus-xsum
- https://huggingface.co/google/reformer-crime-and-punishment
### These can't be fixed this way:
The following will need a new version of `transformers` with a bugfix in `tokenizers`. We'll need to find a way to rely on the new `tokenizer.json` version only on versions of `transformers` that include this bugfix, as it would break all the previous ones.
- https://huggingface.co/albert-base-v1
- https://huggingface.co/albert-base-v2
- https://huggingface.co/albert-large-v1
- https://huggingface.co/albert-large-v2
- https://huggingface.co/albert-xlarge-v1
- https://huggingface.co/albert-xlarge-v2
- https://huggingface.co/albert-xxlarge-v1
- https://huggingface.co/albert-xxlarge-v2
- https://huggingface.co/camembert-base
- https://huggingface.co/facebook/mbart-large-en-ro
- https://huggingface.co/moussaKam/barthez
- https://huggingface.co/moussaKam/barthez-orangesum-title
- https://huggingface.co/moussaKam/mbarthez
- https://huggingface.co/t5-11b
- https://huggingface.co/t5-3b
- https://huggingface.co/t5-base
- https://huggingface.co/t5-large
- https://huggingface.co/t5-small
- https://huggingface.co/xlm-roberta-base
- https://huggingface.co/xlm-roberta-large
- https://huggingface.co/xlm-roberta-large-finetuned-conll02-dutch
- https://huggingface.co/xlm-roberta-large-finetuned-conll02-spanish
- https://huggingface.co/xlm-roberta-large-finetuned-conll03-english
- https://huggingface.co/xlm-roberta-large-finetuned-conll03-german
- https://huggingface.co/xlnet-base-cased
- https://huggingface.co/xlnet-large-cased<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Unstale<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Any update on this one?<|||||>Bump<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,636 | closed | key error when use trainer to fine_tuning a dataset | ## Environment info
- `transformers` version: 4.1.1
- Platform: Linux-3.10.0-693.5.2.el7.x86_64-x86_64-with-centos-7.4.1708-Core
- Python version: 3.7.9
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
@sgugger
## Information
Model I am using (Bert, XLNet ...):bert-base-uncased
The problem arises when using:
* the official example scripts: (give details below)
I am fine-tuning a text classification model on dbpedia_14, and I followed this colab: https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/text_classification.ipynb#scrollTo=TlqNaB8jIrJW
The tasks I am working on is:
* an official GLUE/SQUaD task: (give the name)
dataset: dbpedia_14
## To reproduce
Steps to reproduce the behavior:
Error:
```
File "train.py", line 69, in <module>
trainer.train()
File "/home/pliu3/projects/anaconda3/envs/calibration/lib/python3.7/site-packages/transformers/trainer.py", line 784, in train
for step, inputs in enumerate(epoch_iterator):
File "/home/pliu3/projects/anaconda3/envs/calibration/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
data = self._next_data()
File "/home/pliu3/projects/anaconda3/envs/calibration/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 475, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/home/pliu3/projects/anaconda3/envs/calibration/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/pliu3/projects/anaconda3/envs/calibration/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
KeyError: 2
```
code
```python
dataset_name = 'sem_eval_2014_task_1'
num_labels_size = 3
batch_size = 4
model_checkpoint = 'bert-base-uncased'
number_train_epoch = 5
def tokenize(batch):
return tokenizer(batch['premise'], batch['hypothesis'], truncation=True, )
def compute_metrics(pred):
labels = pred.label_ids
preds = pred.predictions.argmax(-1)
precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average='micro')
acc = accuracy_score(labels, preds)
return {
'accuracy': acc,
'f1': f1,
'precision': precision,
'recall': recall
}
model = BertForSequenceClassification.from_pretrained(model_checkpoint, num_labels=num_labels_size)
tokenizer = BertTokenizerFast.from_pretrained(model_checkpoint, use_fast=True)
train_dataset = load_dataset(dataset_name, split='train')
test_dataset = load_dataset(dataset_name, split='test')
train_encoded_dataset = train_dataset.map(tokenize, batched=True)
test_encoded_dataset = test_dataset.map(tokenize, batched=True)
args = TrainingArguments(
output_dir='./results',
evaluation_strategy="epoch",
learning_rate=2e-5,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
num_train_epochs=number_train_epoch,
weight_decay=0.01,
do_predict=True
)
trainer = Trainer(
model=model,
args=args,
compute_metrics=compute_metrics,
train_dataset=train_encoded_dataset,
eval_dataset=test_encoded_dataset,
tokenizer=tokenizer
)
trainer.train()
trainer.evaluate()
```
| 01-16-2021 14:14:39 | 01-16-2021 14:14:39 | I found this issue is caused by this description: `Here we have the loss since we passed along labels` (url: https://huggingface.co/transformers/main_classes/output.html). So if the dataset object's columns do not include 'label' (or if the column which represents the label has another name, like 'entailment_judgment'), the trainer can not recognize this column.<|||||>So I added some lines like this:
```python
def change_transformers_dataset_2_right_format(dataset, label_name):
    return dataset.map(lambda example: {'label': example[label_name]}, remove_columns=[label_name])
```
It works fine.
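An equivalent approach, assuming a recent `datasets` version with `rename_column`, is to rename the task-specific label column directly:
```python
from datasets import load_dataset

dataset = load_dataset("sem_eval_2014_task_1", split="train")
# Trainer expects a column named "label", so rename the task-specific one
dataset = dataset.rename_column("entailment_judgment", "label")
```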
<|||||>I found that for a lot of datasets uploaded by users, the column which represents the 'label' has another name!
Maybe it is better to unify a standard, either on the dataset side or on the trainer side.<|||||>And I can not visit your forum. I do not know why, and this is weird. Can you please help me? Thanks a lot!
Please do not post the same issues several times.<|||||>Ok, thanks for your reply. And do you know why I can not visit your forum? Is there some special setting in your firewall for your forum? @sgugger <|||||>I'm not aware of any firewall problem, you're the first user reporting an issue to connect to them, to be honest.<|||||>I have this same problem.
```py
from transformers import TrainingArguments, Trainer
import numpy as np
import evaluate
training_args = TrainingArguments(output_dir="test_trainer", evaluation_strategy="epoch")
metric = evaluate.load("accuracy")
def compute_metrics(eval_pred):
logits, labels = eval_pred
predictions = np.argmax(logits, axis=-1)
return metric.compute(predictions=predictions, references=labels)
def train(model, train, eval, **kwargs):
print('Training model...')
trainer = Trainer(
model=model,
train_dataset=train, #Dataset to train it with
eval_dataset=eval, #Dataset to test it with
compute_metrics=compute_metrics,
**kwargs
)
trainer.train()
trainer.save_model('adkai')
print('Trained!')
model.train(True)
train(model, {
'#print Hello World':'stdout.write("Hello World\n")',
'#print hello World':'stdout.write("hello World\n")',
'# print Hello world':'stdout.write("Hello world\n")',
'#print hello world':'stdout.write("hello world\n")',
'#print Hello World!':'stdout.write("Hello World!\n")',
'# print hello World!':'stdout.write("hello World!\n")',
'#print goodbye World!':'stdout.write("goodbye World!\n")',
'# write Hello World':'stdout.write("Hello World\n")',
'#write hello World':'stdout.write("hello World\n")',
'# write Hello world':'stdout.write("Hello world\n")',
'#write hello world':'stdout.write("hello world\n")',
'# write Hello World!':'stdout.write("Hello World!\n")',
'set x = 5\n#print x':'stdout.write(x, "\n")',
'set x = "Go home"\n#output x':'stdout.write(x, "\n")',
'set xyz = "Hello"# output xyz':'stdout.write(xyz, "\n")',
'set Whatever = "nothing"\n#output Whatever':'stdout.write(Whatever, "\n")',
'#output Whatever':'stdout.write("Whatever\n")',
'':'',
'':''
}, {
'#write Hello world!':'stdout.write("Hello world!\n")',
'':'',
'# output Hello World!':'stdout.write("Hello World!\n")',
})
```
(only partial code)
Please help, this is the error
```py
Traceback (most recent call last):
File "main.py", line 18, in <module>
train.train(model, {
File "/home/runner/AdkAI/train.py", line 23, in train
trainer.train()
File "/home/runner/AdkAI/venv/lib/python3.8/site-packages/transformers/trainer.py", line 1500, in train
return inner_training_loop(
File "/home/runner/AdkAI/venv/lib/python3.8/site-packages/transformers/trainer.py", line 1716, in _inner_training_loop
for step, inputs in enumerate(epoch_iterator):
File "/home/runner/AdkAI/venv/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 681, in __next__
data = self._next_data()
File "/home/runner/AdkAI/venv/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 721, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/home/runner/AdkAI/venv/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/runner/AdkAI/venv/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
KeyError: 2
``` |
transformers | 9,635 | closed | Weights used for Masked LM predictions | I wanted to get masked word predictions for a few bert-base models. I am converting the pytorch models to the original bert tf format using [this](https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/convert_bert_pytorch_checkpoint_to_original_tf.py) by modifying the code to load the BertForPreTraining state_dict. I am unsure how cls/predictions/decoder in the snippet below is used to make the masked predictions, especially since the original BERT codebase does not have this layer. Is it used, or can I safely disregard it to obtain predictions?

| 01-16-2021 12:32:46 | 01-16-2021 12:32:46 | The `cls/predictions/decoder` is the linear layer that is used to project the output of the transformer to the vocabulary logits. This layer is *tied* to the input embeddings: it has the same weights.
I believe the original BERT codebase doesn't have this layer because it re-uses the input embedding layer, instead of instantiating another one. We do this too in the TF implementation.
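For illustration, weight tying in plain PyTorch looks roughly like this (a minimal sketch; the module names are illustrative, not the exact BERT attribute paths):
```python
import torch.nn as nn

vocab_size, hidden_size = 30522, 768
word_embeddings = nn.Embedding(vocab_size, hidden_size)   # input embeddings
decoder = nn.Linear(hidden_size, vocab_size, bias=False)  # "cls/predictions/decoder"-style projection
decoder.weight = word_embeddings.weight                   # tied: both names refer to the same tensor
```
So the decoder adds no new weight matrix of its own (at most a bias); it simply re-uses the embedding matrix to map hidden states back to vocabulary logits.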
You can, therefore, safely disregard this layer if the implementation you're using uses the input embeddings' weights to project the output of the transformer to the vocabulary logits. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,634 | closed | Add separated decoder_head_mask for T5 Models | ### Fix issue #9632
<hr>
This PR separates `head_mask` and `decoder_head_mask` for T5 models, and thus enables to specify different head masks for an encoder and decoder.
**Description:**
- Replace a single input argument `head_mask` with a separated couple `head_mask` and `decoder_head_mask` for the T5 models: `T5Model, T5ForConditionalGeneration, TFT5Model, TFT5ForConditionalGeneration`
- Slightly change the order of input arguments to follow the convention of first 7 arguments introduced in PR #9569 for BART-based models, i.e. `"input_ids", "attention_mask", "decoder_input_ids", "decoder_attention_mask", "head_mask", "decoder_head_mask", "encoder_outputs"`
- Currently, the updated PyTorch T5 model does not pass `test_forward_signature` in `tests/test_modeling_common.py`. This problem will go away once PR #9569 is merged.
Reviewer: @patrickvonplaten (the code is ready for review) | 01-16-2021 11:29:07 | 01-16-2021 11:29:07 | Great, that looks nice! Let's first merge https://github.com/huggingface/transformers/pull/9569 and then rebase this PR so that it passes all tests :-) <|||||>Thanks for fixing this!
I have one note/question: This seems to only apply to self-attention heads, not heads in the cross attention module, right? Is this intentional?<|||||>@talkhaldi Thank you very much for pointing this out. It seems you're right and this is not intentional by myself. It'll be fixed in another commit.<|||||>Hey @patrickvonplaten and @LysandreJik. I've added some `FutureWarning` into the code to handle cases when only `head_mask` is passed by a user. Also, I fixed a cross-attention issue noted by @talkhaldi.
I believe the PR is now ready for review, as all the checks have passed after the rebasing. |
transformers | 9,633 | closed | Wrong offsets_mapping in T5TokenizerFast | ## Environment info
- `transformers` version: 4.2.1
- Platform: Linux-4.9.0-14-amd64-x86_64-with-debian-9.13
- Python version: 3.6.10
- PyTorch version (GPU?): 1.7.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help @patrickvonplaten, @mfuntowicz
## Information
Model I am using: T5
## To reproduce
See comments in the code snippet.
```python
from transformers import T5TokenizerFast
def test_offset_mapping():
"""This test fails and therefore we know that there is a bug in offset_mapping mechanism.
We try to tokenize the sentence 'This is a test sentence' and notice to issues:
1. The tokenizer tokenizes it to ['This', 'is', '', 'a', 'test', 'sentence']
which means that it has redundant empty string in position 2.
2. The offset mapping maps to ['This', 'is', 'a', 'a', 'test', 'sentence']
replacing the empty string with redundant 'a'.
"""
tokenizer = T5TokenizerFast.from_pretrained('google/t5-v1_1-base')
s = "This is a test sentence"
tokenized = tokenizer(s, return_offsets_mapping=True)
decoded_tokens, tokens_from_offset_mapping = [], []
for token_index, offset_mapping in enumerate(tokenized['offset_mapping']):
decoded_token = tokenizer.decode(tokenized['input_ids'][token_index])
if decoded_token != tokenizer.eos_token:
decoded_tokens.append(decoded_token)
tokens_from_offset_mapping.append(s[offset_mapping[0]:offset_mapping[1]])
error_msg = f"Wrong offset mapping for '{s}'! \n" \
f"Maps to: {tokens_from_offset_mapping}\n" \
f"Instead of: {decoded_tokens}"
assert decoded_tokens == tokens_from_offset_mapping, error_msg
if __name__ == "__main__":
test_offset_mapping()
```
## Expected behavior
```
AssertionError: Wrong offset mapping for 'This is a test sentence'!
Maps to: ['This', 'is', 'a', 'a', 'test', 'sentence']
Instead of: ['This', 'is', '', 'a', 'test', 'sentence']
```
| 01-16-2021 11:03:55 | 01-16-2021 11:03:55 | @patrickvonplaten @n1t0 do you have any advice on this? The T5 tokenizer tokenizes the sentence as follows:
```
['βThis', 'βis', 'β', 'a', 'βtest', 'βsentence']
```
Unfortunately the offset mapping points to both 'β' and 'a' being at `(8, 9)`, as the following suggests:
```
'offset_mapping': [(0, 4), (5, 7), (8, 9), (8, 9), (10, 14), (15, 23), (0, 0)]
^---- & ^---- here
```
How should one map this encoding back to the initial sequence?<|||||>@patrickvonplaten @n1t0 - did you have a chance to look at this?
Thanks!<|||||>Hi @zorikg! Thank you for reporting this issue. This is related to https://github.com/huggingface/transformers/issues/9637 concerning the offset mappings bug.
The fix for this bug is tricky to deploy, but we are working on it, and I expect it to be available in the coming weeks.<|||||>Thanks @n1t0, I wondered if there have been any progress on this? Any expectation for when the fix will be avail? Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@zorikg Using the last few versions of `transformers`, you can instantiate your tokenizer as follows:
```python
tokenizer = T5TokenizerFast.from_pretrained('google/t5-v1_1-base', from_slow=True)
```
This will force the conversion from the slow tokenizer, thus using the fixed version.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I am getting some difference between these 2 tokenizers is this solved? |
transformers | 9,632 | closed | Missing argument: decoder_head_mask for T5 | # π Feature request
Despite the encoder-decoder architecture of T5, the models use a single `head_mask` argument instead of having separate `head_mask` and `decoder_head_mask` arguments, as will be the case for BART-based models once PR #9569 is merged.
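To make the request concrete, this is the kind of call the change would enable; a sketch of the proposed interface, not the final API (the tensors are placeholders, and each mask has shape `(num_layers, num_heads)` with `0` meaning the head is masked):
```python
# hypothetical usage once decoder_head_mask is supported for T5
outputs = model(
    input_ids=input_ids,
    decoder_input_ids=decoder_input_ids,
    head_mask=encoder_head_mask,          # masks heads in the encoder
    decoder_head_mask=decoder_head_mask,  # masks heads in the decoder
)
```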
## Your contribution
I'm going to send a PR soon. (I'll try to prepare this feature both for PyTorch and TensorFlow in two separate PRs.)
## Reviewer
@patrickvonplaten | 01-16-2021 09:37:03 | 01-16-2021 09:37:03 | Solved in #9634. |
transformers | 9,631 | closed | ImportError: cannot import name 'Dataset' | ## Environment info
- `transformers` version: 4.2.1, datasets : 1.2.1
- Platform: Linux AI-LAB 5.3.0-42-generic #34~18.04.1-Ubuntu SMP Fri Feb 28 13:42:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
- Python version: 3.6.8
- PyTorch version (GPU?): 1.7.1
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
Anyone.
## Information
I did a complete install of Transformers + datasets via pip.
The problem arises when using:
When I try to import the library:
from transformers import AutoTokenizer, AutoModel
## Error like this:
```
ImportError Traceback (most recent call last)
<ipython-input-2-c6bea6c01ce9> in <module>
----> 1 from transformers import AutoTokenizer, AutoModel
/usr/local/lib/python3.6/dist-packages/transformers/__init__.py in __getattr__(self, name)
2096 if name == "__version__":
2097 return __version__
-> 2098 return super().__getattr__(name)
2099
2100 sys.modules[__name__] = _LazyModule(__name__, _import_structure)
/usr/local/lib/python3.6/dist-packages/transformers/file_utils.py in __getattr__(self, name)
1463 elif name in self._class_to_module.keys():
1464 module = self._get_module(self._class_to_module[name])
-> 1465 value = getattr(module, name)
1466 else:
1467 raise AttributeError(f"module {self.__name__} has no attribute {name}")
/usr/local/lib/python3.6/dist-packages/transformers/file_utils.py in __getattr__(self, name)
1462 value = self._get_module(name)
1463 elif name in self._class_to_module.keys():
-> 1464 module = self._get_module(self._class_to_module[name])
1465 value = getattr(module, name)
1466 else:
/usr/local/lib/python3.6/dist-packages/transformers/models/auto/__init__.py in _get_module(self, module_name)
158
159 def _get_module(self, module_name: str):
--> 160 return importlib.import_module("." + module_name, self.__name__)
161
162 sys.modules[__name__] = _LazyModule(__name__, _import_structure)
/usr/lib/python3.6/importlib/__init__.py in import_module(name, package)
124 break
125 level += 1
--> 126 return _bootstrap._gcd_import(name[level:], package, level)
127
128
/usr/local/lib/python3.6/dist-packages/transformers/models/auto/modeling_auto.py in <module>
152 from ..pegasus.modeling_pegasus import PegasusForConditionalGeneration, PegasusModel
153 from ..prophetnet.modeling_prophetnet import ProphetNetForCausalLM, ProphetNetForConditionalGeneration, ProphetNetModel
--> 154 from ..rag.modeling_rag import ( # noqa: F401 - need to import all RagModels to be in globals() function
155 RagModel,
156 RagSequenceForGeneration,
/usr/local/lib/python3.6/dist-packages/transformers/models/rag/modeling_rag.py in <module>
27 from ...utils import logging
28 from .configuration_rag import RagConfig
---> 29 from .retrieval_rag import RagRetriever
30
31
/usr/local/lib/python3.6/dist-packages/transformers/models/rag/retrieval_rag.py in <module>
37
38 if is_datasets_available():
---> 39 from datasets import Dataset, load_dataset, load_from_disk
40
41 if is_faiss_available():
ImportError: cannot import name 'Dataset'
```
Thanks
Nakarin | 01-16-2021 04:00:47 | 01-16-2021 04:00:47 | How did you install Transformers and Datasets? Could you post your `pip list` here?<|||||>> How did you install Transformers and Datasets? Could you post your `pip list` here?
transformers (4.2.1)
datasets (1.2.1)
Sorry for the late reply.<|||||>Hmmm, I cannot seem to be able to reproduce your issue. When I install transformers and datasets, I can import `Dataset`, and I don't get a crash like you have.
Can you open a colab notebook that reproduces it?<|||||>I tried downgrading datasets to version 1.2.0 and importing transformers, and there was no problem. Then I upgraded datasets to 1.2.1 again, tried to use transformers, and it works like a charm.
```python
>>> import transformers
>>> import datasets
>>> import simpletransformers
>>> transformers.__version__
'4.2.0'
>>> datasets.__version__
'1.2.1'
>>>
```
Thank you for your time.
Nakarin<|||||>Fantastic, great that you got it to work! Closing this for now, feel free to re-open if you face the issue again.<|||||>This issue still persists even after trying the above methods. <|||||>ImportError: cannot import name 'DatasetInfo' from 'huggingface_hub.hf_api'
I ran into the same issue when trying to import the keyBERT package, and my `pip list` is as follows:
keybert : 0.5.0
transformers : 4.15.0
<|||||>Could you try upgrading `huggingface_hub` to the latest version?
```
pip install -U huggingface_hub
```<|||||>Upgrading both Transformer and huggingface_hub worked for me. <br>
```
pip install -U transformers
pip install -U huggingface_hub
```
<|||||>I am facing a similar issue, but it is not fixed by any of the above. Specifically, I am using this space: https://huggingface.co/spaces/ncoop57/cardify/tree/main
And it encounters the following runtime error:
```
/home/user/.local/lib/python3.8/site-packages/paramiko/transport.py:236: CryptographyDeprecationWarning: Blowfish has been deprecated
"class": algorithms.Blowfish,
Traceback (most recent call last):
File "app.py", line 5, in <module>
from autocards.autocards import Autocards
File "/home/user/.local/lib/python3.8/site-packages/autocards/autocards.py", line 1, in <module>
from autocards.pipelines import qg_pipeline
File "/home/user/.local/lib/python3.8/site-packages/autocards/pipelines.py", line 10, in <module>
from transformers import(
File "/home/user/.local/lib/python3.8/site-packages/transformers/__init__.py", line 2709, in __getattr__
return super().__getattr__(name)
File "/home/user/.local/lib/python3.8/site-packages/transformers/file_utils.py", line 1822, in __getattr__
value = getattr(module, name)
File "/home/user/.local/lib/python3.8/site-packages/transformers/file_utils.py", line 1821, in __getattr__
module = self._get_module(self._class_to_module[name])
File "/home/user/.local/lib/python3.8/site-packages/transformers/models/auto/__init__.py", line 202, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "/usr/local/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/home/user/.local/lib/python3.8/site-packages/transformers/models/auto/modeling_auto.py", line 221, in <module>
from ..rag.modeling_rag import ( # noqa: F401 - need to import all RagModels to be in globals() function
File "/home/user/.local/lib/python3.8/site-packages/transformers/models/rag/modeling_rag.py", line 29, in <module>
from .retrieval_rag import RagRetriever
File "/home/user/.local/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py", line 32, in <module>
from datasets import Dataset, load_dataset, load_from_disk
File "/home/user/.local/lib/python3.8/site-packages/datasets/__init__.py", line 37, in <module>
from .arrow_dataset import Dataset, concatenate_datasets
File "/home/user/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 61, in <module>
from .arrow_writer import ArrowWriter, OptimizedTypedSequence
File "/home/user/.local/lib/python3.8/site-packages/datasets/arrow_writer.py", line 26, in <module>
from .features import Features, Image, Value
File "/home/user/.local/lib/python3.8/site-packages/datasets/features/__init__.py", line 17, in <module>
from .audio import Audio
File "/home/user/.local/lib/python3.8/site-packages/datasets/features/audio.py", line 12, in <module>
from ..utils.streaming_download_manager import xopen
File "/home/user/.local/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py", line 19, in <module>
from ..filesystems import COMPRESSION_FILESYSTEMS
File "/home/user/.local/lib/python3.8/site-packages/datasets/filesystems/__init__.py", line 7, in <module>
from .hffilesystem import HfFileSystem
File "/home/user/.local/lib/python3.8/site-packages/datasets/filesystems/hffilesystem.py", line 6, in <module>
from huggingface_hub.hf_api import DatasetInfo
ImportError: cannot import name 'DatasetInfo' from 'huggingface_hub.hf_api' (/home/user/.local/lib/python3.8/site-packages/huggingface_hub/hf_api.py)
```
I am using the following versions:
`huggingface_hub == 0.6.0` and `transformers == 4.19.1`
Any help would be greatly appreciated!
@LysandreJik <|||||>Fixed my issue by using `huggingface_hub == 0.5.0`<|||||>@LysandreJik I also need help right now..
I have also encountered the error "ImportError: cannot import name 'DatasetInfo' from 'huggingface_hub.hf_api'".
And no matter what versions of the transformers package and huggingface_hub package I have installed or updated to or degraded to, this error still exists...
After several rounds of uninstalling and reinstalling, the reported error altered from "ImportError: cannot import name 'DatasetInfo' from 'huggingface_hub.hf_api (C:\Users\Admin\anaconda3\lib\site-packages\huggingface_hub\hf_api.py)" to "ImportError: cannot import name 'model_info' from 'huggingface_hub' (C:\Users\Admin\anaconda3\lib\site-packages\huggingface_hub\__init__.py)"...
Below are the reported error information. I am now using transformers==4.20.1 and huggingface_hub==0.8.1
> ---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_28460/2252195315.py in <module>
1 import torch
----> 2 from transformers import AutoTokenizer, AutoModelForSequenceClassification
3
4 checkpoint = r"C:\Users\Admin\Desktop\nlp\bert-tiny"
5 tokenizer = AutoTokenizer.from_pretrained(checkpoint)
~\anaconda3\lib\site-packages\transformers\__init__.py in __getattr__(self, name)
2939 Wav2Vec2Config,
2940 Wav2Vec2CTCTokenizer,
-> 2941 Wav2Vec2FeatureExtractor,
2942 Wav2Vec2Processor,
2943 Wav2Vec2Tokenizer,
~\anaconda3\lib\site-packages\transformers\file_utils.py in __getattr__(self, name)
~\anaconda3\lib\site-packages\transformers\file_utils.py in __getattr__(self, name)
~\anaconda3\lib\site-packages\transformers\models\auto\__init__.py in _get_module(self, module_name)
208 MODEL_MAPPING,
209 MODEL_WITH_LM_HEAD_MAPPING,
--> 210 AutoModel,
211 AutoModelForAudioClassification,
212 AutoModelForAudioFrameClassification,
~\anaconda3\lib\importlib\__init__.py in import_module(name, package)
125 break
126 level += 1
--> 127 return _bootstrap._gcd_import(name[level:], package, level)
128
129
~\anaconda3\lib\site-packages\transformers\models\auto\modeling_auto.py in <module>
19
20 from ...utils import logging
---> 21 from .auto_factory import _BaseAutoModelClass, _LazyAutoMapping, auto_class_update
22 from .configuration_auto import CONFIG_MAPPING_NAMES
23
~\anaconda3\lib\site-packages\transformers\models\auto\auto_factory.py in <module>
18
19 from ...configuration_utils import PretrainedConfig
---> 20 from ...dynamic_module_utils import get_class_from_dynamic_module
21 from ...utils import copy_func, logging
22 from .configuration_auto import AutoConfig, model_type_to_module_name, replace_list_option_in_docstrings
~\anaconda3\lib\site-packages\transformers\dynamic_module_utils.py in <module>
23 from typing import Dict, Optional, Union
24
---> 25 from huggingface_hub import HfFolder, model_info
26
27 from .utils import (
ImportError: cannot import name 'model_info' from 'huggingface_hub' (C:\Users\Admin\anaconda3\lib\site-packages\huggingface_hub\__init__.py) |
transformers | 9,630 | closed | key error when use trainer to fine_tuning a dataset | ## Environment info
- `transformers` version: 4.1.1
- Platform: Linux-3.10.0-693.5.2.el7.x86_64-x86_64-with-centos-7.4.1708-Core
- Python version: 3.7.9
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten @TevenLeScao
Blenderbot: @patrickvonplaten
Bart: @patrickvonplaten
Marian: @patrickvonplaten
Pegasus: @patrickvonplaten
mBART: @patrickvonplaten
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
RAG: @patrickvonplaten, @lhoestq
FSMT: @stas00
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
ray/raytune: @richardliaw @amogkam
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
@sgugger
## Information
Model I am using (Bert, XLNet ...):bert-base-uncased
The problem arises when using:
* [ ] the official example scripts: (give details below)
I am fine-tuning a text classification model on dbpedia_14, and I followed this colab: https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/text_classification.ipynb#scrollTo=TlqNaB8jIrJW
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
dataset: dbpedia_14
## To reproduce
Steps to reproduce the behavior:
The error:
```
  File "train.py", line 69, in <module>
    trainer.train()
  File "/home/pliu3/projects/anaconda3/envs/calibration/lib/python3.7/site-packages/transformers/trainer.py", line 784, in train
    for step, inputs in enumerate(epoch_iterator):
  File "/home/pliu3/projects/anaconda3/envs/calibration/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
    data = self._next_data()
  File "/home/pliu3/projects/anaconda3/envs/calibration/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 475, in _next_data
    data = self._dataset_fetcher.fetch(index) # may raise StopIteration
  File "/home/pliu3/projects/anaconda3/envs/calibration/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/pliu3/projects/anaconda3/envs/calibration/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
KeyError: 2
```
The code:
```python
dataset_name = 'sem_eval_2014_task_1'
num_labels_size = 3
batch_size = 4
model_checkpoint = 'bert-base-uncased'
number_train_epoch = 5
def tokenize(batch):
return tokenizer(batch['premise'], batch['hypothesis'], truncation=True, )
def compute_metrics(pred):
labels = pred.label_ids
preds = pred.predictions.argmax(-1)
precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average='micro')
acc = accuracy_score(labels, preds)
return {
'accuracy': acc,
'f1': f1,
'precision': precision,
'recall': recall
}
model = BertForSequenceClassification.from_pretrained(model_checkpoint, num_labels=num_labels_size)
tokenizer = BertTokenizerFast.from_pretrained(model_checkpoint, use_fast=True)
train_dataset = load_dataset(dataset_name, split='train')
test_dataset = load_dataset(dataset_name, split='test')
train_encoded_dataset = train_dataset.map(tokenize, batched=True)
test_encoded_dataset = test_dataset.map(tokenize, batched=True)
args = TrainingArguments(
output_dir='./results',
evaluation_strategy="epoch",
learning_rate=2e-5,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
num_train_epochs=number_train_epoch,
weight_decay=0.01,
do_predict=True
)
trainer = Trainer(
model=model,
args=args,
compute_metrics=compute_metrics,
train_dataset=train_encoded_dataset,
eval_dataset=test_encoded_dataset,
tokenizer=tokenizer
)
trainer.train()
trainer.evaluate()
```
| 01-16-2021 02:49:40 | 01-16-2021 02:49:40 | Duplicate of #9636 |
transformers | 9,629 | closed | [Question] How to use threads for huggingface transformers | I'm trying to run a hugging face model, mode exactly **"cardiffnlp/twitter-roberta-base-sentiment"** on threads. But at the same time, I want just one single instance of it because it's really costly in terms of time.
In other words, I have multiple CSV files (several thousand) and each of them has around 20k-30k lines and I want that each line from all of them to be executed by the huggingface model, as you probably can imagine already this is the reason why I don't want to instantiate a model for each thread (where each thread would be used just to read one line and write it in the database).
The problem with my approach is that when I'm running the code is going to give me an error from huggingface model.
> RuntimeError: Already borrowed
Could any of you help me to understand how cand I fix it?
***Hugging face model:***
class EmotionDetection(object):
def __init__(self, model_name="cardiffnlp/twitter-roberta-base-sentiment"):
self.model_name = model_name
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
self.classifier = TextClassificationPipeline(model=model, tokenizer=tokenizer, return_all_scores=True,
task="sentiment-analysis", device=0)
def get_emotion_by_label(self, label: str):
if label == "LABEL_0":
return "negative"
elif label == "LABEL_1":
return "neutral"
elif label == "LABEL_2":
return "positive"
else:
print("SOMETHING IS WRONG")
return ""
def get_emotion(self, phrase):
results = self.classifier(phrase)
res = dict()
for result in results:
for emotion in result:
res.update({self.get_emotion_by_label(emotion['label']): emotion['score']})
return res
***My code for generating database:***
class GenerateDbThread(object):
def __init__(self, text: str, created_at: datetime.datetime, get_emotion_function, cursor, table_name):
self.table_name = table_name
self.text = text
self.created_at = created_at
emotions = get_emotion_function(self.text)
self.pos = emotions['positive']
self.neg = emotions['negative']
self.neu = emotions['neutral']
self.cursor = cursor
def execute(self):
query = f"INSERT INTO {self.table_name}(date, positive, negative, neutral, tweet) " \
f"VALUES (datetime('{str(self.created_at)}'),{self.pos},{self.neg},{self.neu}, '{self.text}')"
self.cursor.execute(query)
self.cursor.commit()
def get_all_data_files_path(data_dir: str):
return [f for f in os.listdir(data_dir) if os.path.isfile(os.path.join(data_dir, f))]
def run(file: str, table_name: str):
df = pd.read_csv(os.path.join('data', file), delimiter=',')
for index, row in df.iterrows():
text = row['tweet']
language = row['language']
split_data = row['created_at'].split(" ")
GTB_Time = f"{split_data[2]} {split_data[3]} {split_data[4]}"
created_at = datetime.datetime.strptime(row['created_at'], f"%Y-%m-%d %H:%M:%S {GTB_Time}")
if language == "en":
GenerateDbThread(text, created_at, emotion_detector.get_emotion, cursor, table_name)
def init_db(db_name, table_name):
conn = sqlite3.connect(db_name)
cursor = conn.cursor()
cursor.execute(f"""
CREATE TABLE IF NOT EXISTS {table_name} (
uid INTEGER PRIMARY KEY AUTOINCREMENT,
date DATETIME NOT NULL,
positive REAL NOT NULL,
negative REAL NOT NULL,
neutral REAL NOT NULL,
text TEXT NOT NULL
)""")
cursor.execute(f"CREATE INDEX IF NOT EXISTS ix_tweets_index ON {table_name}(uid)")
cursor.close()
ex = ThreadPoolExecutor(max_workers=10)
files = get_all_data_files_path('data')
init_db("DB_NAME.db", "TABLE_NAME")
emotion_detector = EmotionDetection()
conn = sqlite3.connect("DB_NAME.db")
cursor = conn.cursor()
pbar = tqdm(total=len(files))
futures = [ex.submit(run, file, "TABLE_NAME") for file in files]
for future in futures:
res = future.result()
pbar.update(1)
pbar.close()
| 01-16-2021 00:26:18 | 01-16-2021 00:26:18 | Hi, thank you for opening an issue! Could you put the full stack-trace?
I guess this comes from the tokenizer, rather than the model as we've already seen this error in [tokenizers](https://github.com/huggingface/tokenizers/issues/537).
As a means of debugging, can you let me know what happens if you change this line:
```py
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
to
```py
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
```<|||||>When use_fast is set to false I no longer see the borrow exception but randomly experience this one instead. This appears to be an issue in the qa pipeline code rather than tokenizer code though. I can test the tokenizer in isolation if that'd be helpful.
```
/usr/local/lib/python3.6/dist-packages/transformers/pipelines/question_answering.py in <listcomp>(.0)
360 ),
361 }
--> 362 for s, e, score in zip(starts, ends, scores)
363 ]
364 else:
KeyError: 132
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,628 | closed | Issue with TrainingArguments docs. | Hi Team,
This is a minor issue but on this [link](https://huggingface.co/transformers/main_classes/trainer.html#transformers.TrainingArguments), the default optimizer mentioned is Adam in the docs. However, the `Trainer` uses AdamW by default.
This is slightly misleading.
Thanks,
Gunjan
| 01-16-2021 00:21:03 | 01-16-2021 00:21:03 | Hi! Do you want to open a PR to fix it? Thanks! |
transformers | 9,627 | closed | Passing in custom BartForConditionalGeneration model as generator to RagSequenceForGeneration | ## Environment info
- `transformers` version: 4.2.1
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101 (False)
- Tensorflow version (GPU?): 2.4.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten, @lhoestq
## Information
Model I am using:
`RagSequenceForGeneration` using pretrained `facebook/rag-sequence-nq` with a custom generator initialized with `BartForConditionalGeneration`
The problem arises when using:
In the docs (https://huggingface.co/transformers/model_doc/rag.html) it is stated that a `generator` can be used when initializing `RagSequenceForGeneration`. When using a custom pretrained BART model as the `generator`, I get the error:
`ModuleAttributeError: 'BartForConditionalGeneration' object has no attribute 'to_dict'`
To troubleshoot, I initialized my generator using pretrained `facebook/bart-base` and built the RAG model the following way:
```
from transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration, BartForConditionalGeneration
model_name = 'facebook/bart-base'
generator = BartForConditionalGeneration.from_pretrained(model_name)
tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
retriever = RagRetriever.from_pretrained("facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True)
model = RagSequenceForGeneration.from_pretrained("facebook/rag-sequence-nq", retriever=retriever, generator=generator)
```
However, I get the same `ModuleAttributeError`.
The tasks I am working on is:
I want to initialize a `RagSequenceForGeneration` with a custom generator.
## To reproduce
Steps to reproduce the behavior:
1. Run the code block above and the error is outputted: `ModuleAttributeError: 'BartForConditionalGeneration' object has no attribute 'to_dict'`
## Expected behavior
Expected model to initialize with a custom generator, as described in the docs (https://huggingface.co/transformers/model_doc/rag.html).
| 01-15-2021 22:41:47 | 01-15-2021 22:41:47 | Solved by using `RagConfig` and initializing it with a `DPRQuestionEncoder` and the custom `BartForConditionalGeneration` generator configs. Passed the `RagConfig`, question encoder, generator, and retriever to `RagModel` to initialize the model.<|||||>Could you provide complete sample code showing how to do it? I'm stuck on how to do the config initialization for a DPR + customized BartForCondGen. Thanks!
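<|||||>A minimal, untested sketch of what that initialization could look like (the exact keyword arguments may differ across `transformers` versions, so treat it as a starting point rather than a verified recipe):
```python
from transformers import (
    BartForConditionalGeneration,
    DPRQuestionEncoder,
    RagConfig,
    RagRetriever,
    RagSequenceForGeneration,
)

question_encoder = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
generator = BartForConditionalGeneration.from_pretrained("facebook/bart-base")  # swap in your custom BART here

rag_config = RagConfig.from_question_encoder_generator_configs(
    question_encoder.config, generator.config
)
retriever = RagRetriever.from_pretrained(
    "facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True
)
model = RagSequenceForGeneration(
    config=rag_config,
    question_encoder=question_encoder,
    generator=generator,
    retriever=retriever,
)
``` |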
transformers | 9,626 | closed | Fix: torch.utils.checkpoint.checkpoint attribute error. | # What does this PR do?
Fixes #9617, along with the other `modeling_<modelname>.py` files where the import statements are missing.
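In essence, the missing piece is the explicit submodule import: `torch.utils.checkpoint` is a submodule that is not guaranteed to be reachable as an attribute after a plain `import torch`, which is what triggers the attribute error. A sketch of the pattern being added (not the literal diff):
```python
import torch
import torch.utils.checkpoint  # without this, torch.utils.checkpoint.checkpoint(...) can raise an AttributeError

layer = torch.nn.Linear(16, 16)
hidden_states = torch.randn(2, 16, requires_grad=True)

# gradient checkpointing recomputes `layer` in the backward pass instead of storing its activations
out = torch.utils.checkpoint.checkpoint(layer, hidden_states)
out.sum().backward()
```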
## Who can review?
@LysandreJik, @patrickvonplaten | 01-15-2021 21:12:37 | 01-15-2021 21:12:37 | |
transformers | 9,625 | closed | Weighted Loss in BertForTokenClassification | # π Feature request
The cross entropy loss that BertForTokenClassification models can currently compute is unweighted. The option to have different weights for different classes can be useful in several use cases, including but not restricted to the problem of unbalanced output classes.
## Motivation
Right now, although BertForTokenClassification models can compute cross entropy loss during the forward pass, there is no explicit way of weighting the different classes, which seems like a useful feature, as sequence tagging tasks often have unbalanced classes. I ran into the above problem while solving an academic problem. I looked at the code for the BertForTokenClassification model, and found that it should be quite easy to implement.
## Your contribution
Not sure if I can help, coz I am not really familiar with the codebase. However, I can point out how and where to add code to implement the weighted loss very easily | 01-15-2021 19:57:39 | 01-15-2021 19:57:39 | In PyTorch, [`nn.CrossEntropyLoss`](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html) has an optional `weight` parameter which you can specify. This should be a 1D Tensor assigning a weight to each of the classes.
So if you want `BertForTokenClassification` with a weighted cross entropy loss, you can simply replace [this line](https://github.com/huggingface/transformers/blob/c60e0e1ee45f4bf1017736b146c51729f120bb83/src/transformers/models/bert/modeling_bert.py#L1685) by a weighted loss. For example, you can define it as follows (I just copied the relevant code from `modeling_bert.py` and slightly adapted the cross entropy loss):
```
class BertForTokenClassification(BertPreTrainedModel):
_keys_to_ignore_on_load_unexpected = [r"pooler"]
def __init__(self, config):
super().__init__(config)
self.num_labels = config.num_labels
self.bert = BertModel(config, add_pooling_layer=False)
self.dropout = nn.Dropout(config.hidden_dropout_prob)
self.classifier = nn.Linear(config.hidden_size, config.num_labels)
self.init_weights()
@add_start_docstrings_to_model_forward(BERT_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
@add_code_sample_docstrings(
tokenizer_class=_TOKENIZER_FOR_DOC,
checkpoint="bert-base-uncased",
output_type=TokenClassifierOutput,
config_class=_CONFIG_FOR_DOC,
)
def forward(
self,
input_ids=None,
attention_mask=None,
token_type_ids=None,
position_ids=None,
head_mask=None,
inputs_embeds=None,
labels=None,
output_attentions=None,
output_hidden_states=None,
return_dict=None,
):
r"""
labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
Labels for computing the token classification loss. Indices should be in ``[0, ..., config.num_labels -
1]``.
"""
return_dict = return_dict if return_dict is not None else self.config.use_return_dict
outputs = self.bert(
input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
position_ids=position_ids,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
)
sequence_output = outputs[0]
sequence_output = self.dropout(sequence_output)
logits = self.classifier(sequence_output)
loss = None
if labels is not None:
weights = torch.tensor([0.6, 0.3, 0.1])
            loss_fct = CrossEntropyLoss(weight=weights)
# Only keep active parts of the loss
if attention_mask is not None:
active_loss = attention_mask.view(-1) == 1
active_logits = logits.view(-1, self.num_labels)
active_labels = torch.where(
active_loss, labels.view(-1), torch.tensor(loss_fct.ignore_index).type_as(labels)
)
loss = loss_fct(active_logits, active_labels)
else:
loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
if not return_dict:
output = (logits,) + outputs[2:]
return ((loss,) + output) if loss is not None else output
return TokenClassifierOutput(
loss=loss,
logits=logits,
hidden_states=outputs.hidden_states,
attentions=outputs.attentions,
)
```
<|||||>@NielsRogge
You are right. I had done exactly this in my local (huggingface) transformers codebase.
Worked as expected.
I think this would be a useful feature if huggingface models come with it.<|||||>Oh ok so you want this to be an added feature. Not sure if this is possible. @LysandreJik what do you think?<|||||>Hi, thanks for opening an issue! The losses in the models are not made to be completely customizable, but to be the most common loss used in most cases; we favor simplicity here.
This is because defining your custom loss in a PyTorch model is very simple: when you do not pass the labels to your model, then you retrieve the model logits. You can then define a loss (and customize it as you wish!) and compute its value using these logits and your labels.
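In code, that pattern is roughly the following (a sketch; `model`, the batch tensors and `class_weights` are placeholders you would supply yourself):
```python
import torch

# forward pass WITHOUT labels -> the model returns logits instead of computing its own loss
outputs = model(input_ids=input_ids, attention_mask=attention_mask)
logits = outputs.logits  # (batch_size, seq_len, num_labels) for token classification

class_weights = torch.tensor([0.6, 0.3, 0.1])  # one weight per class
loss_fct = torch.nn.CrossEntropyLoss(weight=class_weights)
loss = loss_fct(logits.view(-1, logits.size(-1)), labels.view(-1))
loss.backward()
```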
However, this is not the first time this feature has been requested, and we could probably come up with an implementation that wouldn't complexify the code-base too much. If we see more of this request we'll take a deeper look at how to implement it.
Here's a past issue discussing the same/similar: https://github.com/huggingface/transformers/issues/7024
cc @sgugger @patrickvonplaten<|||||>I'm not sure whether it's a good idea to add such functionality to `modeling_bert.py` - there are too many possibilities. I think it could very well be added to the examples though.<|||||>Yes. Unfortunately, there are too many of these possibilities.
Most users who are familiar with PyTorch can anyway make necessary changes to their local codebase quite easily.
Thanks.<|||||>Also see [this example in the documentation](https://huggingface.co/transformers/main_classes/trainer.html) (scroll a tiny bit down to the first example showing a subclass of `Trainer`) on how to change just the loss computation while using a model with `Trainer`.<|||||>Hi everyone,
I am a student and therefore not yet very familiar with the way issues report work on git, so I aplogize in advance if this is not the proper place to post this message.
I've stumbled onto an error when using the aforementioned method for designing a custom loss function.
My code is the following
```
config = AutoConfig.from_pretrained("bert-base-cased", num_labels=2, finetuning_task="SST-2")
# Test with modified trainer for weighted CrossEntropyLoss
model = AutoModelForSequenceClassification.from_pretrained(
"dmis-lab/biobert-base-cased-v1.1",
from_tf=False,
config=config)
from torch import FloatTensor
classDistribution_raw = [97, 3]
classDistribution = [0.8, 0.2]
normedWeights = [1 - (x / sum(classDistribution)) for x in classDistribution]
normedWeights = FloatTensor(normedWeights).cuda()
from torch.nn import CrossEntropyLoss
class MyTrainer(Trainer):
def compute_loss(self, model, inputs, return_outputs=False):
if "labels" in inputs:
labels = inputs.pop("labels")
outputs = model(**inputs)
logits = outputs.logits
loss_function = CrossEntropyLoss(weight = normedWeights)
if self.args.past_index >= 0:
self._past = outputs[self.args.past_index]
if labels is not None:
loss = loss_function(logits, labels)
else:
# We don't use .loss here since the model may return tuples instead of ModelOutput.
loss = outputs["loss"] if isinstance(outputs, dict) else outputs[0]
return (loss, outputs) if return_outputs else loss
trainer = MyTrainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
compute_metrics=compute_metrics_fn,
tokenizer=tokenizer,
)
```
And when I try to train the model using trainer.train(), I get the following error:
'NoneType' object has no attribute 'detach'
There is probably something wrong with the way I customized the loss function but I can't find where.
Best regards,
Arthur
<|||||>> Hi, thanks for opening an issue! The losses in the models are not made to be completely customizable, but to be the most common loss used in most cases; we favor simplicity here.
>
> This is because defining your custom loss in a PyTorch model is very simple: when you do not pass the labels to your model, then you retrieve the model logits. You can then define a loss (and customize it as you wish!) and compute its value using these logits and your labels.
>
> However, this is not the first time this feature has been requested, and we could probably come up with an implementation that wouldn't complexify the code-base too much. If we see more of this request we'll take a deeper look at how to implement it.
>
> Here's a past issue discussing the same/similar: #7024
>
> cc @sgugger @patrickvonplaten
@sgugger @LysandreJik @NielsRogge
I want to put a +1 on this feature request. Datasets with imbalanced datasets would benefit a lot from custom loss functions. And this shouldn't be a complex add (should just be one more kwarg?). |
transformers | 9,624 | closed | [wip] [deepspeed] AdamW is now supported by default | This PR syncs with changes in DeepSpeed since `deepspeed==0.3.10` and can only be merged when `deepspeed==0.3.11` or higher is released. So it may sit here for a while aggregating adjustments
* [x] AdamW is now supported by default so we can remove the now redundant config options and comments https://github.com/microsoft/DeepSpeed/pull/670
| 01-15-2021 18:52:31 | 01-15-2021 18:52:31 | |
transformers | 9,623 | closed | wandb breaks tests - importlib.util.find_spec-related under forked process | This has to do with a forked process environment:
I was running:
```
pytest -sv examples/seq2seq/test_finetune_trainer.py -k deepspeed
```
and was getting:
```
stderr: Traceback (most recent call last):
stderr: File "/mnt/nvme1/code/huggingface/transformers-ds-optim-fix/examples/seq2seq/finetune_trainer.py", line 367, in <module>
stderr: main()
stderr: File "/mnt/nvme1/code/huggingface/transformers-ds-optim-fix/examples/seq2seq/finetune_trainer.py", line 297, in main
stderr: train_result = trainer.train(
stderr: File "/mnt/nvme1/code/huggingface/transformers-ds-optim-fix/src/transformers/trainer.py", line 998, in train
stderr: self.control = self.callback_handler.on_train_end(self.args, self.state, self.control)
stderr: File "/mnt/nvme1/code/huggingface/transformers-ds-optim-fix/src/transformers/trainer_callback.py", line 342, in on_train_end
stderr: return self.call_event("on_train_end", args, state, control)
stderr: File "/mnt/nvme1/code/huggingface/transformers-ds-optim-fix/src/transformers/trainer_callback.py", line 377, in call_event
result = getattr(callback, event)(
stderr: File "/mnt/nvme1/code/huggingface/transformers-ds-optim-fix/src/transformers/integrations.py", line 565, in on_train_end
100%|ββββββββββ| 1/1 [00:00<00:00, 1.88it/s] self._wandb.log({})
stderr: File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/wandb/sdk/lib/preinit.py", line 37, in preinit_wrapper
stderr: raise wandb.Error("You must call wandb.init() before {}()".format(name))
stderr: wandb.errors.error.Error: You must call wandb.init() before wandb.log()
stderr: 2021-01-15 09:38:11 | INFO | wandb.sdk.internal.internal | Internal process exited
```
I tried to remove `wandb` and while `pip uninstall wandb` worked, wandb left code behind and I had to remove it manually:
```
rm -r /home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/wandb
```
But the problem continued without having any wandb installed:
```
stderr: Traceback (most recent call last):
stderr: File "/mnt/nvme1/code/huggingface/transformers-ds-optim-fix/examples/seq2seq/finetune_trainer.py", line 367, in <module>
stderr: main()
stderr: File "/mnt/nvme1/code/huggingface/transformers-ds-optim-fix/examples/seq2seq/finetune_trainer.py", line 282, in main
stderr: trainer = Seq2SeqTrainer(
stderr: File "/mnt/nvme1/code/huggingface/transformers-ds-optim-fix/src/transformers/trainer.py", line 304, in __init__
stderr: self.callback_handler = CallbackHandler(
stderr: File "/mnt/nvme1/code/huggingface/transformers-ds-optim-fix/src/transformers/trainer_callback.py", line 282, in __init__
stderr: self.add_callback(cb)
stderr: File "/mnt/nvme1/code/huggingface/transformers-ds-optim-fix/src/transformers/trainer_callback.py", line 299, in add_callback
stderr: cb = callback() if isinstance(callback, type) else callback
stderr: File "/mnt/nvme1/code/huggingface/transformers-ds-optim-fix/src/transformers/integrations.py", line 488, in __init__
stderr: wandb.ensure_configured()
stderr: AttributeError: module 'wandb' has no attribute 'ensure_configured'
```
The strange `stderr` prefix is from our multiprocess testing setup, which requires special handling since pytest can't handle DDP and the like on its own.
The only way I was able to overcome this is with:
```
export WANDB_DISABLED=true
```
I'm on `transformers` master. | 01-15-2021 17:50:40 | 01-15-2021 17:50:40 | @sgugger, I think the culprit for the 2nd error, when I uninstalled wandb is:
```
def is_wandb_available():
if os.getenv("WANDB_DISABLED"):
return False
return importlib.util.find_spec("wandb") is not None
```
as it returns `True`, when it shouldn't since:
```
ls -l /home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/wandb
ls: cannot access '/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/wandb': No such file or directory
```
You can see it with any ddp test, so you don't need to install deepspeed or fairscale to see it, e.g. this fails too:
```
pytest -sv examples/seq2seq/test_finetune_trainer.py::TestFinetuneTrainer::test_finetune_trainer_ddp
```
But a single unforked process test works just fine:
```
pytest -sv examples/seq2seq/test_finetune_trainer.py::TestFinetuneTrainer::test_finetune_trainer_dp
```
-----------------
and then there is another problem which occurs with `wandb` installed. See the first error in OP.
<|||||>But with `wandb` installed the 1st error I get with DDP too, w/o needing to fork a process in tests:
```
python -m torch.distributed.launch --nproc_per_node=2 ./finetune_trainer.py --model_name_or_path sshleifer/tiny-mbart --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_train --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size 4 --sortish_sampler --src_lang en_XX --task translation --tgt_lang ro_RO --val_max_target_length 128 --warmup_steps 500 --n_train 500
[...]
[INFO|integrations.py:521] 2021-01-16 20:47:40,853 >> Automatic Weights & Biases logging enabled, to disable set os.environ["WANDB_DISABLED"] = "true"
wandb: Currently logged in as: stason (use `wandb login --relogin` to force relogin)
2021-01-16 20:47:42.440849: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
wandb: Tracking run with wandb version 0.10.14
wandb: Syncing run output_dir
wandb: βοΈ View project at https://wandb.ai/stason/huggingface
wandb: π View run at https://wandb.ai/stason/huggingface/runs/82q4zxt2
wandb: Run data is saved locally in /mnt/nvme1/code/huggingface/transformers-master/examples/seq2seq/wandb/run-20210116_204741-82q4zxt2
wandb: Run `wandb offline` to turn off syncing.
0%| | 0/63 [00:00<?, ?it/s]
[...]
Training completed. Do not forget to share your model on huggingface.co/models =)
Traceback (most recent call last):
File "./finetune_trainer.py", line 367, in <module>
main()
File "./finetune_trainer.py", line 297, in main
train_result = trainer.train(
File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/trainer.py", line 998, in train
self.control = self.callback_handler.on_train_end(self.args, self.state, self.control)
File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/trainer_callback.py", line 342, in on_train_end
return self.call_event("on_train_end", args, state, control)
File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/trainer_callback.py", line 377, in call_event
result = getattr(callback, event)(
File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/integrations.py", line 565, in on_train_end
self._wandb.log({})
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/wandb/sdk/lib/preinit.py", line 38, in preinit_wrapper
raise wandb.Error("You must call wandb.init() before {}()".format(name))
wandb.errors.error.Error: You must call wandb.init() before wandb.log()
2021-01-16 20:47:46 | INFO | wandb.sdk.internal.internal | Internal process exited
```
<|||||>I'm not sure I understand your first error. Could you give us more details? Are you saying that `importlib.from_spec` finds some weird "wandb" module but only in a distributed setting? I don't have wandb installed so I can't reproduce this at all.
For the last error, pinging @borisdayma <|||||>I had a similar issue recently with python 3.8 but it worked with 3.7. It was due to a function from "importlib" which changed name. Is it the same?<|||||>@borisdayma, I have just installed python-3.7.9 and have the same issue there. Perhaps you had it working with python < 3.7.9?
The issue occurs with python-3.6.12 too.
@sgugger yes, the problem occurs only when there is DDP. If I drop `-m torch.distributed.launch` the problem goes away so it has to do with forking/multi-processes. If you remember there was an Issue where someone also had the problem of using some transformers models because they were importing apex at load time and then it was crushing under `torch.mp` - this is definitely a totally different issue, but it's related that it has to do with multiproc.
To reproduce:
```
pip install wandb
cd examples/seq2seq
python -m torch.distributed.launch --nproc_per_node=2 ./finetune_trainer.py --model_name_or_path sshleifer/tiny-mbart --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_train --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size 4 --sortish_sampler --src_lang en_XX --task translation --tgt_lang ro_RO --val_max_target_length 128 --warmup_steps 50 --n_train 50
```
which results in:
```
wandb.errors.error.Error: You must call wandb.init() before wandb.log()
```
If you then remove wand:
```
pip uninstall wandb -y
```
The 2nd error happens:
```
AttributeError: module 'wandb' has no attribute 'ensure_configured'
```
The full traces are in the OP.
Please let me know if you need any other info.
<|||||>I am running into the same issue with DDP @stas00 has https://github.com/huggingface/transformers/issues/9623#issuecomment-761731532
I believe this might be due to the call to `on_train_end`, which calls `wandb.log({})` on all processes, and not just on world process 0, while [`wandb.init` was called only on world process 0](https://github.com/huggingface/transformers/blob/897a24c869e2ac2ed44f17956f1009fd8f055f5e/src/transformers/integrations.py#L541-L564): https://github.com/huggingface/transformers/blob/897a24c869e2ac2ed44f17956f1009fd8f055f5e/src/transformers/integrations.py#L586<|||||>Interesting, can you check it solves the issue on your side @tristandeleu ?
If so I'll be happy to make a PR.<|||||>It does work for me when I replace it with
```python
if state.is_world_process_zero:
self._wandb.log({})
```
There is also another thing I ran into at the same time: `_log_model` was not initialized on processes other than world 0, making the following check fail because it didn't know `self._log_model`. Adding `self._log_model = False` to `__init__` solved the issue.
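Putting the two suggestions together, the callback would be edited roughly like this (a sketch of the proposed changes, not the exact upstream code):
```python
from transformers import TrainerCallback


class WandbCallback(TrainerCallback):
    def __init__(self):
        # ... rest of the original __init__ elided ...
        self._log_model = False  # initialize on every process, not only on world process 0

    def on_train_end(self, args, state, control, **kwargs):
        if self._wandb is None:
            return
        if state.is_world_process_zero:  # only the process that called wandb.init() may log
            self._wandb.log({})
```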
EDIT: This solves the issue with DDP though, I don't know if it also solves the original issue https://github.com/huggingface/transformers/issues/9623#issue-787077821<|||||>Don't hesitate to suggest a PR with your fix @tristandeleu <|||||>> It does work for me when I replace it with
>
> ```python
> if state.is_world_process_zero:
> self._wandb.log({})
> ```
>
> There is also another thing I ran into at the same time: `_log_model` was not initialized on processes other than world 0, making the following check fail because it didn't know `self._log_model`. Adding `self._log_model = False` to `__init__` solved the issue.
>
> EDIT: This solves the issue with DDP though, I don't know if it also solves the original issue [#9623 (comment)](https://github.com/huggingface/transformers/issues/9623#issue-787077821)
I had the same problem. and I just use > if state.is_world_process_zero: self._wandb.log({}), forget self._log_model = False. Thanks !!!<|||||>> It does work for me when I replace it with
>
> ```python
> if state.is_world_process_zero:
> self._wandb.log({})
> ```
>
> There is also another thing I ran into at the same time: `_log_model` was not initialized on processes other than world 0, making the following check fail because it didn't know `self._log_model`. Adding `self._log_model = False` to `__init__` solved the issue.
>
> EDIT: This solves the issue with DDP though, I don't know if it also solves the original issue [#9623 (comment)](https://github.com/huggingface/transformers/issues/9623#issue-787077821)
Even after revising this code, the program (with TPU) doesn't seem to stop at the end<|||||>@lkk12014402 can you confirm it still happens with the latest HF master branch?
If so do you have a reproducible example you could share?<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,622 | closed | [deepspeed] --gradient_accumulation_steps fix | This PR fixes the DeepSpeed integration to run `self.deepspeed.step()` instead of `optimizer.step()` and adds a test, since training was failing when `--gradient_accumulation_steps 2` was added.
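For context, the per-step update roughly becomes the following (an illustrative sketch, simplified from the Trainer loop rather than the actual diff):
```python
if self.deepspeed:
    # DeepSpeed owns the optimizer, gradient accumulation and clipping,
    # so its engine has to drive the update instead of the raw optimizer
    self.deepspeed.step()
else:
    self.optimizer.step()
    self.lr_scheduler.step()
    self.model.zero_grad()
```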
Thank you @jncasey for detecting this bug in https://github.com/microsoft/DeepSpeed/issues/671
@sgugger | 01-15-2021 17:45:03 | 01-15-2021 17:45:03 | |
transformers | 9,621 | closed | Remove duplicated extras["retrieval"] | The `extras["retrieval"]` is defined a few lines above as:
https://github.com/huggingface/transformers/blob/28b26013abea3a49afeb46d36993a568ec98f39e/setup.py#L217-L222
and then it seems to be overridden just below, which is probably why `faiss-cpu` ends up included even on Windows.
This PR removes the second assignment.
cc @LysandreJik @sgugger @stas00 | 01-15-2021 16:51:48 | 01-15-2021 16:51:48 | |
transformers | 9,620 | closed | SQuAD 2.0 metric not supported | Hello.
I'm trying to run the official `run_qa.py` code for SQuAD 2.0.
You have an open TODO here that is causing a bug: https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_qa.py#L436
I would like to know what the status of this TODO is, whether it is going to be updated, and whether there is a way around it.
This is the current code:
```python
current_dir = os.path.sep.join(os.path.join(__file__).split(os.path.sep)[:-1])
metric = load_metric(os.path.join(current_dir, "squad_v2_local") if data_args.version_2_with_negative else "squad")
```
I receive:
```
FileNotFoundError: Couldn't find file locally at .../squad_v2_local/squad_v2_local.py,
```
I've tried to change it to:
```python
metric = load_metric("squad_v2" if data_args.version_2_with_negative else "squad")
```
But this is the stacktrace I receive:
```
Traceback (most recent call last):
File "/data/users/yonatab/transformers_pip/QA/run_qa_val_more_valueable.py", line 557, in <module>
main()
File "/data/users/yonatab/transformers_pip/QA/run_qa_val_more_valueable.py", line 538, in main
results = trainer.evaluate()
File "/data/users/yonatab/transformers_pip/QA/trainer_qa.py", line 63, in evaluate
metrics = self.compute_metrics(eval_preds)
File "/data/users/yonatab/transformers_pip/QA/run_qa_val_more_valueable.py", line 499, in compute_metrics
return metric.compute(predictions=p.predictions, references=p.label_ids)
File "/data/users/yonatab/transformers_pip/trans_pip/lib/python3.6/site-packages/datasets/metric.py", line 398, in compute
output = self._compute(predictions=predictions, references=references, **kwargs)
File "/home/ec2-user/.cache/huggingface/modules/datasets_modules/metrics/squad_v2/7529efd518b03f775290694e7b797412cb2253e90b4f843af83cf7434cccb3a8/squad_v2.py", line 108, in _compute
exact_raw, f1_raw = get_raw_scores(dataset, predictions)
File "/home/ec2-user/.cache/huggingface/modules/datasets_modules/metrics/squad_v2/7529efd518b03f775290694e7b797412cb2253e90b4f843af83cf7434cccb3a8/evaluate.py", line 111, in get_raw_scores
gold_answers = [a["text"] for a in qa["answers"] if normalize_answer(a["text"])]
File "/home/ec2-user/.cache/huggingface/modules/datasets_modules/metrics/squad_v2/7529efd518b03f775290694e7b797412cb2253e90b4f843af83cf7434cccb3a8/evaluate.py", line 111, in <listcomp>
gold_answers = [a["text"] for a in qa["answers"] if normalize_answer(a["text"])]
TypeError: string indices must be integers
100%|██████████| 13/13 [00:05<00:00,  2.51it/s]
```
How can I solve it?
Thanks | 01-15-2021 16:28:59 | 01-15-2021 16:28:59 | @sgugger would know about this TODO; I think the fix has landed in `datasets`, right?<|||||>Yes, this should be fixed directly from `datasets` now, will update the script this afternoon. |
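In the meantime, for anyone hitting the `TypeError` above: the `squad_v2` metric expects predictions and references shaped roughly like the sketch below (field names are my reading of the `datasets` metric, so double-check against its docs):
```python
references = [{
    "id": "56ddde6b9a695914005b9628",
    "answers": {"text": ["Normans"], "answer_start": [4]},
}]
predictions = [{
    "id": "56ddde6b9a695914005b9628",
    "prediction_text": "Normans",
    "no_answer_probability": 0.0,  # required by squad_v2, not by squad
}]
metric.compute(predictions=predictions, references=references)
```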
transformers | 9,619 | closed | Train robertatokenizer failed due to pad token not found | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: Windows
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7? 3080 RTX
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
tokenizers: @mfuntowicz
Trainer: @sgugger
## Information
Model I am using (Bert, XLNet ...): Roberta
The problem arises when using:
* [x] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
My first step is to download some of the esperberto data from the sites mentioned in this tutorial https://huggingface.co/blog/how-to-train
Few issues
1. Regarding the tutorial, they make you train a ByteLevelBPETokenizer but this is never used in the training code. The training code isn't even in the tutorial π
2. I came across this https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb
It looks good except whatever ByteLevelBPETokenizer is never used in the training process so I tried to find a way to use it. I tried two approaches both result in the same outcome. I tried using the BPE and not ByteLevelBPETokenizer. I have no clue what is the best practice or why neither of them are working.
This is my code to do the tokenizer. You can uncomment whatever
```
#! pip install tokenizers
#%% Import Statements
from pathlib import Path
from transformers import RobertaTokenizer
from tokenizers import Tokenizer
from tokenizers.trainers import BpeTrainer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
# from tokenizers import ByteLevelBPETokenizer
# from tokenizers.implementations import ByteLevelBPETokenizer
from tokenizers.processors import BertProcessing
import os.path as osp
#%% Train Tokenizer
if (not osp.exists('models/BPEtokenizer.json')):
paths = [str(x) for x in Path("./eo_data/").glob("**/*.txt")]
# Initialize a tokenizer
# tokenizer = ByteLevelBPETokenizer()
# # Customize training
# tokenizer.train(files=paths, vocab_size=52000, min_frequency=3, special_tokens=[
# "<s>",
# "<pad>",
# "</s>",
# "<unk>",
# "<mask>"
# ])
tokenizer = Tokenizer(BPE())
trainer = BpeTrainer(vocab_size=52000,min_frequency=3, special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])
tokenizer.pre_tokenizer = Whitespace()
tokenizer.train(trainer, paths)
# Save files to disk
tokenizer.save('models/BPEtokenizer.json')
#%% Tokenize
tokenizer = Tokenizer.from_file('models/BPEtokenizer.json')
# tokenizer._tokenizer.post_processor = BertProcessing(
# ("</s>", tokenizer.token_to_id("</s>")),
# ("<s>", tokenizer.token_to_id("<s>")),
# )
# tokenizer.enable_truncation(max_length=512)
output = tokenizer.encode("Mi estas Julien.")
print(output.tokens)
print(output.ids)
# Encoding(num_tokens=7, ...)
# tokens: ['<s>', 'Mi', 'Δ estas', 'Δ Juli', 'en', '.', '</s>']
```
This is my training code:
```
import torch
from transformers import RobertaConfig
from transformers import RobertaTokenizerFast
from transformers import RobertaForMaskedLM
from transformers import LineByLineTextDataset
from transformers import DataCollatorForLanguageModeling
from pathlib import Path
from transformers import DataCollatorForLanguageModeling
from tokenizers import ByteLevelBPETokenizer
from transformers import PreTrainedTokenizerFast
from tokenizers import Tokenizer
# Tutorial from https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb#scrollTo=BzMqR-dzF4Ro
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# tokenizer = RobertaTokenizerFast.from_pretrained("./EsperBERTo", max_len=512)
# tokenizer = ByteLevelBPETokenizer("models/esperberto-vocab.json","models/esperberto-merges.txt") # ? This actually doesn't work. You will get an error saying tokenizer is not callable.
tokenizer = PreTrainedTokenizerFast(tokenizer_file='models/BPEtokenizer.json')
# tokenizer = Tokenizer.from_file('models/BPEtokenizer.json')
mlm=False
config = RobertaConfig(
vocab_size=52_000,
max_position_embeddings=514,
num_attention_heads=12,
num_hidden_layers=6,
type_vocab_size=1,
)
# Training from scratch
model = RobertaForMaskedLM(config=config)
model.num_parameters()
paths = [str(x) for x in Path("eo_data/").glob("**/*.txt")]
# Build the dataset
dataset = LineByLineTextDataset(tokenizer=tokenizer, file_path="eo_data/shuff-orig/eo/eo.txt",block_size=128)
# mlm = mask modeling language
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer, mlm=mlm, mlm_probability=0.15
)
from transformers import Trainer, TrainingArguments
training_args = TrainingArguments(
output_dir="models/EsperBERTo-small",
overwrite_output_dir=True,
num_train_epochs=1000,
per_gpu_train_batch_size=64,
save_steps=10_000,
save_total_limit=2,
)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=dataset)
trainer.train()
```
I keep getting the error `Asking to pad but the tokenizer does not have a padding token. Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`.`
Also I couldn't set mlm=True either. Do you have any good tutorials on how to train your own set of data using Roberta?
If anyone wants to pull my files you can grab them and the dataset here
https://1drv.ms/u/s!Apa0_j-AivqTpqNz7r0M3NNhCm2W_A?e=BMLvqv
If you guys resolve this then I'll update and post a public google colab
 | 01-15-2021 12:38:32 | 01-15-2021 12:38:32 | Roberta was trained on a causal language model objective, therefore the `LineByLineDataset` is not adapted to train it: it considers one line as one text, whereas the RoBERTa objective is to have several lines concatenated and separated by the sep token until the block size is reached, to avoid padding.
You need to use a different dataset for this. You should also check the new [`run_mlm` script](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py) that offers both options.<|||||>Hi, I had the same issue, here are the workarounds I used
Platform: Ubuntu 18
Python version: 3.7.9
PyTorch version (GPU): 1.7.1
cuda11
- save your tokenizer with `save_model()` instead of `save()`; this will save a `merges.txt` and a `vocab.json`.
` tokenizer.save_model('models/BPEtokenizer')`
- you'll need a `config.json`, a `tokenizer_config.json` and a `special_tokens_map.json` in your tokenizer repo; you can get them from the base model you want to use your tokenizer with, e.g. just quickly run the `run_mlm` script for 2 batches to get them, then add them to your tokenizer repo.
I'm not sure the `config.json` is actually loaded, as it is the model config and not the tokenizer's, but the script needs it to accept your tokenizer path.
The tokenizer repo should contain:
```
|__config.json
|__merges.txt
|__special_tokens_map.json
|__tokenizer_config.json
|__vocab.json
```
- in `tokenizer_config.json`, change `name_or_path: "roberta-base"` to `model_type: "roberta"`
- then train your model running the mlm script with your options and
`-- tokenizer_name ./models/BPEtokenizer`
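Alternatively, a shorter route that gives the data collator a pad token is to wrap the saved JSON directly and declare the special tokens (a sketch; the token strings assume the `BpeTrainer` call from the issue above):
```python
from transformers import PreTrainedTokenizerFast

tokenizer = PreTrainedTokenizerFast(
    tokenizer_file="models/BPEtokenizer.json",
    unk_token="[UNK]",
    cls_token="[CLS]",
    sep_token="[SEP]",
    pad_token="[PAD]",
    mask_token="[MASK]",
)
assert tokenizer.pad_token_id is not None  # the collator can now pad
```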
<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,618 | closed | Text generation pipeline - output_scores parameter | In `text-generation` pipeline, I am looking for a parameter which calculates the confidence score of the generated text. Source: [here](https://huggingface.co/transformers/main_classes/pipelines.html#transformers.TextGenerationPipeline)
I am assuming that the `output_scores` parameter (from [here](https://huggingface.co/transformers/main_classes/configuration.html#transformers.PretrainedConfig)) is not returned during prediction.
**Code**:
`predictedText = pipeline('text-generation',model=checkpoint_path, tokenizer=gpt2_tokenizer, config={'max_length':20, 'output_scores':True})`
`predictedText('This is a ')`
**Output**:
`Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
[{'generated_text': 'This is a Generated Text'}]`
In the output, I am looking for a confidence score of the predicted text to be displayed
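The closest thing I found so far is calling `generate()` directly instead of the pipeline (a sketch; it assumes a version where `return_dict_in_generate` and `output_scores` are supported by `generate()`):
```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("This is a ", return_tensors="pt")
out = model.generate(**inputs, max_length=20, return_dict_in_generate=True, output_scores=True)
# out.scores holds one logits tensor per generated token; softmax them to get
# per-token probabilities and aggregate them into a confidence value
token_probs = [torch.softmax(s, dim=-1).max().item() for s in out.scores]
```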
| 01-15-2021 12:22:18 | 01-15-2021 12:22:18 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
Thanks!<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,617 | closed | Error in GPT2 while using gradient checkpointing. | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.2.0
- Platform: Linux | 5.4.0-60-generic | 18.04.1-Ubuntu SMP | x86_64
- Python version: 3.7.7
- PyTorch version (GPU?): 1.7.1
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
@LysandreJik
## Information
Model I am using: GPT2
The problem arises when using:
* GPT2LMHeadModel with config `gradient_checkpointing: True`
When using a pretrained GPT2 model with the latest releases (4.x), `modeling_gpt2.py` fails due to behavior arising from PyTorch: the `torch.utils.checkpoint.checkpoint` import is not resolved, see [this](https://discuss.pytorch.org/t/attributeerror-module-torch-utils-has-no-attribute-checkpoint/101543) discussion. I tried with Python 3.8 as well, and the problem still occurred. When I look at the modeling scripts for other models (like BERT, etc.), the import statement for `checkpoint` is handled successfully, but the GPT2 script fails. It is discussed that the problem arises from Python's import behaviour.
```
File "/home/username/.miniconda3/envs/prj/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
File "/home/username/.miniconda3/envs/prj/lib/python3.7/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 901, in forward
    return_dict = return_dict,
File "/home/username/.miniconda3/envs/prj/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
File "/home/username/.miniconda3/envs/prj/lib/python3.7/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 728, in forward
    outputs = torch.utils.checkpoint.checkpoint(
AttributeError: module 'torch.utils' has no attribute 'checkpoint'
```
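As a stop-gap on the user side (an assumption on my part, not a library fix), importing the submodule explicitly before running the model makes the attribute resolvable:
```python
import torch
import torch.utils.checkpoint  # explicit import so `torch.utils.checkpoint` is resolvable

from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")
model.config.gradient_checkpointing = True  # assumption: checkpointing is toggled via the config here
```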
## Suggestion
In `modeling_gpt2.py`, add this import: `import torch.utils.checkpoint`. | 01-15-2021 10:49:32 | 01-15-2021 10:49:32 | Hitting this issue as well. |
transformers | 9,616 | closed | Fix label datatype in TF Trainer | # What does this PR do?
This PR fixes the case where `labels` can be either a `dict` or a `tf.Tensor` when doing gradient accumulation.
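Roughly, the handling this needs when slicing a batch for accumulation looks like the sketch below (illustrative only, not the actual diff):
```python
import tensorflow as tf

def slice_labels(labels, start, end):
    # labels may be a plain tensor or a dict of tensors depending on the model
    if isinstance(labels, dict):
        return {name: tensor[start:end] for name, tensor in labels.items()}
    return labels[start:end]
```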
| 01-15-2021 10:02:58 | 01-15-2021 10:02:58 | I agree with Sylvain that while this is not tested, it's hard to recommend using it. |
transformers | 9,615 | closed | Ignore lm_head decoder bias warning | Removes the warning that's currently happening when importing `xlm-roberta-base` with any of the XLM-R models.
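Concretely, the change boils down to something like the following in `modeling_roberta.py` (a sketch from memory; the exact class names and key pattern should be checked against the file):
```python
class RobertaForMaskedLM(RobertaPreTrainedModel):
    # the decoder bias is re-created when the output embeddings are tied, so its
    # absence from older checkpoints is expected and should not trigger a warning
    _keys_to_ignore_on_load_missing = [r"position_ids", r"lm_head.decoder.bias"]
```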
Closes https://github.com/huggingface/transformers/issues/9579 | 01-15-2021 09:33:43 | 01-15-2021 09:33:43 | Is it normal that this bias is missing?<|||||>By answering your question I realized this could be upstreamed directly in the RoBERTa model, which I just did.
You can take a look at my answer this morning to a similar question: https://github.com/huggingface/transformers/issues/6193#issuecomment-760797867.
XLM-R is an alias of the RoBERTa model, hence why they both need this. |
transformers | 9,614 | closed | Conditional branching logic in modeling_tf_xlnet.py causing error with TF Graph | Hi @TevenLeScao ,
I am encountering an error when running the TFXLNet model inside of a tensorflow graph.
Here is some code to reproduce the issue:
```
from transformers import XLNetTokenizer, TFXLNetModel
import tensorflow as tf
tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased')
model = TFXLNetModel.from_pretrained('xlnet-base-cased')
@tf.function
def train_step(inputs, mask, token_type_ids):
with tf.GradientTape() as tape:
a = model({
"input_ids": inputs,
"training": True,
"attention_mask": mask,
"token_type_ids": token_type_ids,
})
# example inputs so the snippet runs end-to-end
enc = tokenizer(["An example sentence.", "Another example sentence."], return_tensors="tf", padding=True)
inputs, mask, token_type_ids = enc["input_ids"], enc["attention_mask"], enc["token_type_ids"]
train_step(inputs, mask, token_type_ids)
```
The error seems to be caused by L765-L768 in modeling_tf_xlnet.py [here](https://github.com/huggingface/transformers/blob/82498cbc37d5c15520c7bddde5d804c804eee498/src/transformers/models/xlnet/modeling_tf_xlnet.py#L765)
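A stripped-down illustration of that failure mode (not XLNet itself, just the mismatched-structure pattern that `new_mems` runs into):
```python
import tensorflow as tf

@tf.function
def cache_or_not(x, use_mems):
    # tf.cond requires both branches to return the same nested structure;
    # a 1-tuple in one branch and an empty tuple in the other fails at trace time
    return tf.cond(use_mems,
                   lambda: (tf.stop_gradient(x),),
                   lambda: ())
```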
Here is the error message:
> TypeError: in user code:
> <ipython-input-41-b79f96ef9347>:4 train_step *
> a = model({
> /anaconda2/envs/dialogue/lib/python3.7/site-packages/transformers/models/xlnet/modeling_tf_xlnet.py:1189 call *
> outputs = self.transformer(
> /anaconda2/envs/dialogue/lib/python3.7/site-packages/transformers/models/xlnet/modeling_tf_xlnet.py:753 call *
> if inputs["use_mems"]:
> /anaconda2/envs/dialogue/lib/python3.7/site-packages/tensorflow/python/autograph/operators/control_flow.py:951 if_stmt
> _tf_if_stmt(cond, body, orelse, get_state, set_state, symbol_names, nouts)
> /anaconda2/envs/dialogue/lib/python3.7/site-packages/tensorflow/python/autograph/operators/control_flow.py:996 _tf_if_stmt
> cond, aug_body, aug_orelse, strict=True)
> /anaconda2/envs/dialogue/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py:201 wrapper
> return target(*args, **kwargs)
> /anaconda2/envs/dialogue/lib/python3.7/site-packages/tensorflow/python/util/deprecation.py:507 new_func
> return func(*args, **kwargs)
> /anaconda2/envs/dialogue/lib/python3.7/site-packages/tensorflow/python/ops/control_flow_ops.py:1180 cond
> return cond_v2.cond_v2(pred, true_fn, false_fn, name)
> /anaconda2/envs/dialogue/lib/python3.7/site-packages/tensorflow/python/ops/cond_v2.py:92 cond_v2
> op_return_value=pred)
> /anaconda2/envs/dialogue/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py:986 func_graph_from_py_func
> func_outputs = python_func(*func_args, **func_kwargs)
> /anaconda2/envs/dialogue/lib/python3.7/site-packages/tensorflow/python/autograph/operators/control_flow.py:992 aug_orelse
> _verify_tf_cond_vars(new_body_vars_[0], new_orelse_vars, symbol_names)
> /anaconda2/envs/dialogue/lib/python3.7/site-packages/tensorflow/python/autograph/operators/control_flow.py:286 _verify_tf_cond_vars
> ' branches:\n\n{}'.format(name, str(e)))
> TypeError: 'new_mems' must have the same nested structure in the main and else branches:
> The two structures don't have the same nested structure.
> First structure: type=tuple str=(<tf.Tensor 'tfxl_net_model/transformer/cond_2/StopGradient:0' shape=(44, 18, 768) dtype=float32>,)
> Second structure: type=tuple str=()
> More specifically: The two structures don't have the same number of elements. First structure: type=tuple str=(<tf.Tensor 'tfxl_net_model/transformer/cond_2/StopGradient:0' shape=(44, 18, 768) dtype=float32>,). Second structure: type=tuple str=()
> Entire first structure:
> (.,)
> Entire second structure:
> () | 01-15-2021 07:23:18 | 01-15-2021 07:23:18 | This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,613 | closed | training_loss in TFTrainer | # What does this PR do?
The purpose of `training_loss` in `TFTrainer` is logging.
However, `training_loss` shows a very large number even while it decreases,
and it doubles when `gradient_accumulation_steps` is doubled.
1. `training_loss` was accumulated across epochs; now it is only calculated over the current step.
2. Like `Trainer`, `training_loss` in `TFTrainer` now takes `n_replicas` and `gradient_accumulation_steps` into account.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
tensorflow: @jplu | 01-15-2021 07:03:29 | 01-15-2021 07:03:29 | Here are the logs of a training run with
```
python run_tf_glue.py --task_name mrpc --model_name_or_path bert-base-cased --output_dir model --num_train_epochs 4 --per_device_train_batch_size 16 --per_device_eval_batch_size 16 --do_train --do_eval --do_predict --logging_steps 10 --overwrite_output_dir --gradient_accumulation_steps 2
```
```
[INFO|trainer_tf.py:522] 2021-01-15 10:27:59,116 >> ***** Running training *****
[INFO|trainer_tf.py:523] 2021-01-15 10:27:59,123 >> Num examples = 3668
[INFO|trainer_tf.py:525] 2021-01-15 10:27:59,124 >> Num Epochs = 4
[INFO|trainer_tf.py:526] 2021-01-15 10:27:59,124 >> Instantaneous batch size per device = 16
[INFO|trainer_tf.py:527] 2021-01-15 10:27:59,124 >> Total train batch size (w. parallel, distributed & accumulation) = 32
[INFO|trainer_tf.py:530] 2021-01-15 10:27:59,125 >> Gradient Accumulation steps = 2
[INFO|trainer_tf.py:531] 2021-01-15 10:27:59,136 >> Steps per epoch = 115
[INFO|trainer_tf.py:532] 2021-01-15 10:27:59,137 >> Total optimization steps = 460
[INFO|trainer_tf.py:398] 2021-01-15 10:28:50,371 >> {'loss': 0.6347228, 'learning_rate': 4.891304e-05, 'epoch': 0.08695652173913043, 'step': 10}
[INFO|trainer_tf.py:398] 2021-01-15 10:28:56,791 >> {'loss': 0.604829, 'learning_rate': 4.7826084e-05, 'epoch': 0.17391304347826086, 'step': 20}
[INFO|trainer_tf.py:398] 2021-01-15 10:29:03,192 >> {'loss': 0.62615454, 'learning_rate': 4.673913e-05, 'epoch': 0.2608695652173913, 'step': 30}
[INFO|trainer_tf.py:398] 2021-01-15 10:29:09,614 >> {'loss': 0.61436784, 'learning_rate': 4.5652174e-05, 'epoch': 0.34782608695652173, 'step': 40}
[INFO|trainer_tf.py:398] 2021-01-15 10:29:16,163 >> {'loss': 0.60542804, 'learning_rate': 4.456522e-05, 'epoch': 0.43478260869565216, 'step': 50}
[INFO|trainer_tf.py:398] 2021-01-15 10:29:22,633 >> {'loss': 0.60221016, 'learning_rate': 4.347826e-05, 'epoch': 0.5217391304347826, 'step': 60}
[INFO|trainer_tf.py:398] 2021-01-15 10:29:29,129 >> {'loss': 0.59315145, 'learning_rate': 4.2391304e-05, 'epoch': 0.6086956521739131, 'step': 70}
[INFO|trainer_tf.py:398] 2021-01-15 10:29:35,655 >> {'loss': 0.5896678, 'learning_rate': 4.1304345e-05, 'epoch': 0.6956521739130435, 'step': 80}
[INFO|trainer_tf.py:398] 2021-01-15 10:29:42,209 >> {'loss': 0.5796127, 'learning_rate': 4.0217386e-05, 'epoch': 0.782608695652174, 'step': 90}
[INFO|trainer_tf.py:398] 2021-01-15 10:29:48,779 >> {'loss': 0.5678522, 'learning_rate': 3.9130435e-05, 'epoch': 0.8695652173913043, 'step': 100}
[INFO|trainer_tf.py:398] 2021-01-15 10:29:55,365 >> {'loss': 0.55807614, 'learning_rate': 3.8043476e-05, 'epoch': 0.9565217391304348, 'step': 110}
[INFO|trainer_tf.py:398] 2021-01-15 10:30:04,348 >> {'loss': 0.32373077, 'learning_rate': 3.695652e-05, 'epoch': 1.0434782608695652, 'step': 120}
[INFO|trainer_tf.py:398] 2021-01-15 10:30:10,920 >> {'loss': 0.3261666, 'learning_rate': 3.5869565e-05, 'epoch': 1.1304347826086956, 'step': 130}
[INFO|trainer_tf.py:398] 2021-01-15 10:30:17,516 >> {'loss': 0.34052417, 'learning_rate': 3.478261e-05, 'epoch': 1.2173913043478262, 'step': 140}
[INFO|trainer_tf.py:398] 2021-01-15 10:30:24,125 >> {'loss': 0.35018474, 'learning_rate': 3.369565e-05, 'epoch': 1.3043478260869565, 'step': 150}
[INFO|trainer_tf.py:398] 2021-01-15 10:30:30,714 >> {'loss': 0.35887596, 'learning_rate': 3.260869e-05, 'epoch': 1.391304347826087, 'step': 160}
[INFO|trainer_tf.py:398] 2021-01-15 10:30:37,303 >> {'loss': 0.34891757, 'learning_rate': 3.1521737e-05, 'epoch': 1.4782608695652173, 'step': 170}
[INFO|trainer_tf.py:398] 2021-01-15 10:30:43,900 >> {'loss': 0.33256933, 'learning_rate': 3.0434781e-05, 'epoch': 1.5652173913043477, 'step': 180}
[INFO|trainer_tf.py:398] 2021-01-15 10:30:50,481 >> {'loss': 0.32668048, 'learning_rate': 2.934782e-05, 'epoch': 1.6521739130434783, 'step': 190}
[INFO|trainer_tf.py:398] 2021-01-15 10:30:57,079 >> {'loss': 0.31888676, 'learning_rate': 2.8260865e-05, 'epoch': 1.7391304347826086, 'step': 200}
[INFO|trainer_tf.py:398] 2021-01-15 10:31:03,688 >> {'loss': 0.31276095, 'learning_rate': 2.7173912e-05, 'epoch': 1.8260869565217392, 'step': 210}
[INFO|trainer_tf.py:398] 2021-01-15 10:31:10,284 >> {'loss': 0.30366346, 'learning_rate': 2.6086956e-05, 'epoch': 1.9130434782608696, 'step': 220}
[INFO|trainer_tf.py:398] 2021-01-15 10:31:16,885 >> {'loss': 0.2903903, 'learning_rate': 2.5e-05, 'epoch': 2.0, 'step': 230}
[INFO|trainer_tf.py:398] 2021-01-15 10:31:26,095 >> {'loss': 0.15675393, 'learning_rate': 2.3913042e-05, 'epoch': 2.0869565217391304, 'step': 240}
[INFO|trainer_tf.py:398] 2021-01-15 10:31:32,671 >> {'loss': 0.14483282, 'learning_rate': 2.2826087e-05, 'epoch': 2.1739130434782608, 'step': 250}
[INFO|trainer_tf.py:398] 2021-01-15 10:31:39,275 >> {'loss': 0.14147088, 'learning_rate': 2.173913e-05, 'epoch': 2.260869565217391, 'step': 260}
[INFO|trainer_tf.py:398] 2021-01-15 10:31:45,866 >> {'loss': 0.13758971, 'learning_rate': 2.0652174e-05, 'epoch': 2.3478260869565215, 'step': 270}
[INFO|trainer_tf.py:398] 2021-01-15 10:31:52,464 >> {'loss': 0.13357341, 'learning_rate': 1.9565217e-05, 'epoch': 2.4347826086956523, 'step': 280}
[INFO|trainer_tf.py:398] 2021-01-15 10:31:59,049 >> {'loss': 0.12877393, 'learning_rate': 1.8478258e-05, 'epoch': 2.5217391304347827, 'step': 290}
[INFO|trainer_tf.py:398] 2021-01-15 10:32:05,682 >> {'loss': 0.13753517, 'learning_rate': 1.7391301e-05, 'epoch': 2.608695652173913, 'step': 300}
[INFO|trainer_tf.py:398] 2021-01-15 10:32:12,281 >> {'loss': 0.1319594, 'learning_rate': 1.6304344e-05, 'epoch': 2.6956521739130435, 'step': 310}
[INFO|trainer_tf.py:398] 2021-01-15 10:32:18,883 >> {'loss': 0.12644322, 'learning_rate': 1.5217389e-05, 'epoch': 2.782608695652174, 'step': 320}
[INFO|trainer_tf.py:398] 2021-01-15 10:32:25,472 >> {'loss': 0.12481367, 'learning_rate': 1.41304345e-05, 'epoch': 2.869565217391304, 'step': 330}
[INFO|trainer_tf.py:398] 2021-01-15 10:32:32,082 >> {'loss': 0.12073966, 'learning_rate': 1.3043478e-05, 'epoch': 2.9565217391304346, 'step': 340}
[INFO|trainer_tf.py:398] 2021-01-15 10:32:41,403 >> {'loss': 0.10288413, 'learning_rate': 1.1956521e-05, 'epoch': 3.0434782608695654, 'step': 350}
[INFO|trainer_tf.py:398] 2021-01-15 10:32:47,955 >> {'loss': 0.09858045, 'learning_rate': 1.0869565e-05, 'epoch': 3.130434782608696, 'step': 360}
[INFO|trainer_tf.py:398] 2021-01-15 10:32:54,532 >> {'loss': 0.07963112, 'learning_rate': 9.782609e-06, 'epoch': 3.217391304347826, 'step': 370}
[INFO|trainer_tf.py:398] 2021-01-15 10:33:01,104 >> {'loss': 0.08428383, 'learning_rate': 8.6956525e-06, 'epoch': 3.3043478260869565, 'step': 380}
[INFO|trainer_tf.py:398] 2021-01-15 10:33:07,684 >> {'loss': 0.0844244, 'learning_rate': 7.6086967e-06, 'epoch': 3.391304347826087, 'step': 390}
[INFO|trainer_tf.py:398] 2021-01-15 10:33:14,284 >> {'loss': 0.08690852, 'learning_rate': 6.5217405e-06, 'epoch': 3.4782608695652173, 'step': 400}
[INFO|trainer_tf.py:398] 2021-01-15 10:33:20,877 >> {'loss': 0.0832295, 'learning_rate': 5.434781e-06, 'epoch': 3.5652173913043477, 'step': 410}
[INFO|trainer_tf.py:398] 2021-01-15 10:33:27,494 >> {'loss': 0.078029804, 'learning_rate': 4.3478244e-06, 'epoch': 3.6521739130434785, 'step': 420}
[INFO|trainer_tf.py:398] 2021-01-15 10:33:34,095 >> {'loss': 0.079320244, 'learning_rate': 3.2608687e-06, 'epoch': 3.7391304347826084, 'step': 430}
[INFO|trainer_tf.py:398] 2021-01-15 10:33:40,709 >> {'loss': 0.076877564, 'learning_rate': 2.1739122e-06, 'epoch': 3.8260869565217392, 'step': 440}
[INFO|trainer_tf.py:398] 2021-01-15 10:33:47,324 >> {'loss': 0.07551385, 'learning_rate': 1.0869561e-06, 'epoch': 3.9130434782608696, 'step': 450}
[INFO|trainer_tf.py:398] 2021-01-15 10:33:53,941 >> {'loss': 0.07157838, 'learning_rate': 0.0, 'epoch': 4.0, 'step': 460}
```
Nothing seems wrong in the loss computation.
```
eval_acc = 0.8518518518518519
eval_f1 = 0.8954248366013072
eval_acc_and_f1 = 0.8736383442265796
```<|||||>@jplu
Yes, you are right, and I am wrong.
My dataset format was wrong (```labels``` in dataset for ```TFGPT2LMHead``` should be ```tensor```, but was ```dict``` yesterday). Sorry for the confusion.
However, there is one problem though. ```training_loss``` is not properly calculated with successive training. Run ```run_tf_glue.py``` with ```save_steps```. I train 40 steps, and train again from that ckpt. Results are shown below, where loss increases and decreases.
```
[INFO|trainer_tf.py:398] 2021-01-15 16:19:09,011 >> {'loss': 0.10616198, 'learning_rate': 4.456522e-05, 'epoch': 0.43478260869565216, 'step': 50}
[INFO|trainer_tf.py:398] 2021-01-15 16:19:20,271 >> {'loss': 0.17419913, 'learning_rate': 4.347826e-05, 'epoch': 0.5217391304347826, 'step': 60}
[INFO|trainer_tf.py:398] 2021-01-15 16:19:33,935 >> {'loss': 0.2174806, 'learning_rate': 4.2391304e-05, 'epoch': 0.6086956521739131, 'step': 70}
[INFO|trainer_tf.py:398] 2021-01-15 16:19:47,342 >> {'loss': 0.25698015, 'learning_rate': 4.1304345e-05, 'epoch': 0.6956521739130435, 'step': 80}
[INFO|trainer_tf.py:398] 2021-01-15 16:20:04,633 >> {'loss': 0.28348362, 'learning_rate': 4.0217386e-05, 'epoch': 0.782608695652174, 'step': 90}
[INFO|trainer_tf.py:398] 2021-01-15 16:20:22,000 >> {'loss': 0.29971126, 'learning_rate': 3.9130435e-05, 'epoch': 0.8695652173913043, 'step': 100}
[INFO|trainer_tf.py:398] 2021-01-15 16:20:35,515 >> {'loss': 0.30674613, 'learning_rate': 3.8043476e-05, 'epoch': 0.9565217391304348, 'step': 110}
[INFO|trainer_tf.py:398] 2021-01-15 16:20:57,368 >> {'loss': 0.3384542, 'learning_rate': 3.695652e-05, 'epoch': 1.0434782608695652, 'step': 120}
[INFO|trainer_tf.py:398] 2021-01-15 16:21:10,066 >> {'loss': 0.25661522, 'learning_rate': 3.5869565e-05, 'epoch': 1.1304347826086956, 'step': 130}
[INFO|trainer_tf.py:398] 2021-01-15 16:21:23,626 >> {'loss': 0.2714903, 'learning_rate': 3.478261e-05, 'epoch': 1.2173913043478262, 'step': 140}
[INFO|trainer_tf.py:398] 2021-01-15 16:21:42,122 >> {'loss': 0.262019, 'learning_rate': 3.369565e-05, 'epoch': 1.3043478260869565, 'step': 150}
[INFO|trainer_tf.py:398] 2021-01-15 16:21:55,652 >> {'loss': 0.27134755, 'learning_rate': 3.260869e-05, 'epoch': 1.391304347826087, 'step': 160}
```
This comes from `steps_trained_in_current_epoch` and `training_loss = self.train_loss.result() / (step + 1)` in `TFTrainer`.
At step 41 (1 step after resuming from ckpt-40 in this case), only one loss has been accumulated, but `step` is 40.
I simply fix it by initially saving `steps_trained_in_current_epoch` to another constant.
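Roughly, the change I have in mind looks like this (a sketch; variable names are illustrative, not the actual diff):
```python
steps_trained_at_start = steps_trained_in_current_epoch  # remember how many steps were skipped

for step, batch in enumerate(train_ds):
    if steps_trained_in_current_epoch > 0:
        steps_trained_in_current_epoch -= 1
        continue
    self.distributed_training_steps(batch)
    # average over the number of losses actually accumulated, not the raw step index
    training_loss = self.train_loss.result() / (step + 1 - steps_trained_at_start)
```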
This may be treated in different PR.<|||||>Humm looks to be an issue indeed. You can keep this PR open to fix this, feel free to ask questions if I can help :) |
transformers | 9,612 | closed | Why do not use 'torch.nn.MultiheadAttention' to substitude 'Class BertSelfAttention+BertSelfOutput' for pytorch | # π Migration
## Information
pytorch has 'torch.nn.MultiheadAttention'
https://pytorch.org/docs/1.3.0/nn.html?highlight=multihead#torch.nn.MultiheadAttention
## Details
1. For better performance and generality, I suggest using 'torch.nn.MultiheadAttention' instead of the 'BertSelfAttention'+'BertSelfOutput' classes in the BERT model.
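For illustration, a rough sketch of what the drop-in could look like (shapes follow the seq-first convention `nn.MultiheadAttention` expects; treat this as a sketch, not a tested replacement):
```python
import torch
import torch.nn as nn

hidden_size, num_heads = 768, 12
attn = nn.MultiheadAttention(embed_dim=hidden_size, num_heads=num_heads, dropout=0.1)

# BERT-style hidden states are (batch, seq, hidden); nn.MultiheadAttention wants (seq, batch, hidden)
hidden_states = torch.randn(8, 128, hidden_size).transpose(0, 1)
attn_output, attn_weights = attn(hidden_states, hidden_states, hidden_states)
attn_output = attn_output.transpose(0, 1)  # back to (batch, seq, hidden)
```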
| 01-15-2021 06:53:37 | 01-15-2021 06:53:37 | 'BertSelfAttention+BertSelfOutput' is tensorflow style
'torch.nn.MultiheadAttention' is real pytorch style<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread.<|||||>The layers from the Pytorch will be significantly faster than using the two classes like in TensorFlow. Can't we make an exception for pytorch to use the optimized layers? |