repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 11,925 | closed | BERT pretraining: [SEP] vs. Segment Embeddings? | I’m confused about the differences between the intent of the [SEP] tokens and Segment Embeddings applied to the input of BERT during pretraining.
As far as I’ve understood, the [SEP] tokens are inserted between sentence A and B to enable the model’s ability to distinguish between the two sentences for BERTs Next-Sentence Prediction pretraining-task. Similarly, the Segment Embeddings are added to the input embeddings to alter the input, creating another opportunity for the model to learn that sentence A and B are distinct things.
However, these seem to be facilitating the same purpose. Why can’t BERT be trained on only Segment Embeddings, omitting [SEP] tokens? What additional information do [SEP] tokens conceptually provide, that the Segment Embeddings don’t?
Furthermore, [SEP] tokens aren’t used directly anyways. NSP is trained on the [CLS] embeddings, which I understand to sort of represent an embedding of sentence continuity. | 05-28-2021 12:10:21 | 05-28-2021 12:10:21 | From the BERT paper: "We differentiate the sentences in two ways. First, we separate them with a special token ([SEP]). Second, we add a learned embedding to every token indicating whether it belongs to sentence A or sentence B."
Deep learning, as you may know, is a lot of experimenting, and in this case, it was a design choice. I guess you could try to omit the [SEP] token, perhaps it doesn't add much information to the model. Or omit the token type embeddings, and check whether the results are significantly different.
To give another example, people are experimenting with all kinds of position encodings (including absolute ones, as in BERT, relative ones, as in T5, sinusoidal ones, as in the original Transformer, and now rotary embeddings, as in the new RoFormer paper)...
So the question you're asking is a genuine research question :) <|||||>Thank you for the quick answer, good to know! I was suspecting it might be something along these lines :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,924 | closed | Test optuna and ray | Run the slow tests for optuna and ray
cc @richardliaw @amogkam | 05-28-2021 11:51:57 | 05-28-2021 11:51:57 | |
transformers | 11,923 | closed | Trainer.predict using customized model.predict function? | I am using the [Trainer](https://huggingface.co/transformers/main_classes/trainer.html) to train a sentence-bert model with triplet-loss. Then I want to do some inference. How to call Trainer.predict using custom model.predict function?
I use `model.forward()` to calculate loss in training stage. But I want to use a customized `model.predict()` to calculate prediction results based on `model.forward()` (e.g., model.forward() -> embedding -> other method to calculate prediction instead of the loss function)
I saw the `prediction_step()` function just called `outputs = model(**inputs)` to get `(loss, logits, labels)`
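For concreteness, something along these lines is roughly what I have in mind — `MyTrainer` and `model.predict()` are just my own illustrative names, not library APIs:
```python
import torch
from transformers import Trainer

class MyTrainer(Trainer):
    def prediction_step(self, model, inputs, prediction_loss_only, ignore_keys=None):
        # replace `outputs = model(**inputs)` with my own inference logic
        inputs = self._prepare_inputs(inputs)
        with torch.no_grad():
            # custom method built on top of model.forward(): embedding -> prediction
            preds = model.predict(**inputs)
        return (None, preds, inputs.get("labels"))
```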
Is there any good method to do that? | 05-28-2021 11:22:22 | 05-28-2021 11:22:22 | Can't you just subclass the Trainer class and write your own `predict`?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,922 | closed | get_ordinal(local=True) replaced with get_local_ordinal() in training_args.py | ## Fixed
Wrong method call fixed. Modified according to:
https://pytorch.org/xla/release/1.8.1/_modules/torch_xla/core/xla_model.html
TPU training as called by the following or similar scripts now works:
```bash
python xla_spawn.py \
--num_cores=8 \
language-modeling/run_mlm.py \
--train_file $TRAIN_FILE \
--model_name_or_path bert-base-uncased \
--output_dir $OUTPUT_DIR \
--overwrite_output_dir True \
--do_train True \
```
## Discussed/approved
https://github.com/huggingface/transformers/issues/11910
## Who can review?
@sgugger
| 05-28-2021 09:50:49 | 05-28-2021 09:50:49 | Thanks a lot! |
transformers | 11,921 | closed | ProphetNetForConditionalGeneration model isn't returning all objects properly | ## Environment info
- `transformers` version: 4.6.1
- Platform: Google Colab
- Python version:
- PyTorch version (GPU?): GPU
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
- text generation: @patrickvonplaten
## Information
The model I am using Prophetnet:
The problem arises when using:
* my own modified scripts: [My notebook](https://colab.research.google.com/drive/1rmZyTXsdEDDpx8tbX-Gt6Uj9NuQi92VK?usp=sharing)
The tasks I am working on is:
* an official SQUaD task: Question Generation
## To reproduce
Steps to reproduce the behavior:
1. Just run the notebook
2. After running a single inference I am only getting 4 objects while I should get loss and other objects.
## Expected behavior
After running a single inference I am only getting 4 objects while I should get loss and other objects. @patrickvonplaten
| 05-28-2021 08:34:09 | 05-28-2021 08:34:09 | What are all the objects you expect to get? The `loss` is only returned if you pass the labels to the model - otherwise it cannot compute any loss. Please check out the [return statement of ProphetNetForConditionalGeneration's forward method for more information](https://huggingface.co/transformers/model_doc/prophetnet.html#transformers.ProphetNetForConditionalGeneration.forward). <|||||>Thank you, it worked @LysandreJik |
transformers | 11,920 | closed | Remove redundant `nn.log_softmax` in `run_flax_glue.py` | # What does this PR do?
`optax.softmax_cross_entropy` expects unscaled logits, so it already calls `nn.log_softmax` ([here](https://github.com/deepmind/optax/blob/master/optax/_src/loss.py#L166)). `nn.log_softmax` is idempotent so mathematically it shouldn't have made a difference.
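A quick way to convince yourself of the idempotence claim (small sanity check, not part of the PR itself):
```python
import jax.numpy as jnp
from jax.nn import log_softmax

x = jnp.array([0.5, 1.5, -2.0])
# applying log_softmax twice gives the same result as applying it once
print(jnp.allclose(log_softmax(log_softmax(x)), log_softmax(x)))  # expected: True
```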
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@marcvanzee @patrickvonplaten
| 05-28-2021 07:34:51 | 05-28-2021 07:34:51 | Great catch @n2cholas! Could you also remove the line:
```python
import flax.linen as nn
```
to make our code quality checks happy? Happy to merge right after :-)<|||||>Done @patrickvonplaten! |
transformers | 11,919 | closed | Trainer reported loss is wrong when using DeepSpeed and gradient_accumulation_steps > 1 | ## Environment info
- `transformers` version: 4.7.0.dev0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.8.0
- PyTorch version (GPU?): 1.8.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: no but using DeepSpeed on a single node
### Who can help
@stas00, @sgugger (trainer.py)
### See Also
https://github.com/microsoft/DeepSpeed/issues/1107
## Information
Model I am using (Roberta)
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
* [ ] pretraining a Language Model (wikipedia and bookcorpus datasets)
## To reproduce
Steps to reproduce the behavior:
1. run scripts to pretrain a model with DeepSpeed on a single node with 1 GPU for N steps (gradient_accum_steps=1)
2. run scripts to pretrain a model with DeepSpeed on a single node with 1 GPU for N steps (gradient_accum_steps=8)
3. note the vast difference in **loss** reported on the console by trainer.py
## Expected behavior
reported loss for any number of gradient_accum_steps, nodes, or GPUs should be the mean of all losses; the same order of magnitude as shown when training with gradient_accum_steps=1, on a single node, with a single GPU.
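For illustration with made-up numbers: if each micro-batch loss is around 2.0 and gradient_accum_steps=8, a value that is still scaled by the accumulation factor would show up as roughly 2.0 / 8 = 0.25 in the console — the same kind of order-of-magnitude gap I see between step 1 and step 2 above.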
| 05-28-2021 05:54:56 | 05-28-2021 05:54:56 | Please note that the fix should involve ignoring the return value of `deepspeed.backward()` in this [line](https://github.com/huggingface/transformers/blob/8d171628fe84bdf92ee40b5375d7265278180f14/src/transformers/trainer.py#L1754). Or at least not updating loss with this return value since it is the scaled loss value, similar to `scaled_loss` in this [line](https://github.com/huggingface/transformers/blob/8d171628fe84bdf92ee40b5375d7265278180f14/src/transformers/trainer.py#L1750)<|||||>Aaaah! We had two different definitions of scaled here, I know fully understand the issue. I was thinking scaled as scaled by the gradient accumulation steps factor, not scaled as scaled by the loss scaling factor. This is an easy fix to add, will do that in a bit.<|||||>> Please note that the fix should involve ignoring the return value of `deepspeed.backward()` in this [line](https://github.com/huggingface/transformers/blob/8d171628fe84bdf92ee40b5375d7265278180f14/src/transformers/trainer.py#L1754). Or at least not updating loss with this return value since it is the scaled loss value, similar to `scaled_loss` in this [line](https://github.com/huggingface/transformers/blob/8d171628fe84bdf92ee40b5375d7265278180f14/src/transformers/trainer.py#L1750)
@tjruwase, could you please review your suggestion, since I see the deepspeed code doing scaling by GAS only. Please see:
https://github.com/microsoft/DeepSpeed/blob/c697d7ae1cf5a479a8a85afa3bf9443e7d54ac2b/deepspeed/runtime/engine.py#L1142-L1143
Am I missing something?
And running tests I don't see any problem with the current code.<|||||>@stas00, you are right, my suggestion here is not correct. I initially thought that the deepspeed code scaling by GAS and exposing the scaled value to the client (HF) was the problem. But based on your and @sgugger's findings, it seems there is nothing to do if HF is fine with `deepspeed.backward()` returning the GAS-scaled loss.
Sounds like this issue can be closed, once @rfernand2 agrees. <|||||>Yes, sounds good to me.<|||||>Closing as the same report on Deepspeed side has been closed https://github.com/microsoft/DeepSpeed/issues/1107
|
transformers | 11,918 | closed | [Flax] Return Attention from BERT, ELECTRA, RoBERTa and GPT2 | # What does this PR do?
Fixes # https://github.com/huggingface/transformers/issues/11901
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 05-28-2021 05:06:06 | 05-28-2021 05:06:06 | 🎉 |
transformers | 11,917 | closed | [Flax][WIP] Addition of Flax-Wav2Vec Model | # What does this PR do?
This PR is for the addition of Wav2Vec Model
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patil-suraj | 05-28-2021 04:51:35 | 05-28-2021 04:51:35 | Cool PR! For next steps, we should write the missing classes and remove everything which is related to:
```
feat_extract_norm="group"
do_stable_layer_norm=True
```
(This config parameters are only used for https://huggingface.co/facebook/wav2vec2-base-960h which is the oldest of the wav2vec2 models)
Also, it would be very important to add tests in `modeling_flax_wav2vec2.py` <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Closing this in favor of https://github.com/huggingface/transformers/pull/12271 |
transformers | 11,916 | closed | Wrong perplexity when evaluate the megatron-gpt2. | ## Environment info
- `transformers` version: 4.7.0.dev0
- Platform: Linux-5.4.0-1046-azure-x86_64-with-debian-buster-sid
- Python version: 3.6.10
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@jdemouth @LysandreJik @sgugger
## Information
Model I am using gpt2(megatron-gpt2-345m):
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official language-modeling task: (transformers/examples/pytorch/language-modeling/run_clm.py )
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Follow the steps given by [huggingface](https://huggingface.co/nvidia/megatron-gpt2-345m) to convert the megatron-lm model to huggingface model.
+ export MYDIR=/mnt/reproduce
+ git clone https://github.com/huggingface/transformers.git $MYDIR/transformers
+ mkdir -p $MYDIR/nvidia/megatron-gpt2-345m
+ wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_lm_345m/versions/v0.0/zip -O $MYDIR/nvidia/megatron-gpt2-345m/checkpoint.zip
+ python3 $MYDIR/transformers/src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py $MYDIR/nvidia/megatron-gpt2-345m/checkpoint.zip
(Here I met the error: *"io.UnsupportedOperation: seek. You can only torch.load from a file that is seekable. Please pre-load the data into a buffer like io.BytesIO and try to load from it instead."* I solved it by:
- unzip $MYDIR/nvidia/megatron-gpt2-345m/checkpoint.zip,
- change the code in transformers/src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py Line **209-211** by
```python
with open(args.path_to_checkpoint, "rb") as pytorch_dict:
input_state_dict = torch.load(pytorch_dict, map_location="cpu")
```
- python3 $MYDIR/transformers/src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py $MYDIR/nvidia/megatron-gpt2-345m/release/mp_rank_00/model_optim_rng.pt
+ git clone https://huggingface.co/nvidia/megatron-gpt2-345m/
+ mv $MYDIR/nvidia/megatron-gpt2-345m/release/mp_rank_00/pytorch_model.bin $MYDIR/nvidia/megatron-gpt2-345m/release/mp_rank_00/config.json $MYDIR/megatron-gpt2-345m/
2. run the clm.py tests on wikitext-2, the scripts is given by [readme](https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/README.md).
```bash
CUDA_VISIBLE_DEVICES=0 python $MYDIR/transformers/examples/pytorch/language-modeling/run_clm.py \
--model_name_or_path $MYDIR/megatron-gpt2-345m \
--dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 \
--do_eval \
--output_dir /mnt/logs/evaluation/megatron/wikitext-2
```
3. The results are shown below; they show the wrong perplexity (I also tested on other datasets, and the perplexity results are also very large):
``` txt
[INFO|trainer_pt_utils.py:907] 2021-05-28 04:17:49,817 >> ***** eval metrics *****
[INFO|trainer_pt_utils.py:912] 2021-05-28 04:17:49,817 >> eval_loss = 11.63
[INFO|trainer_pt_utils.py:912] 2021-05-28 04:17:49,817 >> eval_runtime = 0:00:22.85
[INFO|trainer_pt_utils.py:912] 2021-05-28 04:17:49,817 >> eval_samples = 240
[INFO|trainer_pt_utils.py:912] 2021-05-28 04:17:49,817 >> eval_samples_per_second = 10.501
[INFO|trainer_pt_utils.py:912] 2021-05-28 04:17:49,817 >> eval_steps_per_second = 1.313
[INFO|trainer_pt_utils.py:912] 2021-05-28 04:17:49,817 >> perplexity = 112422.0502
```
## Expected behavior
I want to convert my megatron-lm model checkpoints into huggingface. Please help me.
| 05-28-2021 04:23:15 | 05-28-2021 04:23:15 | We’ll try to reproduce the issue on our side. We’ll keep you posted. Thanks!<|||||>> We’ll try to reproduce the issue on our side. We’ll keep you posted. Thanks!
Thanks for your help!
<|||||>We (NVIDIA engineers) were able to reproduce strange perplexity results and we are trying to identify the root cause. We will update you as we know more. Thanks for reporting the issue and for the reproducer.<|||||>Hi,
I think #12004 is a related issue |
transformers | 11,915 | closed | RuntimeError: The size of tensor a (716) must match the size of tensor b (512) at non-singleton dimension 1 | ## Environment info
- `transformers` version: 4.6.0.dev0
- Platform: windows 7
- Python version: 3.8.8
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
## Information
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
* [√] the official example scripts: (give details below)
`python run_ner.py --model_name_or_path nlpaueb/legal-bert-base-uncased --train_file ***.json --validation_file ***.json --output_dir /tmp/*** --do_train --do_eval`
* [ ] my own modified scripts: (give details below)
The tasks I am working on is: NER
* [ ] an official GLUE/SQUaD task: (give the name)
* [√] my own task or dataset: (give details below)
## Error
`File "run_ner.py", line 504, in <module>
main()
File "run_ner.py", line 446, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "D:\Tool\Install\Python\lib\site-packages\transformers\trainer.py", line
1240, in train
tr_loss += self.training_step(model, inputs)
File "D:\Tool\Install\Python\lib\site-packages\transformers\trainer.py", line
1635, in training_step
loss = self.compute_loss(model, inputs)
File "D:\Tool\Install\Python\lib\site-packages\transformers\trainer.py", line
1667, in compute_loss
outputs = model(**inputs)
File "D:\Tool\Install\Python\lib\site-packages\torch\nn\modules\module.py", li
ne 727, in _call_impl
result = self.forward(*input, **kwargs)
File "D:\Tool\Install\Python\lib\site-packages\transformers\models\bert\modeli
ng_bert.py", line 1679, in forward
outputs = self.bert(
File "D:\Tool\Install\Python\lib\site-packages\torch\nn\modules\module.py", li
ne 727, in _call_impl
result = self.forward(*input, **kwargs)
File "D:\Tool\Install\Python\lib\site-packages\transformers\models\bert\modeli
ng_bert.py", line 964, in forward
embedding_output = self.embeddings(
File "D:\Tool\Install\Python\lib\site-packages\torch\nn\modules\module.py", li
ne 727, in _call_impl
result = self.forward(*input, **kwargs)
File "D:\Tool\Install\Python\lib\site-packages\transformers\models\bert\modeli
ng_bert.py", line 207, in forward
embeddings += position_embeddings
RuntimeError: The size of tensor a (716) must match the size of tensor b (512) at non-singleton dimension 1
5%|██▏ | 12/231 [03:07<56:54, 15.59s/i
t]`
| 05-28-2021 03:43:43 | 05-28-2021 03:43:43 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,914 | closed | How to get back the identified words from LayoutLMForTokenClassification? | I am using LayoutLMForTokenClassification as described [here](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForTokenClassification_on_FUNSD.ipynb). In the end, the tutorial shows an annotated image with identified classes for various tokens.
How can I get back the original words as well to be annotated along with the labels?
I tried to read the words with tokenizer.decode(input_ids).split(" ") but the tokenizer broke words into multiple tokens which it wasn't supposed to. So, I have more words/outputs/boxes that I am supposed to have.
| 05-28-2021 03:16:30 | 05-28-2021 03:16:30 | Hi,
A solution for this can be the following (taken from my [Fine-tuning BERT for NER notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/BERT/Custom_Named_Entity_Recognition_with_BERT_only_first_wordpiece.ipynb)):

In my notebook for `LayoutLMForTokenClassification`, only the label for the first word piece of each word matters. In HuggingFace Transformers, a tokenizer takes an additional parameter called `return_offsets_mapping` which can be set to `True` to return the (char_start, char_end) for each token.
You can use this to determine whether a token is the first wordpiece of a word, or not. As we are only interested in the label of the first wordpiece, you can assign its label to be the label for the entire word.
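A rough sketch of that idea (assuming a fast tokenizer; the example text and variable names are just for illustration):
```python
from transformers import LayoutLMTokenizerFast

tokenizer = LayoutLMTokenizerFast.from_pretrained("microsoft/layoutlm-base-uncased")

text = "Invoice number 12345"
encoding = tokenizer(text, return_offsets_mapping=True)
tokens = tokenizer.convert_ids_to_tokens(encoding["input_ids"])

# offset_mapping gives (char_start, char_end) per token, and (0, 0) for special tokens;
# a token is the first wordpiece of a word if it starts the text or follows a space
for idx, (start, end) in enumerate(encoding["offset_mapping"]):
    if end > start and (start == 0 or text[start - 1] == " "):
        print(idx, tokens[idx], "-> keep the predicted label at this position")
```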
Do you understand?
<|||||>I do. Thanks. I tried doing this by referring your BERT code. But I am getting this error, unfortunately.

Apologies if I messed up. This is the first time that I am working with transformers.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,913 | closed | Inference for pinned model keeps loading | I have pinned the enterprise model: `ligolab/DxRoberta`
This model is pinned for instant start.
When I try the Inference API `fill-mask` task, it responds instantly
However, when I try API call to embedding pipeline https://api-inference.huggingface.co/pipeline/feature-extraction/ligolab/DxRoberta , I keep getting the message: `{'error': 'Model ligolab/DxRoberta is currently loading', 'estimated_time': 20}` and status does not change with time.
API call to embedding pipeline was working yesterday when I tested it.
| 05-28-2021 01:22:25 | 05-28-2021 01:22:25 | after I have unpinned the model embeddings pipeline started working again (unless you did something on the back end)<|||||>I have repeated the experiment: pinned model - embedding pipeline API returns "loading" status; unpinned model - embedding pipeline returns valid results (after model 'warm up'). Look like inference API to embedding pipeline stops working if the model is pinned for instant inference access. <|||||>Tagging @Narsil for visibility (API support issues are best handled over email if possible!)<|||||>Thanks for tagging me, the `ligolab/DxRoberta` is defined a `fill-mask` by default, so that was what was being pinned down leading the issues you were encountering. You can override that by changing the `pipeline_tag` in the model card (if you want).
There is currently no way to specify the `task` when pinning, so I did it manually for now! You should be good to go!<|||||>If we change `pipeline_tag`, do we still need to use the API endpoint `/pipeline/feature-extraction/`?<|||||>No, if you change the default tag, then the regular route /models/{MODEL} will work!<|||||>@Narsil, could you share a snippet of using `pipeline_tag` in the card? I don't recall seeing this option in the documentation https://github.com/huggingface/model_card.<|||||>Just `pipeline_tag: xxx`, see https://huggingface.co/docs#how-is-a-models-type-of-inference-api-and-widget-determined
transformers | 11,912 | closed | Distillation of Pegasus using Pseudo labeling |
### Who can help
@sgugger
@patrickvonplaten
Models:
- Distillation of Pegasus
## Information
The model I am using (google/pegasus-xsum):
The problem:
- Trying to implement Pegasus Distillation using [Pseudo Labeling](https://github.com/huggingface/transformers/blob/master/examples/research_projects/seq2seq-distillation/precomputed_pseudo_labels.md) in [PRE-TRAINED SUMMARIZATION DISTILLATION](https://arxiv.org/pdf/2010.13002v2.pdf)
- By copying layers from the Teacher model, freezing the positional, token embeddings, all Encoder layers
- The model trained for two epochs on Xsum-Dataset using cross-entropy loss function between logits of student and output
generated from the teacher model
- Generating outputs from the student model gives repeated words and poor generation, although the losses function decreases from 8 to 0.7647 in training and 0.5424 in validation
```python
[have have have have have have have have have have have have have have have have have have have have have have have have have have have have have have have have have have have have have have have have have have have have have wing wing wing wing wing wing wing wing wing wing wing wing wing']
```
How can I improve the generation of the model
| 05-27-2021 23:06:27 | 05-27-2021 23:06:27 | |
transformers | 11,911 | closed | Fix a condition in test_generate_with_head_masking | Fix a glitch in a condition in `test_generate_with_headmasking`, i.e.
```diff
- if set(head_masking.keys()) < set([*signature.parameters.keys()]):
+ if set(head_masking.keys()) > set([*signature.parameters.keys()]):
continue
```
This PR also fixes usage of head_mask for bigbird_pegasus and speech2text models.
**Reviewer:** @patrickvonplaten | 05-27-2021 20:48:53 | 05-27-2021 20:48:53 | |
transformers | 11,910 | closed | xla_spawn.py: xm.get_ordinal() got an unexpected keyword argument 'local' | ## Environment info
- `transformers` version: 4.7.0.dev0
- Platform: Linux-5.4.109+-x86_64-with-Ubuntu-18.04-bionic (working on Colab with TPU)
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu101 (False)
- Tensorflow version (GPU?): 2.5.0 (False)
- Using GPU in script?: No, using TPU
- Using distributed or parallel set-up in script?: number_of_cores = 8
### Who can help
@sgugger, @patil-suraj
## Information
Model I am using: Bert
The problem arises when using:
* [ ] the official example scripts:
```bash
python /transformers/examples/pytorch/xla_spawn.py --num_cores=8 \
/transformers/examples/pytorch/language-modeling/run_mlm.py (--run_mlm.py args)
```
The tasks I am working on is:
* Pretraining BERT with TPU
## To reproduce
Steps to reproduce the behavior:
1. install necessary packages:
```bash
pip install git+https://github.com/huggingface/transformers
cd /content/transformers/examples/pytorch/language-modeling
pip install -r requirements.txt
pip install cloud-tpu-client==0.10 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.8.1-cp37-cp37m-linux_x86_64.whl
```
2. run xla_spawn with minimal args passed to run_mlm: specify a small .txt TRAIN_FILE and an OUTPUT_DIR:
```bash
python xla_spawn.py \
--num_cores=8 \
language-modeling/run_mlm.py \
--train_file $TRAIN_FILE \
--model_name_or_path bert-base-uncased \
--output_dir $OUTPUT_DIR \
--overwrite_output_dir True \
--do_train True \
```
I get this error (for different TPU cores):
```
Exception in device=TPU:0: get_ordinal() got an unexpected keyword argument 'local'
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 329, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
File "/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 323, in _start_fn
fn(gindex, *args)
File "/content/drive/My Drive/Thesis/transformers/examples/pytorch/language-modeling/run_mlm.py", line 493, in _mp_fn
main()
File "/content/drive/My Drive/Thesis/transformers/examples/pytorch/language-modeling/run_mlm.py", line 451, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1193, in train
self.state.is_local_process_zero = self.is_local_process_zero()
File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1784, in is_local_process_zero
return self.args.local_process_index == 0
File "/usr/local/lib/python3.7/dist-packages/transformers/file_utils.py", line 1605, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/transformers/training_args.py", line 864, in local_process_index
return xm.get_ordinal(local=True)
TypeError: get_ordinal() got an unexpected keyword argument 'local'
```
## Expected behavior
The training should run without errors. I achieved this by simply replacing line 864 of /transformers/training_args.py:
```python
return xm.get_ordinal(local=True)
```
with:
```python
return xm.get_local_ordinal()
```
Following torch docs at:
https://pytorch.org/xla/release/1.5/_modules/torch_xla/core/xla_model.html
If this is the correct syntax (and this behaviour is not due to something wrong in my environment), this easy fix should be enough. My model trained correctly.
| 05-27-2021 20:06:19 | 05-27-2021 20:06:19 | Thanks for the catch! Since you have the proper fix indeed, would like to make a PR with it?<|||||>Done, thanks!<|||||>Closed by #11922 |
transformers | 11,909 | closed | FlaxGPTNeo Draft PR | # What does this PR do?
Add FlaxGPTNeo to HuggingFace Models!
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 05-27-2021 20:00:18 | 05-27-2021 20:00:18 | Hey @zanussbaum,
Regarding how to proceed with the implementation, could you maybe post your questions here and tag @patil-suraj and @patrickvonplaten so that we can move forward? :-)<|||||>Hey @patrickvonplaten, I actually chatted with Suraj this morning and cleared my questions up about the Self Attention Module. I am working on implementing it and hope to have something out this weekend!
|
transformers | 11,908 | closed | Fine tuning with transformer models for Regression tasks | - `transformers` version: Bert, Albert, openai-gpt2
- Tensorflow version (GPU?): 2.5.0
## Information
Model I am using : Bert, Albert, openai-gpt2
The problem arises when using:
* [x] my own modified scripts: (give details below) <br>
- performed fine tuning
The tasks I am working on is:
* [x] my own task or dataset: (give details below)<br>
- I have been trying to use BertModel, ALBERT and GPT2 models for fine-tuning on my regression task and I was only able to produce unwanted results. I will mention them below: <br>
- I tried it two ways (a rough sketch of the setup is shown below): <br>
1. I used CLS token embeddings and fine-tuned my entire custom model, but it produced some random number repeating over and over in my output matrix space.<br>
2. I simply passed CLS token embeddings to the feed-forward NN. In this case it also produced some random number and no learning is seen here.<br>
<br>
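Roughly, my setup looks like this (simplified sketch, not my exact code):
```python
import tensorflow as tf
from transformers import TFAutoModel

encoder = TFAutoModel.from_pretrained("bert-base-uncased")
# encoder.trainable = False  # frozen encoder for setup 2; left trainable for setup 1

input_ids = tf.keras.Input(shape=(128,), dtype=tf.int32, name="input_ids")
attention_mask = tf.keras.Input(shape=(128,), dtype=tf.int32, name="attention_mask")

# take the CLS token embedding and feed it to a small regression head
cls_embedding = encoder(input_ids, attention_mask=attention_mask).last_hidden_state[:, 0, :]
output = tf.keras.layers.Dense(1)(cls_embedding)  # no activation for regression

model = tf.keras.Model(inputs=[input_ids, attention_mask], outputs=output)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5), loss="mse")
```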
**What can be the solution to this problem? Are there any issues with transformers with respect to regression?** | 05-27-2021 19:40:44 | 05-27-2021 19:40:44 | I think it's better to ask this question on the [forum](https://discuss.huggingface.co/) rather than here. For example, all questions related to training BERT for regression can be found [here](https://discuss.huggingface.co/search?q=bert%20regression). |
transformers | 11,907 | closed | Add conversion from TF to PT for Tapas retrieval models | # What does this PR do?
Table Retrieval models based on Tapas as described [here](https://arxiv.org/pdf/2103.12011.pdf) just got published in the [Tapas repository](https://github.com/google-research/tapas). The existing conversion function does not work with the retrieval models, so I added support to convert them to Pytorch.
Unfortunately, this only converts the language model without the down projection layer. However, I think this might still be useful to some people who, for instance, want to fine-tune the pre-trained models.
Unfortunately, I do not have the time at the moment to add the down projection layer myself.
## Who can review?
@NielsRogge
| 05-27-2021 17:35:11 | 05-27-2021 17:35:11 | Thanks for this, it's certainly something I'd like to add in the future. The TAPAS team seems quite active, they released [yet another paper involving TAPAS](https://arxiv.org/abs/2106.00479) (to work on larger tables).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@NielsRogge I just quickly reviewed this script and it looks fine. Is it possible to know why this PR became stale and was automatically closed? Is there anything wrong with the script I should be aware of?
I'm currently working on finetuning the TAPAS retrieval model for a research project, just wanted to have your thoughts on this before running the script and uploading the model to the Huggingface hub.<|||||>@jonathanherzig Just wanted to confirm with you, in this case `bert` is the table encoder and `bert_1` is the question encoder, right? <|||||>Hi @xhlulu ,
Sorry, but I am not familiar with the implementation details in this version of TAPAS... probably @NielsRogge can help.
Best,
Jonathan<|||||>No worries, thanks Jonathan! |
transformers | 11,906 | closed | Added Sequence Classification class in GPTNeo | # Added Sequence Classification Class in GPT Neo Model
Fixes #11811
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. #11811
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten @patil-suraj @sgugger
| 05-27-2021 15:10:21 | 05-27-2021 15:10:21 | |
transformers | 11,905 | closed | Customize pretrained model for model hub | Hi community,
I would like to add mean pooling step inside a custom SentenceTransformer class derived from the model sentence-transformers/stsb-xlm-r-multilingual, in order to avoid to do this supplementary step after getting the tokens embeddings.
My aim is to push this custom model onto model hub. If not using this custom step, it is trivial as below:
```python
from transformers import AutoTokenizer, AutoModel

# Simple export
## Instantiate the model
model_name = "sentence-transformers/stsb-xlm-r-multilingual"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

## Save the model and tokenizer files into cloned repository
model.save_pretrained("path/to/repo/clone/your-model-name")
tokenizer.save_pretrained("path/to/repo/clone/your-model-name")
```
However, after defining my custom class SentenceTransformerCustom I can’t manage to push on model hub the definition of this class:
```python
import transformers
import torch

#### Custom export ####

## 1. Load feature-extraction pipeline with specific sts model
model_name = "sentence-transformers/stsb-xlm-r-multilingual"
pipeline_name = "feature-extraction"
nlp = transformers.pipeline(pipeline_name, model=model_name, tokenizer=model_name)
tokenizer = nlp.tokenizer

## 2. Setting up a simple torch model, which inherits from the XLMRobertaModel model. The only thing we add is a weighted summation over the token embeddings and a clamp to prevent zero-division errors.
class SentenceTransformerCustom(transformers.XLMRobertaModel):
    def __init__(self, config):
        super().__init__(config)
        # Naming alias for ONNX output specification
        # Makes it easier to identify the layer
        self.sentence_embedding = torch.nn.Identity()

    def forward(self, input_ids, attention_mask):
        # Get the token embeddings from the base model
        token_embeddings = super().forward(
            input_ids,
            attention_mask=attention_mask
        )[0]
        # Stack the pooling layer on top of it
        input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
        sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
        sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
        return self.sentence_embedding(sum_embeddings / sum_mask)

## 3. Create the custom model based on the config of the original pipeline
model = SentenceTransformerCustom(config=nlp.model.config).from_pretrained(model_name)

## 4. Save the model and tokenizer files into cloned repository
model.save_pretrained("/home/matthieu/Deployment/HF/stsb-xlm-r-multilingual")
tokenizer.save_pretrained("/home/matthieu/Deployment/HF/stsb-xlm-r-multilingual")
```
Do I need to place this custom class definition inside a specific .py file ? Or is there anything to do in order to correctly import this custom class from model hub?
Thanks! | 05-27-2021 14:04:14 | 05-27-2021 14:04:14 | Maybe of interest to @nreimers <|||||>Hi @Matthieu-Tinycoaching
I was sadly not able to reproduce your error. Have you uploaded such a model to the hub? Could you post the link here?
And what does your code to load the model look like?<|||||>Hi @nreimers
I retried with including the custom class definition when loading the model and it worked.
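For anyone landing here later, the loading side now looks roughly like this (simplified; the repo id is a placeholder):
```python
from transformers import AutoTokenizer

# the SentenceTransformerCustom class definition from above has to be imported/executed first
model = SentenceTransformerCustom.from_pretrained("your-username/your-model-name")
tokenizer = AutoTokenizer.from_pretrained("your-username/your-model-name")
```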
|
transformers | 11,904 | closed | 'error': 'Model Matthieu/stsb-xlm-r-multilingual is currently loading' | Hello,
I have pushed on model hub (https://huggingface.co/Matthieu/stsb-xlm-r-multilingual) a pretrained sentence transformer model (https://huggingface.co/sentence-transformers/stsb-xlm-r-multilingual).
However, when trying to get a prediction via the API_URL I still got the following error:
`{'error': 'Model Matthieu/stsb-xlm-r-multilingual is currently loading', 'estimated_time': 44.49033436}`
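For reference, the call I'm making looks roughly like this (token redacted, minimal sketch):
```python
import requests

API_URL = "https://api-inference.huggingface.co/pipeline/feature-extraction/Matthieu/stsb-xlm-r-multilingual"
headers = {"Authorization": "Bearer <API_TOKEN>"}  # placeholder token

response = requests.post(API_URL, headers=headers, json={"inputs": "Some example sentence."})
print(response.json())
```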
How could I deal with this problem?
Thanks!
| 05-27-2021 13:59:55 | 05-27-2021 13:59:55 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,903 | closed | Problem when freezing all GPT2 model except the LM head | ## Environment info
- `transformers` version: 4.6.1
- Platform: Linux-4.15.0-142-generic-x86_64-with-debian-stretch-sid
- Python version: 3.7.7
- PyTorch version (GPU?): 1.5.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
Models:
- gpt2: @patrickvonplaten, @LysandreJik
## Information
Model I am using (Bert, XLNet ...): GPT2
The problem arises when using:
[*] the official example scripts: (give details below)
When I try to print all the named parameters of GPT2 model with LM head, `model.lm_head` does not appear in the list.
In my experiment, I tried to freeze all the parameters except the lm head, however, the lm head is frozen together when model.transformer.wte is frozen.
## To reproduce
Steps to reproduce the behavior:
1. Load model
```
from transformers import AutoModelForCausalLM
gpt2 = AutoModelForCausalLM.from_pretrained("gpt2")
```
2. freeze the transformer part
```
for p in gpt2.transformer.parameters():
p.requires_grad=False
```
or just:
```
for p in gpt2.transformer.wte.parameters():
p.requires_grad=False
```
3. check lm_head
```
for p in gpt2.lm_head.parameters():
print(p.requires_grad)
```
and the output of the third step is False.
4. When I try printing all the named parameters
```
components = [k for k,v in gpt2.named_parameters()]
print(components)
```
The output is as follows:
['transformer.wte.weight', 'transformer.wpe.weight', 'transformer.h.0.ln_1.weight', 'transformer.h.0.ln_1.bias', 'transformer.h.0.attn.c_attn.weight', 'transformer.h.0.attn.c_attn.bias', 'transformer.h.0.attn.c_proj.weight', 'transformer.h.0.attn.c_proj.bias', 'transformer.h.0.ln_2.weight', 'transformer.h.0.ln_2.bias', 'transformer.h.0.mlp.c_fc.weight', 'transformer.h.0.mlp.c_fc.bias', 'transformer.h.0.mlp.c_proj.weight', 'transformer.h.0.mlp.c_proj.bias', 'transformer.h.1.ln_1.weight', 'transformer.h.1.ln_1.bias', 'transformer.h.1.attn.c_attn.weight', 'transformer.h.1.attn.c_attn.bias', 'transformer.h.1.attn.c_proj.weight', 'transformer.h.1.attn.c_proj.bias', 'transformer.h.1.ln_2.weight', 'transformer.h.1.ln_2.bias', 'transformer.h.1.mlp.c_fc.weight', 'transformer.h.1.mlp.c_fc.bias', 'transformer.h.1.mlp.c_proj.weight', 'transformer.h.1.mlp.c_proj.bias', 'transformer.h.2.ln_1.weight', 'transformer.h.2.ln_1.bias', 'transformer.h.2.attn.c_attn.weight', 'transformer.h.2.attn.c_attn.bias', 'transformer.h.2.attn.c_proj.weight', 'transformer.h.2.attn.c_proj.bias', 'transformer.h.2.ln_2.weight', 'transformer.h.2.ln_2.bias', 'transformer.h.2.mlp.c_fc.weight', 'transformer.h.2.mlp.c_fc.bias', 'transformer.h.2.mlp.c_proj.weight', 'transformer.h.2.mlp.c_proj.bias', 'transformer.h.3.ln_1.weight', 'transformer.h.3.ln_1.bias', 'transformer.h.3.attn.c_attn.weight', 'transformer.h.3.attn.c_attn.bias', 'transformer.h.3.attn.c_proj.weight', 'transformer.h.3.attn.c_proj.bias', 'transformer.h.3.ln_2.weight', 'transformer.h.3.ln_2.bias', 'transformer.h.3.mlp.c_fc.weight', 'transformer.h.3.mlp.c_fc.bias', 'transformer.h.3.mlp.c_proj.weight', 'transformer.h.3.mlp.c_proj.bias', 'transformer.h.4.ln_1.weight', 'transformer.h.4.ln_1.bias', 'transformer.h.4.attn.c_attn.weight', 'transformer.h.4.attn.c_attn.bias', 'transformer.h.4.attn.c_proj.weight', 'transformer.h.4.attn.c_proj.bias', 'transformer.h.4.ln_2.weight', 'transformer.h.4.ln_2.bias', 'transformer.h.4.mlp.c_fc.weight', 'transformer.h.4.mlp.c_fc.bias', 'transformer.h.4.mlp.c_proj.weight', 'transformer.h.4.mlp.c_proj.bias', 'transformer.h.5.ln_1.weight', 'transformer.h.5.ln_1.bias', 'transformer.h.5.attn.c_attn.weight', 'transformer.h.5.attn.c_attn.bias', 'transformer.h.5.attn.c_proj.weight', 'transformer.h.5.attn.c_proj.bias', 'transformer.h.5.ln_2.weight', 'transformer.h.5.ln_2.bias', 'transformer.h.5.mlp.c_fc.weight', 'transformer.h.5.mlp.c_fc.bias', 'transformer.h.5.mlp.c_proj.weight', 'transformer.h.5.mlp.c_proj.bias', 'transformer.h.6.ln_1.weight', 'transformer.h.6.ln_1.bias', 'transformer.h.6.attn.c_attn.weight', 'transformer.h.6.attn.c_attn.bias', 'transformer.h.6.attn.c_proj.weight', 'transformer.h.6.attn.c_proj.bias', 'transformer.h.6.ln_2.weight', 'transformer.h.6.ln_2.bias', 'transformer.h.6.mlp.c_fc.weight', 'transformer.h.6.mlp.c_fc.bias', 'transformer.h.6.mlp.c_proj.weight', 'transformer.h.6.mlp.c_proj.bias', 'transformer.h.7.ln_1.weight', 'transformer.h.7.ln_1.bias', 'transformer.h.7.attn.c_attn.weight', 'transformer.h.7.attn.c_attn.bias', 'transformer.h.7.attn.c_proj.weight', 'transformer.h.7.attn.c_proj.bias', 'transformer.h.7.ln_2.weight', 'transformer.h.7.ln_2.bias', 'transformer.h.7.mlp.c_fc.weight', 'transformer.h.7.mlp.c_fc.bias', 'transformer.h.7.mlp.c_proj.weight', 'transformer.h.7.mlp.c_proj.bias', 'transformer.h.8.ln_1.weight', 'transformer.h.8.ln_1.bias', 'transformer.h.8.attn.c_attn.weight', 'transformer.h.8.attn.c_attn.bias', 'transformer.h.8.attn.c_proj.weight', 'transformer.h.8.attn.c_proj.bias', 
'transformer.h.8.ln_2.weight', 'transformer.h.8.ln_2.bias', 'transformer.h.8.mlp.c_fc.weight', 'transformer.h.8.mlp.c_fc.bias', 'transformer.h.8.mlp.c_proj.weight', 'transformer.h.8.mlp.c_proj.bias', 'transformer.h.9.ln_1.weight', 'transformer.h.9.ln_1.bias', 'transformer.h.9.attn.c_attn.weight', 'transformer.h.9.attn.c_attn.bias', 'transformer.h.9.attn.c_proj.weight', 'transformer.h.9.attn.c_proj.bias', 'transformer.h.9.ln_2.weight', 'transformer.h.9.ln_2.bias', 'transformer.h.9.mlp.c_fc.weight', 'transformer.h.9.mlp.c_fc.bias', 'transformer.h.9.mlp.c_proj.weight', 'transformer.h.9.mlp.c_proj.bias', 'transformer.h.10.ln_1.weight', 'transformer.h.10.ln_1.bias', 'transformer.h.10.attn.c_attn.weight', 'transformer.h.10.attn.c_attn.bias', 'transformer.h.10.attn.c_proj.weight', 'transformer.h.10.attn.c_proj.bias', 'transformer.h.10.ln_2.weight', 'transformer.h.10.ln_2.bias', 'transformer.h.10.mlp.c_fc.weight', 'transformer.h.10.mlp.c_fc.bias', 'transformer.h.10.mlp.c_proj.weight', 'transformer.h.10.mlp.c_proj.bias', 'transformer.h.11.ln_1.weight', 'transformer.h.11.ln_1.bias', 'transformer.h.11.attn.c_attn.weight', 'transformer.h.11.attn.c_attn.bias', 'transformer.h.11.attn.c_proj.weight', 'transformer.h.11.attn.c_proj.bias', 'transformer.h.11.ln_2.weight', 'transformer.h.11.ln_2.bias', 'transformer.h.11.mlp.c_fc.weight', 'transformer.h.11.mlp.c_fc.bias', 'transformer.h.11.mlp.c_proj.weight', 'transformer.h.11.mlp.c_proj.bias', 'transformer.ln_f.weight', 'transformer.ln_f.bias']
## Expected behavior
May I ask about the connection between the LM head and the wte layer, and is it possible to freeze the GPT2 model except the LM head? | 05-27-2021 13:21:26 | 05-27-2021 13:21:26 | Hi! In GPT-2, as with most models, the LM head is tied to the embeddings: it has the same weights.
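A quick way to see the tie (illustrative check):
```python
from transformers import AutoModelForCausalLM

gpt2 = AutoModelForCausalLM.from_pretrained("gpt2")
print(gpt2.lm_head.weight is gpt2.transformer.wte.weight)  # True: one shared Parameter
# this sharing is also why lm_head does not show up separately in named_parameters()
```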
You can play around with the `tie_word_embeddings` configuration option, but your LM head will be randomly initialized.<|||||>Thank you very much! |
transformers | 11,902 | closed | [Flax] return attentions | # What does this PR do?
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-27-2021 12:08:52 | 05-27-2021 12:08:52 | |
transformers | 11,901 | closed | [Flax] Add attention weights outputs to all models | # 🚀 Feature request
At the moment we cannot return a list of attention weight outputs in Flax as we can do in PyTorch.
In PyTorch, there is a `output_attentions` boolean in the forward call of every function, see [here](https://github.com/huggingface/transformers/blob/42fe0dc23e4a7495ebd08185f5850315a1a12dc0/src/transformers/models/bert/modeling_bert.py#L528) which when set to True collects all attention weights and returns them as a tuple.
In PyTorch, the attention weights are returned (if `output_attentions=True`) from the self-attention layer *e.g.* here: https://github.com/huggingface/transformers/blob/42fe0dc23e4a7495ebd08185f5850315a1a12dc0/src/transformers/models/bert/modeling_bert.py#L331 and then passed with the outputs.
Currently, this is not implemented in Flax and needs to be done. At the moment the function [`dot_product_attention`](https://github.com/google/flax/blob/6fb839c640de80f887580a533b222c6dddf04c0d/flax/linen/attention.py#L109) is used in every Flax model which makes it impossible to retrieve the attention weights, see [here](https://github.com/huggingface/transformers/blob/42fe0dc23e4a7495ebd08185f5850315a1a12dc0/src/transformers/models/bert/modeling_flax_bert.py#L244). However recently the Flax authors refactored this function into a smaller one called [`dot_product_attention_weights`](https://github.com/google/flax/blob/6fb839c640de80f887580a533b222c6dddf04c0d/flax/linen/attention.py#L37) which would allow us to correctly retrieve the attention weights if needed. To do so all `dot_product_attention` functions should be replaced by `dot_product_attention_weights`, followed by a `jnp.einsum`, see [here](https://github.com/google/flax/blob/6fb839c640de80f887580a533b222c6dddf04c0d/flax/linen/attention.py#L162) so that we can retrieve the attention weights.
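For illustration, a self-contained sketch (dummy arrays, default settings) of what `dot_product_attention_weights` plus `jnp.einsum` gives us compared to the fused `dot_product_attention`:
```python
import jax
import jax.numpy as jnp
from flax.linen.attention import dot_product_attention_weights

batch, seq_len, n_heads, head_dim = 2, 5, 12, 64
rng = jax.random.PRNGKey(0)
query = jax.random.normal(rng, (batch, seq_len, n_heads, head_dim))
key = jax.random.normal(rng, (batch, seq_len, n_heads, head_dim))
value = jax.random.normal(rng, (batch, seq_len, n_heads, head_dim))

# dot_product_attention_weights exposes the per-head attention probabilities ...
attn_weights = dot_product_attention_weights(query, key, deterministic=True)

# ... and the attention output is recovered with the same contraction that
# dot_product_attention applies internally after the weights.
attn_output = jnp.einsum("...hqk,...khd->...qhd", attn_weights, value)

print(attn_weights.shape)  # (2, 12, 5, 5) -> returned when output_attentions=True
print(attn_output.shape)   # (2, 5, 12, 64)
```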
Next, the whole `output_attentions` logic should be implemented for all Flax models, analogously to `output_hidden_states`.
| 05-27-2021 11:11:26 | 05-27-2021 11:11:26 | Also, adding this feature will require us to bump up the Flax dependency to `>=0.3.4` for `flax` in https://github.com/huggingface/transformers/blob/master/setup.py<|||||>I am starting with Bert.<|||||>Hi @patrickvonplaten @patil-suraj https://github.com/huggingface/transformers/pull/11918 |
transformers | 11,900 | closed | [Community Notebooks] Add Emotion Speech Noteboook | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Adds a notebook for Emotion Classification
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-27-2021 09:44:33 | 05-27-2021 09:44:33 | Amazing notebook @m3hrdadfi ! |
transformers | 11,899 | closed | Provides an option to select the parallel mode of the Trainer. | # 🚀 Feature request
Provides an option to select the parallel mode of the Trainer.
## Motivation
For multiple GPUs, the Trainer uses `nn.DataParallel` for parallel computing by default; however, this approach results in high memory usage on the first GPU. Please provide an API to switch to `nn.parallel.DistributedDataParallel`. Also, for the `Trainer.predict()` function, is there an option to turn off parallel computing?
## Your contribution
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
@sgugger | 05-27-2021 09:03:05 | 05-27-2021 09:03:05 | This is already implemented, it just depends on how you launch your training script. To use distributed data parallel, you have to launch it with `torch.distributed.launch`.<|||||>> This is already implemented, it just depends on how you launch your training script. To use distributed data parallel, you have to launch it with `torch.distributed.launch`.
Hi~
How do I do this in Jupyter?
Also, what should I do if I use DataParallel for training but only want to use one GPU for prediction?
As `DataParallel` requires `drop_last`, which is not acceptable in the predict phase.
Thanks
<|||||>You can't do this directly in jupyter, you have to launch a script using the pytorch utilities (it's not a Trainer limitation, it's a PyTorch one). You can completely predict in parallel with the `Trainer`, it will complete the last batch to make it the same size as the others and then truncate the predictions.<|||||>> You can't do this directly in jupyter, you have to launch a script using the pytorch utilities (it's not a Trainer limitation, it's a PyTorch one). You can completely predict in parallel with the `Trainer`, it will complete the last batch to make it the same size as the others and then truncate the predictions.
But we can't truncate predictions because it's a contest or a client's demand and we need all the test results we can get.
As a demo, #11833
https://github.com/huggingface/transformers/issues/11833<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,898 | closed | multi gpu errors | I want to use multiple GPUs to train, but it errors:
model = nn.DataParallel(model)
model = model.cuda()
model.train_model(train_df, eval_data=eval_df)
torch.nn.modules.module.ModuleAttributeError: 'DataParallel' object has no attribute 'train_model'
So how can I use multiple GPUs? | 05-27-2021 08:02:08 | 05-27-2021 08:02:08 | Is this related to `transformers`? <|||||>Yes, this is simpletransformers, and I find multi-GPU training with transformers hard to run. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,897 | closed | Fix Tensorflow Bart-like positional encoding | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #11724
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-27-2021 04:45:16 | 05-27-2021 04:45:16 | |
transformers | 11,896 | closed | Update deepspeed config to reflect hyperparameter search parameters | # What does this PR do?
This PR adds a few lines of code to the Trainer so that it rebuilds the Deepspeed config when running hyperparameter_search. As is, if you run hyperparameter_search while using Deepspeed the TrainingArguments are updated but the Deepspeed config is not, the two become out of sync, and Deepspeed effectively ignores the parameters of any hyperparameter search trials which are set by the Deepspeed config.
This fixes #11894
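In rough terms (a hedged sketch rather than the exact diff — the module path and attribute names follow the 4.6-era integration and are assumptions), the change amounts to rebuilding the DeepSpeed config from the updated `TrainingArguments` during trial setup:
```python
# Sketch only: the attachment point (`hf_deepspeed_config`) is an assumption.
def _hp_search_setup(self, trial):
    ...  # existing logic that copies the trial's parameters onto self.args
    if self.args.deepspeed:
        # Re-create the HF <-> DeepSpeed config wrapper so that optimizer/scheduler
        # params and gradient_accumulation_steps reflect the current trial.
        from transformers.integrations import DeepSpeedConfigHF

        self.args.hf_deepspeed_config = DeepSpeedConfigHF(self.args)
```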
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. --> https://github.com/huggingface/transformers/issues/11894
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
I ran the Deepspeed tests and Trainer tests locally; everything passed except for `test_stage3_nvme_offload` but I think that was a hardware compatibility issue on my local machine.
## Who can review?
@stas00 (and maybe whoever implemented hyperparameter_search() in the Trainer)
| 05-27-2021 04:43:22 | 05-27-2021 04:43:22 | Code quality check passes on my local machine 🤔
Happy to change formatting if necessary, just not sure what to change.<|||||>You probably have a different version of `black`.
Please try:
```
cd transformers
pip install -e .[dev]
make fixup
```
this should re-align the versions.<|||||>Turns out I just ran the style check on the wrong branch the first time; my bad.
Should be fixed now. <|||||>the doc job failure is unrelated - we will re-run it when other jobs finish - the CI has been quite flakey...<|||||>Thanks for your PR! |
transformers | 11,895 | closed | Small error in documentation / Typo | The documentation for the BART decoder layer says that it expects the hidden states, as well as the encoder hidden states, in "(seq_len, batch, embed_dim)" instead of the "(batch, seq_len, embed_dim)" that is actually expected. This led to a bit of confusion, so it would be great if it were corrected! :)
Relevant lines:
https://github.com/huggingface/transformers/blob/996a315e76f6c972c854990e6114226a91bc0a90/src/transformers/models/bart/modeling_bart.py#L373
https://github.com/huggingface/transformers/blob/996a315e76f6c972c854990e6114226a91bc0a90/src/transformers/models/bart/modeling_bart.py#L376
@patrickvonplaten | 05-27-2021 03:40:12 | 05-27-2021 03:40:12 | Thanks for spotting. Feel free to open a PR to fix this :)
By the way, I see you're the main author of MDETR (amazing work!). I'm currently adding DETR to the repo (see #11653), so if you are up to help me add MDETR to the repo, feel free to reach out :)
<|||||>Ooh thanks :D That sounds great, I'd be happy to help :) Will send you an email. |
transformers | 11,894 | closed | Deepspeed integration ignores Optuna trial parameters in hyperparameter_search | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.1
- Platform: Linux-4.19.0-16-cloud-amd64-x86_64-with-glibc2.10
- Python version: 3.8.10
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
Also in case it matters, my deepspeed version is 0.3.16.
### Who can help
Maybe @stas00 ? I'm not sure.
## Information
Model I am using: custom Pytorch transformer model, although I don't think it matters here.
The problem arises when using:
* [ ] the official example scripts: (probably, I haven't tried)
* [x] my own modified scripts: I'm running a simple custom MLM training script using the transformers trainer.
## To reproduce
Steps to reproduce the behavior:
1. Run a hyperparameter search using the transformers Trainer with the [default zero-2 config from the documentation](https://huggingface.co/transformers/main_classes/trainer.html#zero-2-example)
2. Observe that parameters taken from the Deepspeed config like gradient accumulation steps and optimizer/scheduler params are not updated to reflect the Optuna trial parameters.
Here is some output from the script I'm running, with the middle omitted for brevity. I'm printing trial params myself but I do it from inside the Trainer, so these are definitely the same trial params the Trainer is getting.
```
[I 2021-05-27 02:20:41,133] A new study created in memory with name: no-name-26248906-22c0-4666-a7d4-159173902bc5
current trial params {'learning_rate': 2.0670100636747183e-05, 'adam_beta2': 0.98, 'gradient_accumulation_steps': 6, 'dropout': 0.0, 'local_attention_window': 192, 'weight_decay': 0.01, 'warmup_ratio': 0.12, 'deep_transformer_stack_layers': 10}
.....
[2021-05-27 02:21:01,787] [INFO] [config.py:751:print] initial_dynamic_scale ........ 65536
[2021-05-27 02:21:01,787] [INFO] [config.py:751:print] loss_scale ................... 0
[2021-05-27 02:21:01,787] [INFO] [config.py:751:print] memory_breakdown ............. False
[2021-05-27 02:21:01,787] [INFO] [config.py:751:print] optimizer_legacy_fusion ...... False
[2021-05-27 02:21:01,787] [INFO] [config.py:751:print] optimizer_name ............... adamw
[2021-05-27 02:21:01,787] [INFO] [config.py:751:print] optimizer_params ............. {'lr': 5e-05, 'betas': [0.9, 0.999], 'eps': 1e-08, 'weight_decay': 0.0}
[2021-05-27 02:21:01,787] [INFO] [config.py:751:print] pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0}
[2021-05-27 02:21:01,788] [INFO] [config.py:751:print] pld_enabled .................. False
....
```
I looked around briefly and I _think_ the issue comes from the fact that the Deepspeed config [is built as part of TrainingArgs](https://github.com/huggingface/transformers/blob/996a315e76f6c972c854990e6114226a91bc0a90/src/transformers/training_args.py#L677) and then presumably never updated after that, even if the training args change. Consequently, when the [training args are updated](https://github.com/huggingface/transformers/blob/996a315e76f6c972c854990e6114226a91bc0a90/src/transformers/trainer.py#L861) as part of setup for the hyperparameter search, it's not reflected in the Deepspeed config.
Note that this might also be an issue with Ray, I just haven't tried it with Ray.
## Expected behavior
Ideally Deepspeed would run with config/parameters that respected the content of the Optuna trials, although I know that getting two external integrations to play well together is easier said than done. In the meantime I'm going to see if I can work around this by using an HF scheduler and HF optimizer in the hopes that those will take their parameters from the training arguments directly.
| 05-27-2021 02:37:47 | 05-27-2021 02:37:47 | Thanks for the report, @Mindful!
It's very possible that I mistakenly bypassed some logic of optuna, as I have never used it.
Would you like to have a look and see if you can fix it - since you have already been researching this - basically go into `src/transformers/trainer.py` look for `optuna` and see where `if self.deepspeed` skips it. Shouldn't be too difficult if you already have a test for it.
Please let me know if I can help.
Thank you!<|||||>@stas00
I would need to look a little closer to be sure but it really just looks like a timing issue - the Deepspeed config is built off the Trainer config, which then has its state changed *afterwards* by the Optuna integration so the two get out of sync.
I am definitely open to trying to fix this myself (I've been looking for a reason to contribute), I just have two concerns:
1. I'm pretty swamped right now, at least for the next week or two. Things should hopefully calm down after that, but it's hard for me to promise I can get to it by a certain date.
2. It seems like the only options for fixing this are either somehow making the Deepspeed config automatically update to reflect updates to the training config (which would be complicated and probably overkill) or changing the hyperparameter_search method so that it also updates the Deepspeed config if necessary. I think the latter is the better option, but going attribute-by-attribute would basically mean duplicating the logic for copying training parameters from the TrainingArguments to the deepspeed config. I think the _best_ option is to just construct a new DeepSpeedConfigHF based on the updated training parameters, but there's a lot of logic there and I'm not sure if this is safe to do.
Actually, if the fix is as easy as rebuilding DeepSpeedConfigHF with the updated TrainingArguments, this might be relatively quick. I'm not sure.
Edit: I wrote this out and then went back and looked at the code and I think doing the above might fix it (in which case this is an easy fix). Let me try this and get back to you.<|||||>Yeah, I just changed files locally so that the hyperparameter search rebuilds the DeepSpeedConfigHF object and that seems to have fixed it. I still need to double check tests pass/etc, but it looks like this was much easier than I thought.
I'll open a PR shortly. <|||||>Awesome. Looking forward to reading your PR, @Mindful - please tag me there.
For tests just run:
```
RUN_SLOW=1 pytest tests/deepspeed
```
and if you have an idea for a new test that would be a bonus. |
transformers | 11,893 | closed | RAG-2nd2end-revamp | same as [shamanez:rag-retriever-end2end](https://github.com/huggingface/transformers/pull/11655) PR. In the previous version, I got some version control problems.
Really sorry to come up with duplicate PRs :(.
@lhoestq @patrickvonplaten
I conducted an experiment to check the difference and added a simple test run code.
Finally I added two test functions to test_modeling_rag.py and test_retrieval_rag.py.

| 05-27-2021 00:14:16 | 05-27-2021 00:14:16 | I added all the minor changes and I would like to thank @patrickvonplaten and @lhoestq for the enormous amounts of support and advice. :)<|||||> Hey
Thanks for giving this end-to-end version. I am trying to run it and see its performance on my own dataset (much smaller and domain-specific, to see if I get performance gains with end-to-end training), but at the moment it is throwing a pickling error with the dummy dataset in the code. I am still stuck trying to understand how to fix this. Any idea how this can be dealt with?
File "/home/shraeyb/anaconda3/envs/new_st/lib/python3.7/site-packages/ray/cloudpickle/cloudpickle_fast.py", line 580, in dump
return Pickler.dump(self, obj)
File "pyarrow/io.pxi", line 1021, in pyarrow.lib.Buffer.__reduce_ex__
AttributeError: module 'pickle' has no attribute 'PickleBuffer
Cheers
<|||||>Could you please add this as a new issue. Also I would like to see the
entire error log. Seems like something is wrong with your RAY installation.
<|||||>Sure, just created a new issue with full log |
transformers | 11,892 | closed | Link official Cloud TPU JAX docs | # What does this PR do?
Adds a link to the new official Cloud TPU VM JAX docs.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
| 05-26-2021 19:20:09 | 05-26-2021 19:20:09 | |
transformers | 11,891 | closed | GPT2 saved pb file cannot handle dynamic sequence length | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.1
- Platform: Darwin-20.4.0-x86_64-i386-64bit
- Python version: 3.7.7
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models: gpt2 @patrickvonplaten, @LysandreJik
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```python3
import tensorflow as tf
import tensorflow_hub as hub
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = TFGPT2Model.from_pretrained('gpt2')
tf.saved_model.save(model, "gpt2")
model_hub = hub.KerasLayer("gpt2")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model_hub(encoded_input)
```
It complains that the input shape is mismatched: the expected input is [None, 5], and I think 5 comes from the dummy input defined inside file_utils.py. In other words, must a saved GPT-2 TF Hub model use a fixed sequence length (the dummy input length)?
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
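For reference, a sketch of one possible workaround (illustration only — not an official TF Hub integration): export with an explicit serving signature whose sequence dimension is left dynamic instead of the dummy-input length:
```python
import tensorflow as tf
from transformers import TFGPT2Model

model = TFGPT2Model.from_pretrained("gpt2")

# Both batch and sequence dimensions are left as None in the signature.
@tf.function(input_signature=[tf.TensorSpec([None, None], tf.int32, name="input_ids")])
def serving(input_ids):
    outputs = model(input_ids)
    return {"last_hidden_state": outputs.last_hidden_state}

tf.saved_model.save(model, "gpt2_dynamic", signatures={"serving_default": serving})
```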
## Expected behavior
Should run without issue after loading tfhub
<!-- A clear and concise description of what you would expect to happen. -->
| 05-26-2021 18:58:13 | 05-26-2021 18:58:13 | Hi! As far as I'm aware, we don't support TF Hub integrations right now. If you want to save or load a TF2 model from Transformers, you can use the `save_pretrained` method like so:
```
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = TFGPT2Model.from_pretrained('gpt2')
model.save_pretrained("saved_gpt2")
new_model = TFGPT2Model.from_pretrained('saved_gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = new_model(encoded_input)
```
I'm the TF maintainer here, and one of the things I'm actively working on is making our Tensorflow support cleaner and more idiomatic. If there's a reason you really want to export to the TF Hub or normal Keras formats, please let us know, and we'll take that into account when planning development!<|||||>Hello @Rocketknight1
Thank you. I think export to TF Hub is an important feature if you want to deploy to real production. As far as I know, many companies would not use saved model weights, but rather do serving with a packed computational graph. Therefore, it would be great if you could take TF Hub export into account in the future. I will close this ticket; by the way, do you want me to open another one as a feature request?
Thank you! |
transformers | 11,890 | closed | changing find_batch_size to work with tokenizer outputs | trainer_pt_utils.find_batch_size currently does not recognize the batch size of BatchEncoding objects. This can cause an error when a trainer relies on find_batch_size to report the number of observed examples in the evaluation loop, which is the case when the eval dataset is Iterable.
# What does this PR do?
Very simple change that lets find_batch_size find the batch size of BatchEncoding objects.
Fixes # (11882)
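A sketch of the idea (not the exact diff): treat tokenizer `BatchEncoding` outputs like any other mapping when searching for a tensor to read the batch size from:
```python
from collections.abc import Mapping

import torch

def find_batch_size(tensors):
    """Simplified sketch of trainer_pt_utils.find_batch_size."""
    if isinstance(tensors, (list, tuple)):
        for t in tensors:
            result = find_batch_size(t)
            if result is not None:
                return result
    elif isinstance(tensors, Mapping):  # covers dict *and* BatchEncoding (a UserDict subclass)
        for value in tensors.values():
            result = find_batch_size(value)
            if result is not None:
                return result
    elif isinstance(tensors, torch.Tensor):
        return tensors.shape[0] if len(tensors.shape) >= 1 else None
    return None
# e.g. find_batch_size(tokenizer(["a", "b"], return_tensors="pt")) -> 2
```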
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x ] Was this discussed via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
https://github.com/huggingface/transformers/issues/11882
- [x ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x ] Did you write any new necessary tests?
@LysandreJik @sgugger
| 05-26-2021 15:35:23 | 05-26-2021 15:35:23 | Mmm, I don't think I am allowed to push commits on your branch and the CI decided to not run on your PR. Could you push an empty commit to trigger it?
```
git commit --allow-empty -m "Trigger CI"
```
Should do this (and then push).
<|||||>ah looks like it needs approval:
```First-time contributors need a maintainer to approve running workflows```<|||||>No that's for the Git actions (and I clicked yes). Your empty commit did trigger circle CI so all is good, just have to wait for the green tick :-) |
transformers | 11,889 | closed | Hubert | # What does this PR do?
This PR adds Hubert:
- https://ai.facebook.com/blog/hubert-self-supervised-representation-learning-for-speech-recognition-generation-and-compression
- https://arxiv.org/pdf/2106.07447.pdf?fbclid=IwAR3hI4uGqc4mV5j-ob8R5yLu-BaamVoe9ncxUoVmgFLjJXsE1IevP0rdNYY
Checkpoints are available here:
https://huggingface.co/models?filter=hubert
Hubert is essentially the same as Wav2Vec2 with some minor differences. The pretraining is completely different though, which is why we need to put it in a new modeling class. Pretraining functionality will be added in a second PR.
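A usage sketch of the kind of API this adds (the exact checkpoint id and the availability of a fine-tuned CTC head are assumptions based on the hub filter above):
```python
import numpy as np
import torch
from transformers import Wav2Vec2Processor, HubertForCTC

# Checkpoint id assumed from the hub filter above; adjust if it differs.
processor = Wav2Vec2Processor.from_pretrained("facebook/hubert-large-ls960-ft")
model = HubertForCTC.from_pretrained("facebook/hubert-large-ls960-ft")

speech = np.zeros(16_000, dtype=np.float32)  # 1s of silence standing in for real 16kHz audio
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```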
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-26-2021 14:57:25 | 05-26-2021 14:57:25 | > batching
It uses `Wav2Vec2Processor` for feature extraction etc |
transformers | 11,888 | closed | Add a new pipeline for the Relation Extraction task. | # 🚀 Feature request
Add a new pipeline option for the Relation Extraction task: `nlp = pipeline('relation-extraction')`
## Motivation
Relation Extraction between named entities is a well-known NLP task. For example, when you get entities relative to medications (let's say our entity types are DRUG and FORM (tablet, capsule, etc.)), you want to know which FORM entity goes with which DRUG entity, etc.
Reference: https://portal.dbmi.hms.harvard.edu/projects/n2c2-2018-t2/
This task is not limited to the biomedical domain.
## Your contribution
I still need to play more with the HF API to contribute !
But, as I see it, the pipeline would return a list of dictionaries, each dictionary representing an identified relation in the text.
The relation extraction model would probably sit on top of the NER model.
There are implementations of such models [here](https://nlpprogress.com/english/relationship_extraction.html).
| 05-26-2021 14:46:59 | 05-26-2021 14:46:59 | We have a voluntarily generic `token-classification` pipeline that should be suited for this, no?<|||||>> We have a voluntarily generic `token-classification` pipeline that should be suited for this, no?
As far as I understood, `token-classification` is just an alias for `ner` (in the source code, we can observe: `NerPipeline = TokenClassificationPipeline`).
The relation extraction part would be to classify pairs of entities (given by the `ner`/`token-classification` part of the pipeline) to a set of relation classes, such as `IS_CEO_OF_ORG`.
I don't think it is possible to do this for now. Thanks for the reply!<|||||>This seems to be a popular repo for RE: https://github.com/thunlp/OpenNRE<|||||>For now, there's only 1 model that is capable of performing relation extraction out-of-the-box, and that's [LUKE](https://huggingface.co/transformers/model_doc/luke.html#overview). You can use `LukeForEntityPairClassification` to classify the relationship between two entities in a sentence:
```
from transformers import LukeTokenizer, LukeForEntityPairClassification
tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-large-finetuned-tacred")
model = LukeForEntityPairClassification.from_pretrained("studio-ousia/luke-large-finetuned-tacred")
text = "Beyoncé lives in Los Angeles."
entity_spans = [(0, 7), (17, 28)] # character-based entity spans corresponding to "Beyoncé" and "Los Angeles"
inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
However, relation extraction is a task that is solved in many different ways. So it's not straightforward to define a generic pipeline for it, in which you can plug different models.<|||||>Thanks @NielsRogge for your answer. I have 3 questions then:
1. What do you mean exactly by:
> relation extraction is a task that is solved in many different ways.
because the task of RE itself is quite standardized, isn't it?
2. Is the LUKE model you showed me usable with *any* dataset? If yes, which format of the dataset is needed?
3. Wouldn't be good to choose *one* approach (maybe SpanBERT?, cf [this](https://kr2ml.github.io/2020/papers/KR2ML_12_paper.pdf)) and implement it in the HF `pipeline`?<|||||>> 1. What do you mean exactly by:
>
> > relation extraction is a task that is solved in many different ways.
NER is always solved in the same way (in the Transformers library, at least), namely by placing a token classification head on top of the final hidden states of the tokens. However, relation extraction can be solved in many ways. LUKE for example has a very specific way, namely it considers a word sequence (tokens) and an entity sequence (entities), and it places a linear layer on top of the concatenation of the entity tokens. Another model, like [R-BERT](https://arxiv.org/abs/1905.08284) for example, does it differently. From the paper: "(...) We apply the average operation to get a vector representation for each of the two target entities. Then after an activation operation (i.e. tanh), we add a fully connected layer to each of the two vectors (...)":

In other words, as every relation extraction model does it in a different way, it's not straightforward to define a general pipeline for it.
> 2. Is the LUKE model you showed me usable with _any_ dataset? If yes, which format of the dataset is needed?
Yes, you can use it with any dataset. I fine-tuned it myself on a custom dataset. You just need to prepare a csv file with 4 columns: sentence, entity 1, entity 2, relationship. I will prepare a notebook that illustrates how you can do it easily.
> 3\. Wouldn't be good to choose _one_ approach (maybe SpanBERT?, cf [this](https://kr2ml.github.io/2020/papers/KR2ML_12_paper.pdf)) and implement it in the HF `pipeline`?
A pipeline is meant to be used for several models, I don't think it's nice to have a pipeline that only works for a single model.<|||||>Thanks for all your answers @NielsRogge!
I understand it better now. In fact, the way you do it makes me think of the QA task, but here the context is replaced by the entity spans, and the output is the one of a `SequenceClassification` task.
Looks pretty good for becoming the standard :wink:!<|||||>@xegulon here's a notebook that illustrates how to fine-tune `LukeForEntityPairClassification` on a custom dataset for relation extraction: https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LUKE/Supervised_relation_extraction_with_LukeForEntityPairClassification.ipynb<|||||>Thanks a lot @NielsRogge !
Hoping to see `pipeline('relation-classification')` and `pipeline('joint-ner-and-re')` someday ;) !<|||||>> @xegulon here's a notebook that illustrates how to fine-tune `LukeForEntityPairClassification` on a custom dataset for relation extraction: https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LUKE/Supervised_relation_extraction_with_LukeForEntityPairClassification.ipynb
Thanks a lot @NielsRogge for this notebook. You saved me a lot of time!
I have a doubt, a statement we're trying to annotate is:
Mukesh Ambani married Nita Ambani in 1985 and they have two sons, Akash and Anant, and a daughter, Isha.
There are multiple entities in one sentence and different relations between them.
How should i go about incorporating this in my dataset?
1. The sentence column will have the above statement multiple times until all relations and entities are captured. The entity and label columns will change as per entities.
2. Making this a multi label problem -- (which is more tricky)
Would love to know your approach on this. Thanks!<|||||>> How should i go about incorporating this in my dataset?
I think you need to create several training examples for this single sentence. Each training example should be <sentence, entity 1, entity 2, relationship>. So indeed, option 1 is what I would do.
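Concretely, a small sketch of that expansion (the relation labels below are invented for illustration):
```python
from itertools import permutations

sentence = "Mukesh Ambani married Nita Ambani in 1985 and they have two sons, Akash and Anant, and a daughter, Isha."
entities = ["Mukesh Ambani", "Nita Ambani", "Akash", "Anant", "Isha"]
annotated = {
    ("Mukesh Ambani", "Nita Ambani"): "per:spouse",   # made-up labels, for illustration only
    ("Mukesh Ambani", "Akash"): "per:children",
}

rows = [
    {"sentence": sentence, "entity_1": head, "entity_2": tail,
     "label": annotated.get((head, tail), "no_relation")}
    for head, tail in permutations(entities, 2)
]
print(len(rows))  # 20 candidate pairs, most of them labelled no_relation
```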
There are other approaches to relation extraction, in which one applies a binary classifier to each possible pair of entities (an example is [this paper](https://www.sciencedirect.com/science/article/abs/pii/S095741741830455X?via%3Dihub)). However, LUKE doesn't work that way.<|||||>> I think you need to create several training examples for this single sentence. Each training example should be <sentence, entity 1, entity 2, relationship>. So indeed, option 1 is what I would do.
I have gone ahead with the LUKE approach.
The [TACRED dataset](https://nlp.stanford.edu/pubs/zhang2017tacred.pdf) has 79.5% relation labels as 'no_relation'.
This seems logical because not every sentence consists of relations and also reduces false positives. (my model will be tested against newspaper articles, blogs, wiki text, etc)
I have two doubts:
1. Whilst making a custom dataset (like the one in your notebook) should we also include sentences that have no relations between entities? What percentage of no_relation labels would you suggest for our custom dataset ?
2. How should we go about labelling this sentence: (such sentences are common in news articles or excerpts from interviews)
"Pep Guardiola was unhappy with the passing during the game."
This has only one entity (entity1 = PERSON). Do we consider this sentence since entity2 would be empty?
We have been discarding these as of now.
<|||||>> Whilst making a custom dataset (like the one in your notebook) should we also include sentences that have no relations between entities? What percentage of no_relation labels would you suggest for our custom dataset ?
Yes, for sure. In that way, you can let the model learn that there are also a lot of sentences where there's no relationship between 2 entities. Probably, the percentage of no_relation labels depends on your domain, but it will probably be the most occuring class.
> How should we go about labelling this sentence: (such sentences are common in news articles or excerpts from interviews)
I don't think you need to add sentences that only have a single entity, you can simply discard these. |
transformers | 11,887 | closed | Wrong subword aggregation when using aggregation_strategy | ## Environment info
- `transformers` version: 4.7.0.dev0
- Platform: Windows
- Python version: 3.9.4
- PyTorch version (GPU?): No
- Tensorflow version (GPU?): No
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@Narsil
@francescorubbo
@elk-cloner
## Information
xlm-roberta-large-finetuned-conll02-dutch
The problem arises when using aggregation_strategy.
## To reproduce
Steps to reproduce the behavior:
Given this code:
```sentence = "Groenlinks praat over Schiphol."
nlp = pipeline('ner', model='xlm-roberta-large-finetuned-conll02-dutch')
nlp(sentence)
```
I get the following result:
```
[{'entity': 'B-ORG',
'score': 0.9769433,
'index': 1,
'word': '▁Groen',
'start': 0,
'end': 5},
{'entity': 'I-ORG',
'score': 0.9935022,
'index': 2,
'word': 'link',
'start': 5,
'end': 9},
{'entity': 'B-LOC',
'score': 0.9999288,
'index': 6,
'word': '▁Schi',
'start': 22,
'end': 26},
{'entity': 'I-LOC',
'score': 0.99987257,
'index': 8,
'word': 'hol',
'start': 27,
'end': 30}]
```
We received subwords, whereas I would prefer to have real words. I found that `aggregation_strategy` was added in the latest release (master branch, 4.7.0.dev0). In an attempt to fix this, I tried this:
```
sentence = "Groenlinks praat over Schiphol."
nlp = pipeline('ner', model='xlm-roberta-large-finetuned-conll02-dutch', aggregation_strategy="max")
nlp(sentence)
```
Which yields:
```
[{'entity_group': 'ORG',
'score': 0.98522276,
'word': 'Groenlink',
'start': 0,
'end': 9},
{'entity_group': 'LOC',
'score': 0.99987257,
'word': 'hol',
'start': 27,
'end': 30}]
```
## Expected behavior
This is different than expected, as subwords are merged in the wrong way. `Groenlink` and `hol` were both not part of the original sentence. I would expect this:
```
[{'entity_group': 'ORG',
'score': 0.98522276,
'word': 'Groenlinks',
'start': 0,
'end': 9},
{'entity_group': 'LOC',
'score': 0.99987257,
'word': 'Schiphol',
'start': 27,
'end': 30}]
```
Do you have any clues how to fix this?
| 05-26-2021 13:10:58 | 05-26-2021 13:10:58 | I suspect we might have an issue recognizing subwords when using the BPE tokenizer.
Using a BERT based model works as expected:
```
>> nlp = transformers.pipeline('ner', model='wietsedv/bert-base-dutch-cased-finetuned-conll2002-ner', aggregation_strategy='first')
>> nlp("Groenlinks praat over Schiphol.")
[{'entity_group': 'org',
'score': 0.99999315,
'word': 'Groenlinks',
'start': 0,
'end': 10},
{'entity_group': 'loc',
'score': 0.9999975,
'word': 'Schiphol',
'start': 22,
'end': 30}]
```
I'll take a closer look at the logic when using BPE sub-tokens and report back.<|||||>@LysandreJik @sgugger The problem is how we decide whether or not a token is a subword [here](https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/token_classification.py#L273),
where we compare the token length with the corresponding span in the original text.
For WordPiece this works because `Groenlinks` is tokenized as `['Groen', '##link', '##s']`, so the last two tokens are tagged as subwords. However BPE tokenizes as `['_Groen', 'link', 's']`, so we incorrectly tag `_Groen` as subword and the other two tokens as words.<|||||>Similar to https://github.com/huggingface/transformers/issues/11794<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Any update on this issue?<|||||>@francescorubbo thanks for investigating this. So the `gather_pre_entities` function of the NER pipeline needs an update to also work with BPE tokenizers.
cc @Narsil
Do you mind opening a PR to support BPE tokenizers?<|||||>I think the problem is tricky, I don't think it is properly linked to BPE, but more for tokenizers that are word aware vs not.
Right now, we use tokenizers that use `continuing_subword_prefix` to determine if a token is a subword.
I don't think there is a "correct" way to do that with byte level BPE like gpt2 (roberta) as they don't posess the notion of "word".
As mentioned in a previous issue, if we can find a good heuristic that would be great, but byte BPE can :
- have space as a prefix or suffix
- use a different char than ' ' for space (_ for spm, `G for gpt2)
- Potentially contain spaces (hence different words) within a single token (although I don't think I've seen it done for major tokenizers)
So classifying subwords for these tokenizers is always going to be tricky. We could however disable "word"-based strategies for tokenizers that do no provide "continuing_subword_prefix". It would be more correct (but less usable for sure)<|||||>I result is still incorrect after the new changes @Narsil.
Given the nightly build: '4.10.0.dev0'
With given code
```
nlp = pipeline('ner', model='xlm-roberta-large-finetuned-conll02-dutch', grouped_entities=True)
sentence = "Groenlinks praat over Schiphol."
nlp(sentence)
```
yields
```
[{'entity_group': 'ORG',
'score': 0.98522276,
'word': 'Groenlink',
'start': 0,
'end': 9},
{'entity_group': 'LOC',
'score': 0.9999288,
'word': 'Schi',
'start': 22,
'end': 26},
{'entity_group': 'LOC',
'score': 0.99987257,
'word': 'hol',
'start': 27,
'end': 30}]
```
The subwords are still not merged correctly as the found entities do not exist in the original text. I also tried setting `aggregation_strategy=AggregationStrategy.SIMPLE`, but that did not help either. Am I doing something wrong?
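For reference, a sketch of a word-aware heuristic along the lines discussed above — using the fast tokenizer's word alignment instead of comparing token lengths. This is not what the pipeline currently does, and, as noted above, no heuristic is fully robust for tokenizers without a notion of "word":
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large-finetuned-conll02-dutch")
encoding = tokenizer("Groenlinks praat over Schiphol.")
word_ids = encoding.word_ids()
tokens = tokenizer.convert_ids_to_tokens(encoding["input_ids"])

# A token is treated as a continuation subword if it shares a word id with the
# token right before it.
is_subword = [
    wid is not None and i > 0 and wid == word_ids[i - 1]
    for i, wid in enumerate(word_ids)
]
print(list(zip(tokens, is_subword)))
```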
|
transformers | 11,886 | closed | [Flax] Allow dataclasses to be jitted | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Previously it was not possible to jit HF's `ModelOutput`. By changing `dataclass` to `flax.struct.dataclass`, this is now possible.
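A minimal illustration (toy class, not the real `ModelOutput`) of why this matters: `flax.struct.dataclass` registers the class as a JAX pytree, so it can be returned from a `jax.jit`-compiled function:
```python
import jax
import jax.numpy as jnp
from flax import struct

@struct.dataclass
class ToyOutput:
    last_hidden_state: jnp.ndarray

@jax.jit
def forward(x):
    # Returning the dataclass works because it is a registered pytree.
    return ToyOutput(last_hidden_state=x * 2)

print(forward(jnp.ones((2, 3))).last_hidden_state)
```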
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-26-2021 11:52:37 | 05-26-2021 11:52:37 | |
transformers | 11,885 | closed | Find the requested files in the cached path without the internet | # 🚀 Feature request
The pipeline needs an internet connection to find the cached path. A request usually consumes time.
## Motivation
Could we search only locally, for better performance?
## Your contribution
My test code:
```
from transformers import pipeline
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-de-en")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-de-en")
de_en_translator = pipeline("translation_de_to_en", model=model, tokenizer=tokenizer)
translation = de_en_translator("Ein kleiner Test.")
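
# Sketch of the offline pattern (assumes the same model id): save once while online,
# then build the pipeline from the local directory so no hub lookup is needed afterwards.
tokenizer.save_pretrained("./opus-mt-de-en")
model.save_pretrained("./opus-mt-de-en")
offline_translator = pipeline("translation_de_to_en", model="./opus-mt-de-en", tokenizer="./opus-mt-de-en")
offline_translation = offline_translator("Ein kleiner Test.")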
``` | 05-26-2021 11:41:52 | 05-26-2021 11:41:52 | You can save your model and tokenizer to a directory using `save_pretrained` and load them from there! You only need to specify the directory path to the model/tokenizer arguments you pass to the `pipeline` method.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
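A minimal sketch of the save-once / load-offline workflow suggested in the comment above (the local directory name is a placeholder):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline

# once, with an internet connection: download and save to a local directory
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-de-en")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-de-en")
tokenizer.save_pretrained("./opus-mt-de-en-local")
model.save_pretrained("./opus-mt-de-en-local")

# later, without any internet connection: load everything from the local directory
tokenizer = AutoTokenizer.from_pretrained("./opus-mt-de-en-local")
model = AutoModelForSeq2SeqLM.from_pretrained("./opus-mt-de-en-local")
de_en_translator = pipeline("translation_de_to_en", model=model, tokenizer=tokenizer)
print(de_en_translator("Ein kleiner Test."))
```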
transformers | 11,884 | closed | Mask token mismatch with the model on hosted inference API of Model Hub | ### Who can help
@LysandreJik
@julien-c
@mfuntowicz
## Information
In my model card: https://huggingface.co/ethanyt/guwenbert-base, I used to be able to run the hosted inference successfully, but recently it prompted an error: `"<mask>" must be present in your input.`
My model uses RoBERTa MLM and a BERT tokenizer, so the mask token is actually "[MASK]". I have already set it in `tokenizer_config.json`, but the Inference API still mismatches it.
In the past it was OK, but recently it started to prompt an error. It seems the front end has started to double-check the mask token. How can I set the mask token in an appropriate way? Is setting the mask token for the Inference API documented?
Thanks!
## To reproduce
Steps to reproduce the behavior:
1. Go to https://huggingface.co/ethanyt/guwenbert-base
2. Run an example with "[MASK]"
## Expected behavior
In the past it was OK. See snapshot in https://github.com/ethan-yt/guwenbert/blob/main/README_EN.md
| 05-26-2021 10:30:38 | 05-26-2021 10:30:38 | In the past, there was no error.
<img width="510" alt="lm-demo" src="https://user-images.githubusercontent.com/9592150/120172538-bd4e1b80-c235-11eb-8576-6446a2dd0ed8.png">
I don't know when it started to emit an error.
<img width="544" alt="image" src="https://user-images.githubusercontent.com/9592150/120172600-ce972800-c235-11eb-91b0-a231cfaf4f5f.png">
<|||||>I've fixed it by explicitly specifying the `mask_token` in your model card metadata: https://huggingface.co/ethanyt/guwenbert-base/commit/30aaff24928389096312600511a9ca2fad1b3974<|||||>thanks for reporting!<|||||>> thanks for reporting!
Thanks!
|
transformers | 11,883 | closed | Add FlaxCLIP | # What does this PR do?
This PR adds the CLIP model in JAX/Flax. | 05-26-2021 10:06:27 | 05-26-2021 10:06:27 | - Added jitted tests for `get_image_features` and `get_text_features`
- `__init__(...)` now takes `input_shape` as an arg, when `None` it's set using default values in `config`.
Merging! |
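For reference, a minimal usage sketch of the two feature helpers mentioned above (class and checkpoint names as I understand them; double-check against the docs):
```python
from transformers import CLIPProcessor, FlaxCLIPModel

model = FlaxCLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# text side: numpy tensors go straight into the Flax model
inputs = processor(text=["a photo of a cat"], return_tensors="np", padding=True)
text_features = model.get_text_features(inputs["input_ids"])
print(text_features.shape)
```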
transformers | 11,882 | closed | BertForMaskedLM training fails when using iterable eval_dataset and DataCollatorForLanguageModeling collator. | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.1
- Platform: Linux-5.8.0-48-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.9
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: False
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Library:
- tokenizers: @LysandreJik
- trainer: @sgugger
-->
## Information
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
* [ x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Instantiate a Trainer with a BertForMaskedLM model, and an iterable dataset passed in for the "eval_dataset", and DataCollatorForLanguageModeling as the collator.
2. Call train()
../../.pyenv/versions/3.7.9/lib/python3.7/site-packages/transformers/trainer.py:1334: in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
../../.pyenv/versions/3.7.9/lib/python3.7/site-packages/transformers/trainer.py:1405: in _maybe_log_save_evaluate
metrics = self.evaluate()
../../.pyenv/versions/3.7.9/lib/python3.7/site-packages/transformers/trainer.py:2011: in evaluate
output.metrics.update(speed_metrics(metric_key_prefix, start_time, output.num_samples))
```python
def speed_metrics(split, start_time, num_samples=None):
runtime = time.time() - start_time
result = {f"{split}_runtime": round(runtime, 4)}
if num_samples is not None:
samples_per_second = 1 / (runtime / num_samples) # ZeroDivisionError: float division by zero here
```
When evaluation_loop() gets called with an iterable eval dataset, it uses the "observed_num_examples" value to return the number of samples:
https://github.com/huggingface/transformers/blob/a9c797f93de97984771b7b902ce1e6b0aed98f96/src/transformers/trainer.py#L2155
```python
observed_num_examples = 0
# Main evaluation loop
for step, inputs in enumerate(dataloader):
# Update the observed num examples
observed_batch_size = find_batch_size(inputs)
if observed_batch_size is not None:
observed_num_examples += observed_batch_size
```
The problem is, transformers.trainer_pt_utils.find_batch_size fails to find the correct batch size if the input is a BatchEncoding object (which is what DataCollatorForLanguageModeling returns if it is passed a dict or BatchEncoding):
https://github.com/huggingface/transformers/blob/0b0a598452b02278075a75f84b5ca7bb457224ad/src/transformers/trainer_pt_utils.py#L106
```python
def find_batch_size(tensors):
"""
Find the first dimension of a tensor in a nested list/tuple/dict of tensors.
"""
if isinstance(tensors, (list, tuple)):
for t in tensors:
result = find_batch_size(t)
if result is not None:
return result
elif isinstance(tensors, dict): # <--- returns false if "tensors" is BatchEncoding, should maybe return True?
for key, value in tensors.items():
result = find_batch_size(value)
if result is not None:
return result
elif isinstance(tensors, torch.Tensor):
return tensors.shape[0] if len(tensors.shape) >= 1 else None
elif isinstance(tensors, np.ndarray):
return tensors.shape[0] if len(tensors.shape) >= 1 else None
```
As a result, the observed_num_examples variable never gets updated, and since the input dataset is iterable, the output of evaluation_loop() has the "num_samples" variable set to 0:
https://github.com/huggingface/transformers/blob/a9c797f93de97984771b7b902ce1e6b0aed98f96/src/transformers/trainer.py#L2212
```python
if not isinstance(eval_dataset, IterableDataset):
num_samples = len(eval_dataset)
elif isinstance(eval_dataset, IterableDatasetShard):
num_samples = eval_dataset.num_examples
else:
num_samples = observed_num_examples # observed_num_examples is falsely set to 0
```
which leads to the above ZeroDivisionError error.
This should be a quick fix in the find_batch_size function unless I am mistaken.
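For illustration, one possible patch (a sketch, not necessarily the fix that was merged) is to check against `Mapping`, which covers both `dict` and `BatchEncoding` (a `UserDict` subclass):
```python
from collections.abc import Mapping

import numpy as np
import torch


def find_batch_size(tensors):
    """Find the first dimension of a tensor in a nested list/tuple/dict/BatchEncoding of tensors."""
    if isinstance(tensors, (list, tuple)):
        for t in tensors:
            result = find_batch_size(t)
            if result is not None:
                return result
    elif isinstance(tensors, Mapping):  # True for dict and for BatchEncoding
        for key, value in tensors.items():
            result = find_batch_size(value)
            if result is not None:
                return result
    elif isinstance(tensors, torch.Tensor):
        return tensors.shape[0] if len(tensors.shape) >= 1 else None
    elif isinstance(tensors, np.ndarray):
        return tensors.shape[0] if len(tensors.shape) >= 1 else None
```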
## Expected behavior
The training finishes with no error.
| 05-26-2021 09:58:51 | 05-26-2021 09:58:51 | Might be of interest to @sgugger <|||||>Fixed by #11890 |
transformers | 11,881 | closed | Adding new Jax Models. | Is there any board to track how many of the current models have a JAX implementation?
I would like to contribute JAX implementations for the remaining ones; which model can I start with? | 05-26-2021 02:41:35 | 05-26-2021 02:41:35 | Hi there, thank you for your interest in JAX.
We plan to add as many models as possible in JAX/Flax. Right now we are working on improving the JAX support in the lib, better JAX/Flax tests, generation, cookie-cutter templates etc so that it'll become easier to add more models faster.
Please stay tuned, we'll soon share more details :)
|
transformers | 11,880 | closed | KeyError: 'labels' in distill_classifier.py | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.1
- Platform: Darwin-19.6.0-x86_64-i386-64bit
- Python version: 3.7.6
- PyTorch version (GPU?): 1.8.1 (False)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Issue
I am trying to run the distill_classifier.py script from transformers/examples/research_projects/zero-shot-distillation/ with my own text data set and labels on the roberta-large-mnli model. There are a few hundred rows of text and 13 class labels. I am running the following in a cell of my notebook:
```
!python transformers/examples/research_projects/zero-shot-distillation/distill_classifier.py \
--data_file ./distill_data/train_unlabeled.txt \
--class_names_file ./distill_data/class_names.txt \
--teacher_name_or_path roberta-large-mnli \
--hypothesis_template "This text is about {}." \
--output_dir ./my_student/distilled
```
The script starts to run but after a short while I receive the following error:
```
Trainer is attempting to log a value of "{'Science': 0, 'Math': 1, 'Social Studies': 2, 'Language Arts': 3, 'Statistics': 4, 'Calculus': 5, 'Linear Algebra': 6, 'Probability': 7, 'Chemistry': 8, 'Biology': 9, 'Supply chain management': 10, 'Economics': 11, 'Pottery': 12}"
for key "label2id" as a parameter.
MLflow's log_param() only accepts values no longer than 250 characters so we dropped this attribute.
0%| | 0/7 [00:00<?, ?it/s]Traceback (most recent call last):
File "transformers/examples/research_projects/zero-shot-distillation/distill_classifier.py", line 338, in <module>
main()
File "transformers/examples/research_projects/zero-shot-distillation/distill_classifier.py", line 328, in main
trainer.train()
File "/opt/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 1272, in train
tr_loss += self.training_step(model, inputs)
File "/opt/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 1734, in training_step
loss = self.compute_loss(model, inputs)
File "transformers/examples/research_projects/zero-shot-distillation/distill_classifier.py", line 119, in compute_loss
target_p = inputs["labels"]
File "/opt/anaconda3/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 231, in __getitem__
return self.data[item]
KeyError: 'labels'
0%| | 0/7 [00:01<?, ?it/s]
```
I have re-examined my labels files and am exactly following this guide for distill_classifier.py
https://colab.research.google.com/drive/1mjBjd0cR8G57ZpsnFCS3ngGyo5nCa9ya?usp=sharing&utm_campaign=Hugging%2BFace&utm_medium=web&utm_source=Hugging_Face_8#scrollTo=ECt06ndcnpyb
Any help would be appreciated to distill!
**Edit:** Updated torch to the latest version and I'm still receiving the same error. I reduced the number of classes from 24 to 13 and still have this issue. When I print the inputs in the compute_loss function, it looks like there is no key for labels:
```
{'attention_mask': tensor([[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
...,
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0]]), 'input_ids': tensor([[ 101, 1999, 2262, ..., 0, 0, 0],
[ 101, 4117, 2007, ..., 0, 0, 0],
[ 101, 2130, 2295, ..., 0, 0, 0],
...,
[ 101, 1999, 2760, ..., 0, 0, 0],
[ 101, 2057, 6614, ..., 0, 0, 0],
[ 101, 2057, 1521, ..., 0, 0, 0]])}
```
Is there an additional parameter that is needed to assign the labels?
**Edit 2:** I just let the colab notebook "Distilling Zero Shot Classification.ipynb" run for a few hours and am receiving the same error with the agnews dataset. It looks like the code in the colab notebook might have an incompatibility with some other files.
**Edit 3:** I have changed datasets and reduced to 3 classes and tried to add the label_names argument
`--label_names ["Carbon emissions", "Energy efficiency", "Water scarcity"]
`
my ./distill_data/class_names.txt file looks like:
```
Carbon Emissions
Energy Efficiency
Water Scarcity
```
and am still facing the same error.
### Who can help
@LysandreJik
@sgugger
@joeddav
| 05-25-2021 22:52:51 | 05-25-2021 22:52:51 | I am facing the same issue and cannot run the Google Colab examples either. Any help is appreciated! <|||||>@joeddav was there a specific point in time (commit) to clone the repo from to get the scripts to run, or has anything changed recently that might have broken the code?<|||||>Experienced the same issue with labels trained on a custom dataset.
## Environment info
torch 1.8.1+cu111
tqdm 4.49.0
transformers 4.5.1
Python 3.8.5 (default, Sep 3 2020, 21:29:08) [MSC v.1916 64 bit (AMD64)]
Windows-10-10.0.17763-SP0
## Issue
Executing this cell:
```
!python transformers/examples/research_projects/zero-shot-distillation/distill_classifier.py \
--data_file email.txt \
--class_names_file class_names.txt \
--hypothesis_template "This text is about {}." \
--student_name_or_path distilbert-base-uncased \
--output_dir ./distilbert-base-uncased-notino-student
```
I'm getting this output:
```
06/09/2021 15:12:04 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 3distributed training: False, 16-bits training: False
06/09/2021 15:12:04 - INFO - __main__ - Training/evaluation parameters DistillTrainingArguments(output_dir='./distilbert-base-uncased-notino-student', overwrite_output_dir=False, do_train=True, do_eval=True, do_predict=False, evaluation_strategy=<IntervalStrategy.NO: 'no'>, prediction_loss_only=False, per_device_train_batch_size=32, per_device_eval_batch_size=128, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=1.0, max_steps=-1, lr_scheduler_type=<SchedulerType.LINEAR: 'linear'>, warmup_ratio=0.0, warmup_steps=0, logging_dir='runs\\Jun09_15-12-04_dcvmdwhanl03', logging_strategy=<IntervalStrategy.STEPS: 'steps'>, logging_first_step=False, logging_steps=500, save_strategy=<IntervalStrategy.STEPS: 'steps'>, save_steps=500, save_total_limit=0, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', fp16_backend='auto', fp16_full_eval=False, local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name='./distilbert-base-uncased-notino-student', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, sharded_ddp=[], deepspeed=None, label_smoothing_factor=0.0, adafactor=False, group_by_length=False, length_column_name='length', report_to=['tensorboard', 'wandb'], ddp_find_unused_parameters=None, dataloader_pin_memory=True, skip_memory_metrics=False, mp_parameters='')
...
100%|##########| 128069/128069 [00:47<00:00, 2710.05ex/s]
[INFO|trainer.py:490] 2021-06-09 18:27:26,719 >> The following columns in the training set don't have a corresponding argument in `DistilBertForSequenceClassification.forward` and have been ignored: text.
[INFO|trainer.py:1013] 2021-06-09 18:27:27,005 >> ***** Running training *****
[INFO|trainer.py:1014] 2021-06-09 18:27:27,011 >> Num examples = 128069
[INFO|trainer.py:1015] 2021-06-09 18:27:27,016 >> Num Epochs = 1
[INFO|trainer.py:1016] 2021-06-09 18:27:27,022 >> Instantaneous batch size per device = 32
[INFO|trainer.py:1017] 2021-06-09 18:27:27,028 >> Total train batch size (w. parallel, distributed & accumulation) = 96
[INFO|trainer.py:1018] 2021-06-09 18:27:27,034 >> Gradient Accumulation steps = 1
[INFO|trainer.py:1019] 2021-06-09 18:27:27,040 >> Total optimization steps = 1335
[INFO|integrations.py:586] 2021-06-09 18:27:27,791 >> Automatic Weights & Biases logging enabled, to disable set os.environ["WANDB_DISABLED"] = "true"
wandb: Currently logged in as: dbs700 (use `wandb login --relogin` to force relogin)
wandb: wandb version 0.10.31 is available! To upgrade, please run:
wandb: $ pip install wandb --upgrade
wandb: Tracking run with wandb version 0.10.30
wandb: Syncing run ./distilbert-base-uncased-notino-student
wandb: View project at https://wandb.ai/dbs700/huggingface
wandb: View run at https://wandb.ai/dbs700/huggingface/runs/14c4hinu
wandb: Run data is saved locally in C:\Users\dmitrii.storozhenko\wandb\run-20210609_182747-14c4hinu
wandb: Run `wandb offline` to turn off syncing.
0%| | 0/1335 [00:00<?, ?it/s]Traceback (most recent call last):
File "transformers/examples/research_projects/zero-shot-distillation/distill_classifier.py", line 338, in <module>
main()
File "transformers/examples/research_projects/zero-shot-distillation/distill_classifier.py", line 328, in main
trainer.train()
File "C:\Users\dmitrii.storozhenko\Anaconda3\lib\site-packages\transformers\trainer.py", line 1120, in train
tr_loss += self.training_step(model, inputs)
File "C:\Users\dmitrii.storozhenko\Anaconda3\lib\site-packages\transformers\trainer.py", line 1524, in training_step
loss = self.compute_loss(model, inputs)
File "transformers/examples/research_projects/zero-shot-distillation/distill_classifier.py", line 119, in compute_loss
target_p = inputs["labels"]
KeyError: 'labels'
wandb: Waiting for W&B process to finish, PID 5992
wandb: Program failed with code 1. Press ctrl-c to abort syncing.
```<|||||>Hi, sorry for the slow response. This is due to [a breaking change in the Datasets API](https://github.com/huggingface/datasets/releases/tag/1.6.2). I'll need to update the script accordingly. In the meantime, use datasets <= 1.6.1 and that should solve the problem.<|||||>That did the trick! |
transformers | 11,879 | closed | Trainer : AttributeError: 'str' object has no attribute '_memory_tracker' | ## Environment info
- `transformers` version: 4.6.0
- Platform: Linux-5.8.0-53-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.0+cu111 (True)
- Tensorflow version (GPU?): 2.4.1 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
Library:
- trainer: @sgugger
## Information
Model I am using (Bert, XLNet ...):
T5ForConditionalGeneration
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Training a T5 from scratch over 20GB of data for one epoch and saving the model checkpoints using the Trainer library
2. Trying to resume with (resume_from_checkpoint="./checkpoint-1320000")
The code :
```
model=T5ForConditionalGeneration.from_pretrained('./checkpoint-1320000/')
%%time
Trainer.train("./T5_model_result/checkpoint-1320000/")
```
The Error message :
```
AttributeError Traceback (most recent call last)
<timed eval> in <module>
~/anaconda3/envs/py38/lib/python3.8/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, **kwargs)
998
999 # memory metrics - must set up as early as possible
-> 1000 self._memory_tracker.start()
1001
1002 args = self.args
AttributeError: 'str' object has no attribute '_memory_tracker'
```
## Expected behavior
The Trainer library should resume from the last checkpoint and continue training.
| 05-25-2021 22:41:46 | 05-25-2021 22:41:46 | The `Trainer` will resume from the last epoch and continue the learning... if you create it. You are using the class directly, not a `Trainer` object. |
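For anyone hitting the same error, a minimal sketch of the intended usage (the training args and dataset are placeholders for the original setup; the key point is calling `train()` on a `Trainer` instance, not on the class):
```python
from transformers import Trainer, TrainingArguments, T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("./T5_model_result/checkpoint-1320000")
training_args = TrainingArguments(output_dir="./T5_model_result")  # plus the original hyperparameters
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,  # the same dataset object used for the original run
)
trainer.train(resume_from_checkpoint="./T5_model_result/checkpoint-1320000")
```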
transformers | 11,878 | closed | [Wav2Vec2ForCTC] example typo fixed | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue) n/a
Fixed Typo:
In the example code for `transformers.Wav2Vec2ForCTC` loss was being computed on `transcription` instead of the `target_transcription` variable. An acquaintance of mine noticed the error, and that it had been corrected elsewhere, namely in a [code snippet for a fairseq example](https://github.com/pytorch/fairseq/blob/master/examples/wav2vec/README.md).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-25-2021 20:52:26 | 05-25-2021 20:52:26 | |
transformers | 11,877 | closed | basic_tokenizer don't preserve token encoding/format | Hello all!
## Environment info
- `transformers` version: 4.3.2
- Platform: Linux-4.19.0-13-cloud-amd64-x86_64-with-glibc2.28
- Python version: 3.8.6
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
- the code was run on jupyter notebook
### Who can help
@LysandreJik
## Issue
I have the following code
```
model = 'microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext'
tokenizer = BertTokenizer.from_pretrained(model, do_lower_case=False)
s = 'View and Print-FDA Drug Safety Communication: FDA requiring color changes to Duragesic (fentanyl) pain patches to aid safetyâ\x80\x95emphasizing that accidental exposure to used patches can cause death'
```
When using `basic_tokenizer`, it changes the token (it does not keep the same form (encoding) as in the original sentence)
```
tokenizer.basic_tokenizer.tokenize(s)
>>> ['View', 'and', 'Print', '-', 'FDA', 'Drug', 'Safety', 'Communication', ':', 'FDA', 'requiring', 'color', 'changes', 'to', 'Duragesic', '(', 'fentanyl', ')', 'pain', 'patches', 'to', 'aid', 'safetyâemphasizing', 'that', 'accidental', 'exposure', 'to', 'used', 'patches', 'can', 'cause', 'death']
```
the original token `safetyâ\x80\x95emphasizing` is tokenized into `safetyâemphasizing`
**Two issues, then:**
- Is this the normal behavior? It seems not, or I am using it wrongly
- There seems to be no documentation about the basic_tokenizer object in the Hugging Face documentation
Any help/explanation would be welcomed :)
| 05-25-2021 20:49:38 | 05-25-2021 20:49:38 | Hello! The basic tokenizer is only a building block of the `BertTokenizer` - and it was not intended to be used independently.
What are you trying to achieve especially, that the `BertTokenizer` cannot?
Usually, it is best to assume that the tokenizer should be used as it is configured on the hub - as it is with that tokenizer that the model was trained, and staying consistent is important to obtain good results.<|||||>Hi thanks for the reply.
My original problem is that I want to decode BERT token ids (int --> word).
If I use BertTokenizer it sometimes generates the [UNK] token, which can't be decoded back to the original token (the real word that produced the [UNK] token). I then use basic_tokenizer to get the list of raw tokens and replace [UNK] with the right token using its index in the sentence. But I am facing inconsistencies.
Here is an example:
```
raw_sent = "patches to aid safetyâ\x80\x95emphasizing"
ids = bert_tokenizer.encode(raw_sent)
tokens = bert_tokenizer.decode(ids)
print(ids)
print(tokens)
```
gives:
```
[2, 13453, 1701, 6974, 1, 3]
'[CLS] patches to aid [UNK] [SEP]'
```
In my pipeline I receive the raw sentence and the list of ids (int), and I want to figure out which word in the sentence produced the [UNK] token.
I do:
```
basic_token = bert_tokenizer.basic_tokenizer.tokenize(raw_sent)
print(basic_token)
['patches', 'to', 'aid', 'safetyâemphasizing']
```
So I know that id `1` in the list `[2, 13453, 1701, 6974, 1, 3]` corresponds to `safetyâemphasizing`. So here is the problem: `safetyâemphasizing` is different from `safetyâ\x80\x95emphasizing` in the original sentence, which leads to further errors later in the pipeline (especially when finding the spans of the token).
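One possible way to recover the original span behind an `[UNK]`, without relying on `basic_tokenizer`, is the fast tokenizer's offset mapping; a sketch (an alternative approach, not a fix for `basic_tokenizer` itself):
```python
from transformers import BertTokenizerFast

fast_tokenizer = BertTokenizerFast.from_pretrained(
    "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext"
)
raw_sent = "patches to aid safetyâ\x80\x95emphasizing"
enc = fast_tokenizer(raw_sent, return_offsets_mapping=True)
for token_id, (start, end) in zip(enc["input_ids"], enc["offset_mapping"]):
    if token_id == fast_tokenizer.unk_token_id:
        # the offsets point back into the untouched raw string, so the span is exact
        print("[UNK] came from:", repr(raw_sent[start:end]), "at", (start, end))
```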
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,876 | closed | Cannot add tokenizer to model repo | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.1
- Platform: Linux-5.4.109+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- Google Colab Pro Notebook
## To reproduce
Steps to reproduce the behavior:
**From Google Colab Notebook,**
1. Push model to new repo:
2. Try to add tokenizer to rep_ur using `use_auth_token`
tokenizer.push_to_hub(repo_url="https://huggingface.co/vitali/Roberta" , use_auth_token="api_****************")`
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```
CalledProcessError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/huggingface_hub/repository.py in git_push(self)
407 encoding="utf-8",
--> 408 cwd=self.local_dir,
409 )
5 frames
/usr/lib/python3.7/subprocess.py in run(input, capture_output, timeout, check, *popenargs, **kwargs)
511 raise CalledProcessError(retcode, process.args,
--> 512 output=stdout, stderr=stderr)
513 return CompletedProcess(process.args, retcode, stdout, stderr)
CalledProcessError: Command '['git', 'push']' returned non-zero exit status 1.
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<ipython-input-51-82c7360864ec> in <module>()
1 # add tokenizer to repo
----> 2 tokenizer.push_to_hub(repo_url='https://huggingface.co/vitali/Roberta' , use_auth_token='api_*******')
/usr/local/lib/python3.7/dist-packages/transformers/file_utils.py in push_to_hub(self, repo_name, repo_url, commit_message, organization, private, use_auth_token)
1891 organization=organization,
1892 private=private,
-> 1893 use_auth_token=use_auth_token,
1894 )
1895
/usr/local/lib/python3.7/dist-packages/transformers/file_utils.py in _push_to_hub(cls, save_directory, save_files, repo_name, repo_url, commit_message, organization, private, use_auth_token)
1959 copy_tree(save_directory, tmp_dir)
1960
-> 1961 return repo.push_to_hub(commit_message=commit_message)
/usr/local/lib/python3.7/dist-packages/huggingface_hub/repository.py in push_to_hub(self, commit_message)
422 self.git_add()
423 self.git_commit(commit_message)
--> 424 return self.git_push()
/usr/local/lib/python3.7/dist-packages/huggingface_hub/repository.py in git_push(self)
410 logger.info(result.stdout)
411 except subprocess.CalledProcessError as exc:
--> 412 raise EnvironmentError(exc.stderr)
413
414 return self.git_head_commit_url()
OSError: error: RPC failed; HTTP 403 curl 22 The requested URL returned error: 403 Forbidden
fatal: The remote end hung up unexpectedly
fatal: The remote end hung up unexpectedly
Everything up-to-date
```
## Expected behavior
Tokenizer should be added to the model repo
| 05-25-2021 18:45:42 | 05-25-2021 18:45:42 | This is a known issue on our side. Can you try once more? cc @n1t0 @sterchelen <|||||>Same result:
```
---------------------------------------------------------------------------
CalledProcessError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/huggingface_hub/repository.py in git_push(self)
407 encoding="utf-8",
--> 408 cwd=self.local_dir,
409 )
5 frames
/usr/lib/python3.7/subprocess.py in run(input, capture_output, timeout, check, *popenargs, **kwargs)
511 raise CalledProcessError(retcode, process.args,
--> 512 output=stdout, stderr=stderr)
513 return CompletedProcess(process.args, retcode, stdout, stderr)
CalledProcessError: Command '['git', 'push']' returned non-zero exit status 1.
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<ipython-input-64-82c7360864ec> in <module>()
1 # add tokenizer to repo
----> 2 tokenizer.push_to_hub(repo_url='https://huggingface.co/vitali/Roberta' , use_auth_token='api_********')
/usr/local/lib/python3.7/dist-packages/transformers/file_utils.py in push_to_hub(self, repo_name, repo_url, commit_message, organization, private, use_auth_token)
1891 organization=organization,
1892 private=private,
-> 1893 use_auth_token=use_auth_token,
1894 )
1895
/usr/local/lib/python3.7/dist-packages/transformers/file_utils.py in _push_to_hub(cls, save_directory, save_files, repo_name, repo_url, commit_message, organization, private, use_auth_token)
1959 copy_tree(save_directory, tmp_dir)
1960
-> 1961 return repo.push_to_hub(commit_message=commit_message)
/usr/local/lib/python3.7/dist-packages/huggingface_hub/repository.py in push_to_hub(self, commit_message)
422 self.git_add()
423 self.git_commit(commit_message)
--> 424 return self.git_push()
/usr/local/lib/python3.7/dist-packages/huggingface_hub/repository.py in git_push(self)
410 logger.info(result.stdout)
411 except subprocess.CalledProcessError as exc:
--> 412 raise EnvironmentError(exc.stderr)
413
414 return self.git_head_commit_url()
OSError: error: RPC failed; HTTP 403 curl 22 The requested URL returned error: 403 Forbidden
fatal: The remote end hung up unexpectedly
fatal: The remote end hung up unexpectedly
Everything up-to-date
```<|||||>Hi @Kvit, I noticed many of your requests being blocked on our side, and should have fixed the problem. Can you try again?<|||||>It worked this time, thank you. |
transformers | 11,875 | closed | [lm examples] replicate --config_overrides addition to other LM examples | This PR https://github.com/huggingface/transformers/pull/11798 created for `run_clm.py` which adds a new feature `--config_overrides` needs to be replayed for other scripts under `examples/pytorch/language-modeling/`.
If you choose to work on this small project, please comment that you're working on it.
And thank you! | 05-25-2021 18:12:51 | 05-25-2021 18:12:51 | I am getting started with this task. <|||||>Thank you, @kumar-abhishek! |
transformers | 11,874 | closed | [AutomaticSpeechRecognitionPipeline] Ensure input tensors are on device | # What does this PR do?
Enables using AutomaticSpeechRecognitionPipeline on GPU.
The feature extractor does not create tensors on the appropriate device, so we call `ensure_tensor_on_device` before feeding the processed inputs to the model.
Fixes #11829
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
@LysandreJik are there tests running on GPU? The other pipelines do not seem to test GPU inference, either.
## Who can review?
@LysandreJik, @Narsil
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR. | 05-25-2021 17:44:55 | 05-25-2021 17:44:55 | |
transformers | 11,873 | closed | Errors in Quickstart Documentation related to GPT-2 | To: @sgugger, @patrickvonplaten, @LysandreJik
## Environment info
- `transformers` version: 4.6.0
- Platform: Linux-5.4.0-73-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: False
### Who can help
@sgugger, @patrickvonplaten, @LysandreJik
## Information
Model I am using (gpt-2 ...):
The problem arises when using:
* Example code in **Quickstart** page on online [documentation](https://huggingface.co/transformers/quickstart.html) section **OpenAI GPT-2** / **Using the past**
### Existing Code
```
for i in range(100):
print(i)
output, past = model(context, past=past)
token = torch.argmax(output[..., -1, :])
generated += [token.tolist()]
context = token.unsqueeze(0)
sequence = tokenizer.decode(generated)
print(sequence)
```
### Errors Encountered in statement ```output, past = model(context, past=past)```
- Obsolete named parameter **past**, replaced by **past_key_values** in current release
- Assignment to ```output, past =``` does not assign expected values
- model() statement returns value of type ```transformers.modeling_outputs.CausalLMOutputWithCrossAttentions```
### Suggested Corrected Version
```
for i in range(100):
print(i)
ret = model(context, past_key_values=past)
output, past = ret.logits, ret.past_key_values
# or
# output, past = ret[0], ret[1]
token = torch.argmax(output[..., -1, :])
generated += [token.tolist()]
context = token.unsqueeze(0)
sequence = tokenizer.decode(generated)
print(sequence)
```
| 05-25-2021 15:15:39 | 05-25-2021 15:15:39 | Oh, this page should not be there anymore, it's part of the old documentation. Out of curiosity, how did you get on it?<|||||>Hi @sgugger -
I think I see the problem. If the user navigates to huggingface.co first, then follows the links, it points to the updated documentation. Also, the link associated with the Transformers github repo points to the current docs..
However, if the user navigates via a Google search and clicks on Quickstart, it redirects to an obsolete version of the docs. See screenshot attached.

<|||||>I have removed the page from the doc website. Hopefully Google will update its recommendation soon!<|||||>Note that there probably are some SEO-related optimisations to our doc site layout to make sure Sitelinks are kept up-to-date (and the latest released version is the one best ranked by Google):
https://support.google.com/webmasters/answer/47334?hl=en
In terms of UX, one can also display a banner on top on the doc for every non-latest version. cc @gary149 <|||||>Note that here it was trying to render that file in the current version (the version selector said stable doc for instance), so it was really confusing.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,872 | closed | modify qa-trainer | I fixed the evaluation failure for the TrainerQA-based script [`examples/pytorch/question-answering/run_qa.py`](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_qa.py) when using distributed training, which has been partially fixed in #11746.
```
Traceback (most recent call last):
File "run_qa.py", line 543, in <module>
main()
File "run_qa.py", line 509, in main
metrics = trainer.evaluate()
File "trainer_qa.py", line 44, in evaluate
ignore_keys=ignore_keys,
File "/home/x/anaconda3/envs/torch18/lib/python3.6/site-packages/transformers/trainer.py", line 2162, in evaluation_loop
logits = self._nested_gather(logits)
File "/home/x/anaconda3/envs/torch18/lib/python3.6/site-packages/transformers/trainer.py", line 2252, in _nested_gather
tensors = distributed_concat(tensors)
File "/home/x/anaconda3/envs/torch18/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 154, in distributed_concat
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
File "/home/x/anaconda3/envs/torch18/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 154, in <genexpr>
return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)
File "/home/x/anaconda3/envs/torch18/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 156, in distributed_concat
dist.all_gather(output_tensors, tensor)
File "/home/x/anaconda3/envs/torch18/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 1863, in all_gather
work = default_pg.allgather([tensor_list], [tensor])
RuntimeError: Tensors must be non-overlapping and dense
```
This failure is similar to the previous commit (https://github.com/huggingface/transformers/pull/404/commits/fda2f623953bfe2290cd65429eb008f02ebdb152), but it also happens in PyTorch 1.8 now.
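For illustration, the usual way around this `all_gather` error is to make the tensor contiguous before gathering; a sketch of the idea (not necessarily the exact change in this PR), to be run inside an initialized process group:
```python
import torch
import torch.distributed as dist


def gather_logits(logits: torch.Tensor) -> torch.Tensor:
    logits = logits.contiguous()  # all_gather needs dense, non-overlapping tensors
    gathered = [torch.empty_like(logits) for _ in range(dist.get_world_size())]
    dist.all_gather(gathered, logits)
    return torch.cat(gathered, dim=0)
```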
Meanwhile, I added **Evaluation** and **Prediction** logs to the script [`run_qa_no_trainer.py`](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_qa_no_trainer.py) following the TrainerQA.
Thanks! @sgugger | 05-25-2021 15:02:15 | 05-25-2021 15:02:15 | I encountered an unexpected error in ci, could you help me to finish this pr? @LysandreJik <|||||>Seems like a CircleCI failure, I just relaunched all tests.<|||||>the CircleCI encountered a HTTPError, what should I do? @LysandreJik |
transformers | 11,871 | closed | Want to use bert-base-uncased model without internet connection | I want to use the bert-base-uncased model in offline , for that I need the bert tokenizer and bert model have there packages saved in my local . **I am unable to understand how should I achieve it in my local without any internet connection** ?
import transformers
transformers.BertTokenizer.from_pretrained("bert-base-uncased", do_lower_case=True)
transformers.BertModel.from_pretrained("bert-base-uncased")
currently getting the error

What file should I put in place of ("bert-base-uncased") so that it can work correctly offline?
here is the link to my [notebook](https://www.kaggle.com/soumochatterjee/inference-commonlit-readability) | 05-25-2021 14:42:30 | 05-25-2021 14:42:30 | This would be better suited on the Forum, but I would suggest doing (with git-lfs installed)
```bash
git clone https://huggingface.co/bert-base-uncased
```
and then
```python
import transformers
transformers.BertTokenizer.from_pretrained("./bert-base-uncased", do_lower_case=True)
transformers.BertModel.from_pretrained("./bert-base-uncased")
```<|||||>>
>
> This would be better suited on the Forum, but I would suggest doing (with git-lfs installed)
>
> ```shell
> git clone https://huggingface.co/bert-base-uncased
> ```
>
> and then
>
> ```python
> import transformers
> transformers.BertTokenizer.from_pretrained("./bert-base-uncased", do_lower_case=True)
> transformers.BertModel.from_pretrained("./bert-base-uncased")
> ```
@julien-c
Clone is failing with below errors
```
C:\Users\Karthik\Desktop>git clone https://huggingface.co/bert-base-uncased
Cloning into 'bert-base-uncased'...
remote: Enumerating objects: 52, done.
remote: Counting objects: 100% (52/52), done.
remote: Compressing objects: 100% (50/50), done.
remote: Total 52 (delta 19), reused 0 (delta 0)
Unpacking objects: 100% (52/52), 304.24 KiB | 61.00 KiB/s, done.
Updating files: 100% (10/10), done.
warning: Clone succeeded, but checkout failed.
You can inspect what was checked out with 'git status'
and retry with 'git restore --source=HEAD :/'
warning: Clone succeeded, but checkout failed.
You can inspect what was checked out with 'git status'
and retry with 'git restore --source=HEAD :/'
Exiting because of "interrupt" signal.
```<|||||>@julien-c I also got the same error.<|||||>Can you add `GIT_CURL_VERBOSE=1 GIT_TRACE=1` to your command to get more info?<|||||>And also paste your `git --version` and `git lfs --version`
<|||||>I was able to resolve this problem with the help of code :
[LINK](https://www.kaggle.com/abhishek/bert-base-uncased)
```
import transformers
from transformers import BertModel

BERT_MODEL_PATH = 'PATH FOR THE DATASET YOU SAVED IN YOUR LOCAL THROUGH THE LINK'
TOKENIZER = transformers.BertTokenizer.from_pretrained(BERT_MODEL_PATH, do_lower_case=True, local_files_only=True)
model = BertModel.from_pretrained(BERT_MODEL_PATH)  # pass the variable, not the string "BERT_MODEL_PATH"
```<|||||>Posted my solution to my asked question here in the issue |
transformers | 11,870 | closed | Issue: BART does not learn during fine-tuning for abstractive text summarization | ## Environment info
- transformers version: 4.5.1
- Python version: Python 3.7
- Using GPU in script? Yes
### Who can help
- @patrickvonplaten
## Information
I am currently working on abstractive text summarization. In the process I am trying to fine-tune BART on German text data. This works e.g. with bert-base-multilingual-cased and bert-base-german-cased. It does not work with e.g. deepset/gbert-base, deepset/gelectra-large and mbart-large-cc25. The training is not making any progress: the loss converges to zero very quickly. Am I doing something wrong? Do I need to use other classes?
## To reproduce
Here are a few code snippets to reproduce this behavior:
```python
# Config
language = "german"
model_name = "facebook/mbart-large-cc25"
tokenizer_name = "facebook/mbart-large-cc25"
batch_size = 8
# Imports
import datasets
import transformers
import tf2tf_tud_gpu_config as config
import tf2tf_tud_gpu_helpers as helpers
# Main
tokenizer = transformers.AutoTokenizer.from_pretrained(
config.tokenizer_name, strip_accent=False
)
if "mbart" in config.model_name:
tf2tf = transformers.MBartForConditionalGeneration.from_pretrained(
config.model_name
)
else:
tf2tf = transformers.EncoderDecoderModel.from_encoder_decoder_pretrained(
config.model_name, config.model_name, tie_encoder_decoder=True
)
train_data, val_data, test_data = helpers.load_data(
language=config.language,
ratio_corpus_wiki=config.ratio_corpus_wiki,
ratio_corpus_news=config.ratio_corpus_news
)
if "mbart" in config.model_name:
training_args = transformers.TrainingArguments(
output_dir=config.path_output,
logging_dir=config.path_output,
per_device_train_batch_size=1,
per_device_eval_batch_size=1,
num_train_epochs=1,
warmup_steps=500,
weight_decay=0.01
)
trainer = transformers.Trainer(
model=tf2tf,
args=training_args,
train_dataset=train_data,
eval_dataset=val_data
)
else:
training_args = transformers.Seq2SeqTrainingArguments(
predict_with_generate=True,
evaluation_strategy="steps",
per_device_train_batch_size=config.batch_size,
per_device_eval_batch_size=config.batch_size,
output_dir=config.path_output,
warmup_steps=1000,
save_steps=10000,
logging_steps=2000,
eval_steps=10000,
save_total_limit=1,
learning_rate=5e-5,
adafactor=True,
fp16=True
)
trainer = transformers.Seq2SeqTrainer(
model=tf2tf,
args=training_args,
compute_metrics=compute_metrics,
train_dataset=train_data,
eval_dataset=val_data,
tokenizer=tokenizer
)
trainer.train()
```
## Expected behaviour
I would like to fine-tune BART profitably. | 05-25-2021 12:52:12 | 05-25-2021 12:52:12 | cc @patrickvonplaten <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@patrickvonplaten Hi, unfortunately I have not been able to make any progress in the last month and would appreciate if you have a solution for the unexpected behavior. Thank you! :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @DidiDerDenker,
Sorry, it's very difficult for us to debug customized training runs that don't produce good results. Could you instead try the forum: https://discuss.huggingface.co |
transformers | 11,869 | closed | Custom train file not supported in run_qa.py | ## Environment info
- transformers version: 4.6.1
- Platform: Linux-5.4.109+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu101 (True)
- Tensorflow version (GPU?): 2.4.1 (True)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
Documentation, maintained examples (not research project or legacy): @sgugger
## Information
The task I am working on is QA, fine-tuning on a SQuAD-like dataset.
The problem arises when using the example **run_qa.py** script with a custom --train_file (like a SQuAD JSON file).
## To reproduce
run the script with param `--train_file (squad-like-dataset).json`
This was **reported before** in [this issue](https://github.com/huggingface/transformers/issues/9370#issue-776942988).
But the traceback this time is:
```
Traceback (most recent call last):
File "run_qa.py", line 622, in <module>
main()
File "run_qa.py", line 321, in main
answer_column_name = "answers" if "answers" in column_names else column_names[2]
IndexError: list index out of range
```
## Expected behavior
As I debugged, `column_names` = ['title', 'paragraphs']
`column_names` is expected to be ['context', 'question', 'answers']
The [load_dataset()](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) call on the --train_file didn't produce the expected columns.
As described in the script's [README](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/README.md),
I expected that this script would handle --train_file like the legacy **run_squad.py** script, but
it seems that this script works with --dataset_name (datasets that are already on the Hub) and doesn't handle a raw SQuAD file the way the old **run_squad.py** did.
The documentation for the --train_file param may need to be clearer, or come with some examples that use --train_file and --validation_file.
## Work around
As I read about this [load_dataset()](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) function:
I transformed the original SQuAD JSON file into a table-row form like this:
```
{"version": "0.1.0",
"data": [{"id": 1, "context": "", "question": "", "answers": ...},
{"id": 2, "context": "", "question": "", "answers": ...}
...
]}
```
with this snippet:
```
import json

output_data = []
with open("path/to/squad_style_file.json", encoding="utf-8") as f:  # placeholder path for the original SQuAD-style file
    data = json.load(f)["data"]

for article in data:
    for p in article['paragraphs']:
        for qas in p['qas']:
            answers = {
                "text": [],
                "answer_start": []
            }
            for ans in qas['answers']:
                answers['text'].append(ans['text'])
                answers['answer_start'].append(ans['answer_start'])
            output_data.append({
                "id": qas['id'],
                "context": p['context'],
                "question": qas['question'],
                "answers": answers
            })
```
| 05-25-2021 12:04:38 | 05-25-2021 12:04:38 | Hi there. As mentioned in the main [README](https://github.com/huggingface/transformers#why-shouldnt-i-use-transformers) examples are just that: examples. The script is intended to work on SQUAD or any file that is structured exactly the same way. To make it work on your own dataset, you will need to make some slight adjustments, in particular renaming the columns used.<|||||>Okay,
Thank you for your work. |
transformers | 11,868 | closed | Wrong BlenderbotConfig description (max_position_embeddings) | Hi there, the documentation page for BlenderbotConfiguration has a wrong parameter description
[https://huggingface.co/transformers/model_doc/blenderbot.html](url)
max_position_embeddings (int, optional, defaults to 1024) – The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
But it's actually 128, as shown in the source code:
[https://huggingface.co/transformers/_modules/transformers/models/blenderbot/configuration_blenderbot.html#BlenderbotConfig](url)
def __init__( self, vocab_size=8008, max_position_embeddings=128, encoder_layers=2, encoder_ffn_dim=10240, ...
And by the way, do anyone know how to increase the maximum sequence length of this model? If I modify the config.max_position_embeddings, it will result in an error: (BlenderbotForConditionalGeneration)
`size mismatch for model.encoder.embed_positions.weight: copying a param with shape torch.Size([128, 2560]) from checkpoint, the shape in current model is torch.Size([1024, 2560]).`
`size mismatch for model.decoder.embed_positions.weight: copying a param with shape torch.Size([128, 2560]) from checkpoint, the shape in current model is torch.Size([1024, 2560]).`
With the length of 128 tokens, it will "forget" the conversation's topic quite fast, since the input has to be trimmed.
Thanks in advance.
@patrickvonplaten @patil-suraj
| 05-25-2021 11:45:44 | 05-25-2021 11:45:44 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,867 | closed | Fix incorrect TPU pricing in Flax GLUE README.md | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-25-2021 10:44:11 | 05-25-2021 10:44:11 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,866 | closed | # 🖥 Benchmarking `transformers` | # 🖥 Benchmarking `transformers`
## Benchmark
Which part of `transformers` did you benchmark?
## Set-up
What did you run your benchmarks on? Please include details, such as: CPU, GPU? If using multiple GPUs, which parallelization did you use?
## Results
Put your results here!
__Originally posted by @turnertye74 in https://github.com/huggingface/transformers/issues/11865__ | 05-25-2021 10:33:27 | 05-25-2021 10:33:27 | |
transformers | 11,865 | closed | [Benchmark] | # 🖥 Benchmarking `transformers`
## Benchmark
Which part of `transformers` did you benchmark?
## Set-up
What did you run your benchmarks on? Please include details, such as: CPU, GPU? If using multiple GPUs, which parallelization did you use?
## Results
Put your results here! | 05-25-2021 10:32:56 | 05-25-2021 10:32:56 | |
transformers | 11,864 | closed | Bart tokenizer and bart model for conditional generation have different vocab size | ## Environment info
- `transformers` version: 4.6.1
- Platform: Google colab
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1
- Using GPU in script?: No
### Who can help
@patrickvonplaten, @patil-suraj
Models:
- bart
Library:
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
## Information
Model I am using (BART):
The problem arises when using the pretrained model of BART: 'bart-large-xsum'.
I tried to load the tokenizer and the model from `bart-large-xsum`.
I then tried to add masks to the inputs (as mentioned in the original paper, section 5.1 (https://arxiv.org/pdf/1910.13461.pdf)).
But the tokenizer and the BART model don't have the same vocab size.
Does that mean the `bart-large-xsum` model doesn't take masks as inputs? Do I need to add the mask token to the vocabulary myself?
## To reproduce
Steps to reproduce the behavior:
```python
tokenizer = BartTokenizer.from_pretrained('facebook/bart-large-xsum')
bart = BartForConditionalGeneration.from_pretrained('facebook/bart-large-xsum')
print(tokenizer.vocab_size)
print(bart.config.to_dict()['vocab_size'])
```

| 05-25-2021 09:48:00 | 05-25-2021 09:48:00 | Hi there,
This is because the [original](https://github.com/pytorch/fairseq/tree/master/examples/bart#pre-trained-models) `bart-large-xsum` model uses 50264 for token_embedding, so you would probably need to extend the token embedding layer. |
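A sketch of one way to extend the embedding layer with `resize_token_embeddings` so every tokenizer id has a row (whether masked infilling is meaningful for this fine-tuned summarization checkpoint is a separate question):
```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-xsum")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-xsum")
# grow the embedding to the tokenizer's size; new rows are randomly initialized
model.resize_token_embeddings(len(tokenizer))
```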
transformers | 11,863 | closed | [Proposal] Adding infinite generation as an option to generate | # What does this PR do?
`generate` is limited in a lot of models, where we can't generate
more tokens than the limit of a given model (`seq_length`,
`max_position_embeddings`).
It corresponds to a reality of how models were trained, but advanced
usage like inference might rely on using the models to generate longer
output than this. It is also quite inconvenient to hit that barrier when
using pipelines (where inputs are text and not tokens).
So the main goal is to make this behavior non-default but opted
into by users, as they should understand what is happening and the
limits linked to this behavior.
The following proposal is to enable (model per model) infinite
generation. It *might* be doable generally, however `past` seems
to be model dependant so it could be harder to do in general.
The implementation here simply clips any left values if somehow
the `input_ids` (or `past`) is larger than what the model can cope with.
We also propose to enable that by default for `text-generation`
models (`text2text-generation` might also make use of it).
Happy to hear your thoughts on this:
1- Should we enable that kind of feature?
2- Is making it optional, but enabled by default for pipelines, the right call?
3- Can we enable it for every model instead of on a model-per-model basis?
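For illustration, the left-clipping mentioned above amounts to something like this (a sketch, not the actual implementation in this PR):
```python
import torch

def clip_left(input_ids: torch.Tensor, attention_mask: torch.Tensor, max_length: int):
    # Keep only the most recent `max_length` tokens so the model never sees more
    # positions than it was trained for; older context is simply dropped.
    if input_ids.shape[-1] > max_length:
        input_ids = input_ids[:, -max_length:]
        attention_mask = attention_mask[:, -max_length:]
    return input_ids, attention_mask
```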
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten @LysandreJik @patil-suraj
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 05-25-2021 09:31:42 | 05-25-2021 09:31:42 | I get what you are saying, it's a valid concern.
`position_ids` tend to be very well correlated with their neighbours, so it shouldn't matter (too much) for the first few iterations out of bounds.
It will matter more at a full `seq_length` of drift, but I figure it's the caller's responsibility to understand those caveats (hence the need for an actual explicit decision within `generate`).
I ran some tests and the drift is much more substantial than what I expected at the full `seq_length`: the top-20 tokens share between 0 and 10 entries between the non-drifted and drifted versions. [gist](https://gist.github.com/Narsil/468af7fe59eaf1e20eb03ec0a4c9d249)
We also need to take into account that some models have no position embeddings, or use other position schemes (sinusoidal), which might change the perspective on this.
Disabling the cache might be an alternative; I guess the caller should know what trade-offs they want to make.
Again, the point is that for `pipelines` it is very hard to reason in numbers of tokens, and you can hit token-count walls without any recourse.
An example would be summarization: if I try to summarize some text too large to fit my model, I receive an error like "3051 tokens > 1024 tokens". That's fine, but now, how much of the string should I cut to actually get a summary? It's impossible to know. Cascading summaries is an option; it has some drawbacks, but maybe that's still what I am looking for.
What I'm suggesting is coping mechanisms within `pipelines` that can be opted into to circumvent the issues mentioned above.
`aggregation_strategy` is a good example of such a thing that was added to `token-classification`. It's a way to cope with incorrect/undesirable outputs from models to produce better end results for users, even if it's not perfectly representative of the model.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,862 | closed | 'SequenceClassifierOutput' object has no attribute 'log_softmax' | Hi there,
I'm trying to fine-tune the pre-trained ViT model (base, patch 16, image size 224) on the Cassava Leaf Disease dataset. However, when I started to train the model, I encountered the error 'SequenceClassifierOutput' object has no attribute 'log_softmax', which is shown in detail in the attached image.
Can someone help me with this error?
Many thanks.

| 05-25-2021 09:09:04 | 05-25-2021 09:09:04 | Hello! Could you share the code you have that led to this error? Thanks<|||||>This is my source code. I use google colab. Thank you for your fast reply.
[https://drive.google.com/file/d/1zRKRolc-IuKAt_J96gTCoO801r6eXu2q/view?usp=sharing](url)
<|||||>I think the issue is that you have:
```py
class CassvaImgClassifier(nn.Module):
    def __init__(self, model_arch, n_class, pretrained=False):
        super().__init__()
        self.model = ViTForImageClassification.from_pretrained('google/vit-base-patch16-224')
        ...

    def forward(self, x):
        x = self.model(x)
        return x
```
as a model; if I follow closely enough you're getting the outputs with:
```py
image_preds = model(imgs)
```
These will really be the outputs of the ViT model, which is a [`SequenceClassifierOutput`](https://huggingface.co/transformers/main_classes/output.html#transformers.modeling_outputs.SequenceClassifierOutput) as you can see from the [ViT docs](https://huggingface.co/transformers/model_doc/vit.html#transformers.ViTForImageClassification.forward)
I suppose you're interested in the `logits`, so you would have to do:
```py
image_preds = model(imgs).logits
```
instead.
Hope that helps.<|||||>Thank you very much. It works for me.
|
transformers | 11,861 | closed | ONNX model conversion | Hi,
I have been comparing inference speeds between PyTorch models and their ONNX versions. To convert a model from PyTorch to ONNX I used the code you provided in convert_graph_to_onnx.py.
Since I am applying it to QA, I built my ONNX model as follows: `python transformers/src/transformers/convert_graph_to_onnx.py --framework pt --model Camembert-base-ccnet-fquad11 --quantize cam_onnx/camembert-base.onnx --pipeline 'question-answering'`
This command outputs 3 models: camembert-base.onnx, camembert-base-optimized.onnx, and camembert-base-optimized-quantize.onnx.
I ran inference with the three models and expected the quantized version to be much faster than camembert-base.onnx, but it was the complete opposite. I don't understand why quantization doesn't speed things up in this case.
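For reference, the timing loop is roughly the following sketch (file names as above; the exact input names are an assumption and should be checked with `session.get_inputs()` for the actual export):
```python
import time
import onnxruntime as ort
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Camembert-base-ccnet-fquad11")
# Dummy QA example; any question/context pair works for timing purposes.
encodings = tokenizer("Qui est le président ?", "Le président est Emmanuel Macron.", return_tensors="np")

for path in [
    "cam_onnx/camembert-base.onnx",
    "cam_onnx/camembert-base-optimized.onnx",
    "cam_onnx/camembert-base-optimized-quantize.onnx",
]:
    session = ort.InferenceSession(path)
    feed = {inp.name: encodings[inp.name] for inp in session.get_inputs()}
    start = time.time()
    for _ in range(100):
        session.run(None, feed)
    print(path, (time.time() - start) / 100, "s per run")
```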
Thank you for your answer! | 05-25-2021 08:58:54 | 05-25-2021 08:58:54 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,860 | closed | Add some tests to the slow suite | This PR adds the torchscript tests to the slow suite. The current CI isn't passing because it crashes and exceeds the 10-minute timeout; this PR is a first step toward fixing that.
Will look into re-enabling the torchscript tests on each commit (they'll be tested every day for now) once we refactor the test suite to be less hungry. | 05-25-2021 08:05:52 | 05-25-2021 08:05:52 | |
transformers | 11,859 | closed | Enable memory metrics in tests that need it | PR https://github.com/huggingface/transformers/pull/11851 was merged without updating the tests to reflect the change in the argument default.
Explicitly specified the need for memory metrics for these tests.
Merging now to have CI pass, feel free to comment if that's not the right approach @stas00 @sgugger | 05-25-2021 08:03:41 | 05-25-2021 08:03:41 | Thanks a lot for catching and fixing @LysandreJik ! |
transformers | 11,858 | closed | typo | # What does this PR do?
fix typo
## Before submitting
- [Y] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 05-25-2021 07:07:04 | 05-25-2021 07:07:04 | |
transformers | 11,857 | closed | [WIP] Fix cross attentions for TF T5 | This PR fixes cross attentions for TF T5 model. This includes adding a new input argument `cross_attn_head_mask` and also adding `cross_attentions` to the model's output.
<hr>
**Reviewers:** @patrickvonplaten @Rocketknight1 | 05-25-2021 05:56:24 | 05-25-2021 05:56:24 | |
transformers | 11,856 | closed | fixed a small typo in the CONTRIBUTING doc | # What does this PR do?
I found a small typo in CONTRIBUTING.md. The fix is at the start of the second sentence of roughly the fifth paragraph (after the four bullet points), as in "In particular there is a special [Good First
Issue](https://github.com/huggingface/transformers/contribute) listing. *It* will give you ..."
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 05-25-2021 05:33:36 | 05-25-2021 05:33:36 | |
transformers | 11,855 | closed | [lm examples] fix overflow in perplexity calc | This PR fixes an overflow exception in the perplexity calculation, triggered when running eval on an untrained model whose loss is huge.
@sgugger | 05-25-2021 01:04:28 | 05-25-2021 01:04:28 | |
transformers | 11,854 | closed | Permission denied for cardiffnlp/twitter-roberta-base-emotion | @patrickvonplaten
When trying to access `cardiffnlp/twitter-roberta-base-emotion` using the example code, it can't seem to find the model. I also tried calling the model from an NLP framework (AdaptNLP) and it gave a Permission denied error. However, I don't get this error when using `cardiffnlp/twitter-roberta-base-sentiment`. Any suggestions? | 05-24-2021 21:57:03 | 05-24-2021 21:57:03 | The model seems accessible: https://huggingface.co/cardiffnlp/twitter-roberta-base-emotion
And running the example code locally correctly loads the model, and outputs the following:
```py
1) joy 0.9382
2) optimism 0.0362
3) anger 0.0145
4) sadness 0.0112
```
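If it still fails on your end, a quick self-contained check is the generic pipeline API rather than the model card script (a sketch; the exact label names depend on the model config):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="cardiffnlp/twitter-roberta-base-emotion")
print(classifier("Celebrating my promotion today!"))
```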
Could you try again to make sure it wasn't a connection issue?<|||||>@LysandreJik, thanks - it's working now. |
transformers | 11,853 | closed | Multi-node training for casual language modeling example does not work | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.7.0.dev0
- Platform: Linux-4.19.0-14-amd64-x86_64-with-debian-10.8
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help
@sgugger
@patrickvonplaten, @LysandreJik
## Information
Model I am using (Bert, XLNet ...): GPT-2
The problem arises when using:
* [x] my own modified scripts:
```
nproc_per_node=4
python -m torch.distributed.launch \
--nproc_per_node=$nproc_per_node \
--nnodes=2 \
--node_rank=0 \
--master_addr="192.168.1.1" \
--master_port=1234 run_clm.py \
--model_name_or_path gpt2 \
--block_size 256 \
--dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 \
--do_train \
--do_eval \
--overwrite_output_dir \
--num_train_epochs 1 \
--output_dir /tmp/test-clm
```
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: language modeling
* [x] my own task or dataset: wikitext
## To reproduce
Steps to reproduce the behavior:
1. Have two nodes with at least 4 GPUs each.
2. In the first machine, run the above script.
3. On the second machine, run the same script as above, except with `--node_rank=1` instead of `--node_rank=0`.
I waited for almost 15 minutes and nothing happened; the training never started.
## Expected behavior
The training gets started.
| 05-24-2021 18:40:25 | 05-24-2021 18:40:25 | Are you sure the port is open between the two machines? Not having any output is usually a symptom of that. I've tried on my side and I can run the script on multi-nodes.<|||||>@sgugger Thanks for the reply :)
> Are you sure the port is open between the two machines?
Yes, I made sure by trying different port numbers; none of them worked, and I got this message after 15 minutes:
`RuntimeError: connect() timed out.`
<|||||>No they need to have the same port number, otherwise they can't connect to each other.<|||||>Thank you very much.
> No they need to have the same port number
Here I meant I gave the same port number to both sides, but I tried multiple times with some numbers to make sure the port is open :)
But no worries, the issue is solved. I tried with the actual IP address of one of the machines and that solved the issue. <|||||>Glad you solved your issue! |
transformers | 11,852 | closed | Fix two typos in docs | # What does this PR do?
Fixed two minor typos.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
| 05-24-2021 18:12:20 | 05-24-2021 18:12:20 | |
transformers | 11,851 | closed | Switch mem metrics flag | # What does this PR do?
As flagged in #11485, the memory metrics cost a bit of performance, so this PR switches the flag that enables them so that the best performance is the default (and the user can still manually activate them when they want them!)
Fixes #11845 | 05-24-2021 16:22:23 | 05-24-2021 16:22:23 | |
transformers | 11,850 | closed | Gradient is None after deepspeed backward | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.5.1
- Platform: Linux-4.15.0-112-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1+cu110 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: GeForce RTX 3090
- Using distributed or parallel set-up in script?: Deepspeed
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people. !-->
- @stas00
- @sgugger
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] my own task or dataset: (give details below)
I want to check whether the gradients in different GPU processes are the same after backward, so I print the gradients in trainer.py:

The output shows that all gradients are None. I reproduced this in another script that I believe has been working as expected for a long time. So my questions are:
1. Is this by design? Why is it None?
2. How can I output the real gradients of the model's parameters after they are calculated?
deepspeed config:
```
"gradient_accumulation_steps": 1,
"train_batch_size": 16,
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"hysteresis": 2,
"min_loss_scale": 1,
"initial_scale_power": 16
},
"zero_optimization": {
"stage": 2,
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"contiguous_gradients": true,
"cpu_offload": false,
"find_unused_parameters": true
},
"zero_allow_untested_optimizer": true,
"optimizer": {
"type": "AdamW",
"params": {
"lr": 2e-5,
"betas": [0.9, 0.999],
"eps": 1e-6,
"weight_decay": 0.01
}
},
"scheduler": {
"type": "OneCycle",
"params": {
"cycle_first_step_size": 5000,
"cycle_first_stair_count": 500,
"cycle_second_step_size": 5000,
"cycle_second_stair_count": 500,
"decay_step_size": 1000,
"cycle_min_lr": 4e-5,
"cycle_max_lr": 1e-4,
"decay_lr_rate": 0.001,
"cycle_momentum": true,
"cycle_min_mom": 0.85,
"cycle_max_mom": 0.99,
"decay_mom_rate": 0.0
}
},
"steps_per_print": 500,
"wall_clock_breakdown": false
}
```
| 05-24-2021 16:15:57 | 05-24-2021 16:15:57 | **edit** looks like Deepspeed needs to add an API to do that: https://github.com/microsoft/DeepSpeed/issues/1098
My original suggestion is very likely not to work. I have just never tried it in this context:
---------
Since params are sharded, you need to gather them before you can read their values. https://deepspeed.readthedocs.io/en/latest/zero3.html#gathering-parameters
Here is an untested code:
```
import deepspeed

for name, param in model.named_parameters():
    with deepspeed.zero.GatheredParameters(param, modifier_rank=None):
        if param.requires_grad: ...
```
<|||||>meanwhile please edit the OP to include the ds config file you used, so that we know what setup you're running it under.<|||||>> **edit** looks like Deepspeed needs to add an API to do that: [microsoft/DeepSpeed#1098](https://github.com/microsoft/DeepSpeed/issues/1098)
>
> My original suggestion is very likely not to work. I have just never tried it in this context:
>
> Since params are sharded, you need to gather them before you can read their values. https://deepspeed.readthedocs.io/en/latest/zero3.html#gathering-parameters
>
> Here is an untested code:
>
> ```
> import deepspeed
> for name, param in model.named_parameters():
> with deepspeed.zero.GatheredParameters(param, modifier_rank=None):
> if param.requires_grad: ....
> ```
Thanks for requesting this feature in DeepSpeed and confirming that the `None` gradients are expected for now.<|||||>> meanwhile please edit the OP to include the ds config file you used, so that we know what setup you're running it under.
|
transformers | 11,849 | closed | Add simple ByteTokenizer for Reformer | # What does this PR do?
Fixes #11649
Adds a ReformerByteTokenizer for (https://huggingface.co/google/reformer-enwik8).
- Everything is a bit hardcoded in that tokenizer, as very little configuration is expected.
- No fast tokenizer (it would be of little use, since this manipulates raw bytes and has very little Python logic).
Added it to the docs.
Added very simple tests. For this tokenizer, the usual mixin is debatable, as
"tokens" are raw bytes and cannot all be strings (255, for instance, is not a valid string).
Using b"\xff" instead of 255 is possible, yet might not be exactly clearer.
This requires some modifications within the "google/reformer-enwik8" config.
Namely:
- Adding a `tokenizer_class` to the config
- Adding a dummy file so that AutoTokenizer won't fail, because no files are needed.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@LysandreJik @patrickvonplaten
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 05-24-2021 15:59:18 | 05-24-2021 15:59:18 | > Cool, thank you for opening this PR!
>
> Before merging, I think two issues should be resolved:
>
> * I'm not sure what that `ReformerByteTokenizer` can be used for - so if I'm unaware, I suppose users will be too. Adding a bit of documentation regarding what is that tokenizer and why it's here would be nice
Tried to add some documentation; is it better? Ultimately I don't really have a background for this model, so I added more or less what the model card says (and left a link to it just in case; I'm also unaware of any other byte-level models).
>
> * According to what it can be used for, the appropriate tests should be put in place. For example there's no testing for saving/loading while there's a bit of a workaround to enable that - it would be nice to test that and all the other expected behaviors.
I added it. Because the tokenizer has only one special token (pad), which I arbitrarily set to 0 (this does not seem to be explained in the model card either, but why else would there be a shift of 2 for input_ids?), it does not really make sense for this tokenizer to use "tokens" in the current Tokenizer sense (substrings of the main string). That is impossible anyway because of how UTF-8 works; if we tried, we would sooner or later run into other issues:
`len(chr(255).encode("utf-8")) == 2`, for instance: `chr(255)` encodes to `[195, 191]`, not `[255]`.
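To make the byte-level mapping concrete, here is a sketch of the intended encode/decode (the +2 offset mirrors the input_ids shift discussed above; this is illustrative, not the PR's code):
```python
def byte_encode(text, offset=2):
    # every UTF-8 byte maps to byte_value + offset; 0 stays reserved for padding
    return [b + offset for b in text.encode("utf-8")]

def byte_decode(ids, offset=2):
    return bytes(i - offset for i in ids if i >= offset).decode("utf-8", errors="replace")

assert byte_decode(byte_encode("héllo")) == "héllo"
```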
@LysandreJik would love a small re-review, but we should keep this PR low profile, it's not that important anyway.
<|||||>@patrickvonplaten can you give this one a look and merge if it looks good to you?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,848 | closed | Token Classification OOM | I am using the Token Classification Example on my dataset. It has around 20k lines for train and around 2k lines for validation and 2k for the test dataset. I have used the following example for my dataset.
https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/token_classification.ipynb
After training the model, calling eval eats up all the GPU memory, even with batch size 1.
Here is the output.
```shell
Some weights of the model checkpoint at /home/irfan/Downloads/bert-base-uncased were not used when initializing BertForTokenClassification: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias']
- This IS expected if you are initializing BertForTokenClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertForTokenClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BertForTokenClassification were not initialized from the model checkpoint at /home/irfan/Downloads/bert-base-uncased and are newly initialized: ['classifier.weight', 'classifier.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Reusing dataset gector (/home/irfan/.cache/huggingface/datasets/gector/gector config/1.0.0/410838f652242763258c1912a4316a2d985c51d27bbb94a3301ad15cde38de06)
100%|██████████| 29/29 [00:01<00:00, 24.00ba/s]
100%|██████████| 7/7 [00:00<00:00, 25.27ba/s]
100%|██████████| 6/6 [00:00<00:00, 25.62ba/s]
2%|▏ | 500/28347 [00:44<42:04, 11.03it/s]{'loss': 1.3763, 'learning_rate': 1.9647228983666704e-05, 'epoch': 0.02}
4%|▎ | 1000/28347 [01:27<40:33, 11.24it/s]{'loss': 0.8006, 'learning_rate': 1.9294457967333407e-05, 'epoch': 0.04}
5%|▌ | 1500/28347 [02:10<37:32, 11.92it/s]{'loss': 0.8302, 'learning_rate': 1.8941686951000106e-05, 'epoch': 0.05}
7%|▋ | 2000/28347 [02:52<35:53, 12.23it/s]{'loss': 0.7262, 'learning_rate': 1.858891593466681e-05, 'epoch': 0.07}
9%|▉ | 2500/28347 [03:35<36:11, 11.90it/s]{'loss': 0.7766, 'learning_rate': 1.823614491833351e-05, 'epoch': 0.09}
11%|█ | 3000/28347 [04:19<37:29, 11.27it/s]{'loss': 0.7891, 'learning_rate': 1.7883373902000214e-05, 'epoch': 0.11}
12%|█▏ | 3500/28347 [05:03<36:17, 11.41it/s]{'loss': 0.7966, 'learning_rate': 1.7530602885666913e-05, 'epoch': 0.12}
14%|█▍ | 4000/28347 [05:47<35:28, 11.44it/s]{'loss': 0.7518, 'learning_rate': 1.7177831869333615e-05, 'epoch': 0.14}
16%|█▌ | 4500/28347 [06:31<34:30, 11.52it/s]{'loss': 0.6913, 'learning_rate': 1.6825060853000318e-05, 'epoch': 0.16}
18%|█▊ | 5000/28347 [07:15<33:57, 11.46it/s]{'loss': 0.7688, 'learning_rate': 1.647228983666702e-05, 'epoch': 0.18}
19%|█▉ | 5500/28347 [08:01<37:43, 10.09it/s]{'loss': 0.8129, 'learning_rate': 1.6119518820333723e-05, 'epoch': 0.19}
21%|██ | 6000/28347 [08:48<33:05, 11.25it/s]{'loss': 0.7222, 'learning_rate': 1.5766747804000426e-05, 'epoch': 0.21}
23%|██▎ | 6500/28347 [09:37<31:53, 11.42it/s]{'loss': 0.7068, 'learning_rate': 1.5413976787667128e-05, 'epoch': 0.23}
25%|██▍ | 7000/28347 [10:22<31:18, 11.36it/s]{'loss': 0.6917, 'learning_rate': 1.5061205771333829e-05, 'epoch': 0.25}
26%|██▋ | 7500/28347 [11:07<30:42, 11.31it/s]{'loss': 0.7271, 'learning_rate': 1.4708434755000532e-05, 'epoch': 0.26}
28%|██▊ | 8000/28347 [11:52<32:11, 10.53it/s]{'loss': 0.6844, 'learning_rate': 1.435566373866723e-05, 'epoch': 0.28}
30%|██▉ | 8500/28347 [12:40<30:40, 10.78it/s]{'loss': 0.7325, 'learning_rate': 1.4002892722333933e-05, 'epoch': 0.3}
32%|███▏ | 9000/28347 [13:27<31:27, 10.25it/s]{'loss': 0.7981, 'learning_rate': 1.3650121706000636e-05, 'epoch': 0.32}
34%|███▎ | 9500/28347 [14:15<28:28, 11.03it/s]{'loss': 0.7432, 'learning_rate': 1.3297350689667338e-05, 'epoch': 0.34}
35%|███▌ | 10000/28347 [15:00<26:41, 11.45it/s]{'loss': 0.6548, 'learning_rate': 1.294457967333404e-05, 'epoch': 0.35}
37%|███▋ | 10500/28347 [15:44<26:11, 11.35it/s]{'loss': 0.7042, 'learning_rate': 1.2591808657000742e-05, 'epoch': 0.37}
39%|███▉ | 11000/28347 [16:27<25:18, 11.42it/s]{'loss': 0.6836, 'learning_rate': 1.2239037640667444e-05, 'epoch': 0.39}
41%|████ | 11500/28347 [17:11<24:17, 11.56it/s]{'loss': 0.7011, 'learning_rate': 1.1886266624334147e-05, 'epoch': 0.41}
42%|████▏ | 12000/28347 [17:55<24:45, 11.01it/s]{'loss': 0.6519, 'learning_rate': 1.153349560800085e-05, 'epoch': 0.42}
44%|████▍ | 12500/28347 [18:38<22:20, 11.83it/s]{'loss': 0.7423, 'learning_rate': 1.1180724591667549e-05, 'epoch': 0.44}
46%|████▌ | 13000/28347 [19:22<22:19, 11.46it/s]{'loss': 0.7353, 'learning_rate': 1.0827953575334251e-05, 'epoch': 0.46}
48%|████▊ | 13500/28347 [20:06<21:15, 11.64it/s]{'loss': 0.6847, 'learning_rate': 1.0475182559000954e-05, 'epoch': 0.48}
49%|████▉ | 14000/28347 [20:48<20:00, 11.95it/s]{'loss': 0.6356, 'learning_rate': 1.0122411542667656e-05, 'epoch': 0.49}
51%|█████ | 14500/28347 [21:30<19:01, 12.14it/s]{'loss': 0.6993, 'learning_rate': 9.769640526334357e-06, 'epoch': 0.51}
53%|█████▎ | 15000/28347 [22:11<18:06, 12.29it/s]{'loss': 0.7461, 'learning_rate': 9.416869510001058e-06, 'epoch': 0.53}
55%|█████▍ | 15500/28347 [22:53<18:01, 11.88it/s]{'loss': 0.723, 'learning_rate': 9.06409849366776e-06, 'epoch': 0.55}
56%|█████▋ | 16000/28347 [23:35<17:18, 11.89it/s]{'loss': 0.7091, 'learning_rate': 8.711327477334463e-06, 'epoch': 0.56}
58%|█████▊ | 16500/28347 [24:16<16:33, 11.93it/s]{'loss': 0.7283, 'learning_rate': 8.358556461001164e-06, 'epoch': 0.58}
60%|█████▉ | 17000/28347 [24:58<15:44, 12.01it/s]{'loss': 0.6658, 'learning_rate': 8.005785444667867e-06, 'epoch': 0.6}
62%|██████▏ | 17500/28347 [25:39<15:14, 11.87it/s]{'loss': 0.7333, 'learning_rate': 7.65301442833457e-06, 'epoch': 0.62}
63%|██████▎ | 18000/28347 [26:21<15:00, 11.50it/s]{'loss': 0.6953, 'learning_rate': 7.300243412001271e-06, 'epoch': 0.63}
65%|██████▌ | 18500/28347 [27:07<14:39, 11.19it/s]{'loss': 0.6792, 'learning_rate': 6.947472395667973e-06, 'epoch': 0.65}
67%|██████▋ | 19000/28347 [27:53<15:01, 10.37it/s]{'loss': 0.7156, 'learning_rate': 6.594701379334675e-06, 'epoch': 0.67}
69%|██████▉ | 19500/28347 [28:41<13:53, 10.62it/s]{'loss': 0.6574, 'learning_rate': 6.241930363001376e-06, 'epoch': 0.69}
71%|███████ | 20000/28347 [29:28<13:19, 10.44it/s]{'loss': 0.724, 'learning_rate': 5.889159346668079e-06, 'epoch': 0.71}
72%|███████▏ | 20500/28347 [30:13<11:11, 11.69it/s]{'loss': 0.6687, 'learning_rate': 5.5363883303347795e-06, 'epoch': 0.72}
74%|███████▍ | 21000/28347 [30:56<10:29, 11.67it/s]{'loss': 0.6612, 'learning_rate': 5.183617314001482e-06, 'epoch': 0.74}
76%|███████▌ | 21500/28347 [31:39<09:29, 12.03it/s]{'loss': 0.6861, 'learning_rate': 4.830846297668184e-06, 'epoch': 0.76}
78%|███████▊ | 22000/28347 [32:23<08:31, 12.40it/s]{'loss': 0.6709, 'learning_rate': 4.478075281334886e-06, 'epoch': 0.78}
79%|███████▉ | 22500/28347 [33:04<08:03, 12.08it/s]{'loss': 0.6689, 'learning_rate': 4.125304265001588e-06, 'epoch': 0.79}
81%|████████ | 23000/28347 [33:46<07:14, 12.31it/s]{'loss': 0.5955, 'learning_rate': 3.7725332486682897e-06, 'epoch': 0.81}
83%|████████▎ | 23500/28347 [34:27<06:42, 12.05it/s]{'loss': 0.7265, 'learning_rate': 3.4197622323349914e-06, 'epoch': 0.83}
85%|████████▍ | 24000/28347 [35:09<06:01, 12.03it/s]{'loss': 0.6603, 'learning_rate': 3.0669912160016936e-06, 'epoch': 0.85}
86%|████████▋ | 24500/28347 [35:50<05:11, 12.36it/s]{'loss': 0.5817, 'learning_rate': 2.7142201996683953e-06, 'epoch': 0.86}
88%|████████▊ | 25000/28347 [36:32<04:34, 12.20it/s]{'loss': 0.6164, 'learning_rate': 2.3614491833350974e-06, 'epoch': 0.88}
90%|████████▉ | 25500/28347 [37:13<03:57, 11.98it/s]{'loss': 0.6458, 'learning_rate': 2.0086781670017996e-06, 'epoch': 0.9}
92%|█████████▏| 26000/28347 [37:55<03:19, 11.77it/s]{'loss': 0.6093, 'learning_rate': 1.6559071506685013e-06, 'epoch': 0.92}
93%|█████████▎| 26500/28347 [38:36<02:29, 12.33it/s]{'loss': 0.6713, 'learning_rate': 1.3031361343352032e-06, 'epoch': 0.93}
95%|█████████▌| 27000/28347 [39:18<01:52, 11.97it/s]{'loss': 0.6891, 'learning_rate': 9.503651180019051e-07, 'epoch': 0.95}
97%|█████████▋| 27500/28347 [39:59<01:09, 12.19it/s]{'loss': 0.7025, 'learning_rate': 5.975941016686069e-07, 'epoch': 0.97}
99%|█████████▉| 28000/28347 [40:43<00:29, 11.58it/s]{'loss': 0.6524, 'learning_rate': 2.4482308533530886e-07, 'epoch': 0.99}
100%|██████████| 28347/28347 [41:11<00:00, 11.47it/s]
{'train_runtime': 2471.9347, 'train_samples_per_second': 11.468, 'epoch': 1.0}
35%|███▍ | 2276/6574 [01:49<06:12, 11.55it/s]Traceback (most recent call last):
File "/home/irfan/PycharmProjects/GecPytorch/token_classification.py", line 62, in <module>
trainer.evaluate()
File "/home/irfan/environments/Allen/lib/python3.6/site-packages/transformers/trainer.py", line 1764, in evaluate
metric_key_prefix=metric_key_prefix,
File "/home/irfan/environments/Allen/lib/python3.6/site-packages/transformers/trainer.py", line 1900, in prediction_loop
preds_host = logits if preds_host is None else nested_concat(preds_host, logits, padding_index=-100)
File "/home/irfan/environments/Allen/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 98, in nested_concat
return torch_pad_and_concatenate(tensors, new_tensors, padding_index=padding_index)
File "/home/irfan/environments/Allen/lib/python3.6/site-packages/transformers/trainer_pt_utils.py", line 66, in torch_pad_and_concatenate
result = tensor1.new_full(new_shape, padding_index)
RuntimeError: CUDA out of memory. Tried to allocate 2.12 GiB (GPU 0; 7.80 GiB total capacity; 3.81 GiB already allocated; 2.18 GiB free; 3.97 GiB reserved in total by PyTorch)
35%|███▍ | 2276/6574 [01:49<03:26, 20.77it/s]
```
I'm using Python 3.6, GPU RTX 2060 super, OS Ubuntu 18.04
@sgugger | 05-24-2021 15:36:06 | 05-24-2021 15:36:06 | You should use `eval_accumulation_steps=n` (for instance 20) to have the predictions be moved to the CPU every n steps during evaluation (n should be lower than the step you get an OOM error)<|||||>Thanks, it worked perfectly.<|||||>> You should use `eval_accumulation_steps=n` (for instance 20) to have the predictions be moved to the CPU every n steps during evaluation (n should be lower than the step you get an OOM error)
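Concretely, that is one extra argument on `TrainingArguments` (a sketch reusing n=20; `model`, `train_ds` and `eval_ds` stand in for the objects built in the notebook):
```python
from transformers import Trainer, TrainingArguments

args = TrainingArguments(
    "test-ner",
    per_device_eval_batch_size=1,
    eval_accumulation_steps=20,  # move accumulated predictions to CPU every 20 eval steps
)
trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds)
trainer.evaluate()
```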
Thank you so much. I have been stuck at the point for several days until I landed here. |
transformers | 11,847 | open | Request addition of 'GPT2ForwardBackward' models | # 🌟 Request addition of 'GPT2ForwardBackward' models
## Model description
Code for running forward and backward versions of GPT-2 XL. These were trained for the paper:
**Reflective Decoding: Beyond Unidirectional Generation with Off-the-Shelf Language Models**; Peter West, Ximing Lu, Ari Holtzman, Chandra Bhagavatula, Jena Hwang, and Yejin Choi; ACL (2021)
https://arxiv.org/abs/2010.08566
## Open source status
* [x] the model implementation is available: (https://github.com/peterwestuw/GPT2ForwardBackward)
* [x] the model weights are available: (same link as above)
* [x] who are the authors: (see the arXiv reference above)
| 05-24-2021 14:39:21 | 05-24-2021 14:39:21 | |
transformers | 11,846 | closed | Fix reference to XLNet | # What does this PR do?
Fixes a reference to the XLNet page in the documentation of TrainingArguments.
Fixes #11831 | 05-24-2021 13:20:44 | 05-24-2021 13:20:44 | |
transformers | 11,845 | closed | Regression in training speed since 4.4.0 | ## Environment info
- `transformers` version: 4.4.0/4.6.1
- Platform: Linux-5.8.0-53-generic-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: Yes, RTX 3090
- Using distributed or parallel set-up in script?: No
### Who can help
@sgugger @patrickvonplaten
## Information
I've noticed a training speed regression between 4.4.0 and 4.6.1:
4.4.0 BS8:
{'train_runtime': 21.363, 'train_samples_per_second': 5.851, 'epoch': 1.0}
{'train_runtime': 21.6148, 'train_samples_per_second': 5.783, 'epoch': 1.0}
{'train_runtime': 21.7867, 'train_samples_per_second': 5.737, 'epoch': 1.0}
4.6.1 BS8:
{'train_runtime': 23.7011, 'train_samples_per_second': 5.274, 'epoch': 1.0}
{'train_runtime': 24.2845, 'train_samples_per_second': 5.147, 'epoch': 1.0}
{'train_runtime': 23.5801, 'train_samples_per_second': 5.301, 'epoch': 1.0}
4.4.0 BS4:
{'train_runtime': 25.4107, 'train_samples_per_second': 9.838, 'epoch': 1.0}
4.6.1 BS4:
{'train_runtime': 31.2902, 'train_samples_per_second': 7.99, 'epoch': 1.0}
I'm running the PyTorch 1.8.1 release on my RTX 3090 / Ryzen 3700X workstation.
The performance loss seems to increase with smaller batch sizes, leading me to think it's something in the Trainer.
Although I found the regression with sequence classification, the slowdown transfers to other tasks as well.
## To reproduce
```
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification, TrainingArguments, Trainer
BATCH_SIZE = 4
raw_datasets = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
def tokenize_function(examples):
    return tokenizer(examples["text"], padding="max_length", truncation=True)
tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000))
small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000))
full_train_dataset = tokenized_datasets["train"]
full_eval_dataset = tokenized_datasets["test"]
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)
training_args = TrainingArguments("test_trainer", num_train_epochs=1, per_device_train_batch_size=BATCH_SIZE)
trainer = Trainer(
    model=model, args=training_args, train_dataset=small_train_dataset, eval_dataset=small_eval_dataset
)
trainer.train()
``` | 05-24-2021 12:50:03 | 05-24-2021 12:50:03 | Thanks for flagging! I used your reproducer (very nice by the way!) and a git bisect and it comes from #10937, which reworks the way memory metrics are computed. You just have to set `skip_memory_metrics=True` in your `TrainingArguments` to skip the memory metric computation and you will get your performance back.<|||||>Thanks, that did the trick! Turns out that setting skip_memory_metrics also improves performance ~5% on 4.4.0.
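For anyone else hitting this, the change to the reproducer above is a single extra argument (sketch):
```python
training_args = TrainingArguments(
    "test_trainer",
    num_train_epochs=1,
    per_device_train_batch_size=BATCH_SIZE,
    skip_memory_metrics=True,  # disables the per-step memory profiling that causes the slowdown
)
```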
Could I suggest that this parameter is enabled by default? It seems to me that this is debugging functionality, and shouldn't be enabled normally.<|||||>Yes, this is done by the PR mentioned above!<|||||>Quick comment on the already closed issue. Had to debug this issue independently, due to an older fork.
It seems that the issue is not just a slowdown when enabling memory metrics; there is also performance variability from run to run.
One of the signatures of the issue is that there is no performance loss or variability in the backward() call (run in the C++ autograd engine). The optimizer.step() call had the greatest impact, followed by forward propagation. Based on those observations, the issue is suspected to be due to fast I/O (GPU kernel launches during optimizer.step() and forward) being affected by multithreading under the Python GIL.
Skipping memory metrics fixes the issue sufficiently. There is still a logging thread and a progress-bar (tqdm) thread. Adding this note here as a warning that multithreading during forward or optimizer.step() might cause performance loss/variability. <|||||>Ouch, hope that it didn't cost you too much time! Thanks for the further info on the problem
transformers | 11,844 | closed | Fix flos single node | This PR fixes a typo-level bug whereby, in single-node settings, flos in the Trainer would stay constant, and also updates the flo count in the trainer state at every log occasion (instead of every model-saving occasion) so that users who wish to use a flo-logging callback can access it more frequently. I feel like the first bug should have been caught by a test (at the moment few other people use trainer flos, since it's mostly for large-scale training and researchers that need or want to report flos, and neither group uses the Trainer a lot, so it's mostly the HF bigscience efforts), and I should make one.
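As an illustration of the kind of flo-logging callback mentioned above (a sketch, not part of this PR):
```python
from transformers import TrainerCallback

class FloLoggerCallback(TrainerCallback):
    def on_log(self, args, state, control, logs=None, **kwargs):
        if state.is_world_process_zero:
            print(f"total_flos so far: {state.total_flos}")
```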
@sgugger | 05-24-2021 12:29:58 | 05-24-2021 12:29:58 | Not sure why the CI is failing, is it related to the PR? Doesn't look so to me but I may be missing something<|||||>No, the CI is failing all the time those days, don't worry about it. |
transformers | 11,843 | closed | Issues loading finetuned BERT | Hello, I’m having issues loading a finetuned BERT model for binary classification. I have this class for the BERT model:
```
class BertClassifier(nn.Module):
    def __init__(self, freeze_bert=False):
        super(BertClassifier, self).__init__()
        self.bert = BertModel.from_pretrained('bert-base-multilingual-uncased')
        self.lstm = nn.LSTM(768, 50, batch_first=True, bidirectional=True)
        self.linear = nn.Linear(50 * 2, 2)
        if freeze_bert:
            for param in self.bert.parameters():
                param.requires_grad = False

    def forward(self, input_ids, attention_mask):
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        sequence_output = outputs[0]
        sequence_output, _ = self.lstm(sequence_output)
        linear_output = self.linear(sequence_output[:, -1])
        return linear_output
```
The model is `bert_classifier = BertClassifier(freeze_bert=False)`
I save the model by the below line:
`torch.save(bert_classifier.state_dict(), 'finetuned_model.pt')`
Then, in another .py file, I want to load the model, and I have the code below:
```
model = BertModel.from_pretrained('bert-base-multilingual-uncased')
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-uncased', do_lower_case=True)
model.load_state_dict(torch.load('finetuned_model.pt'))
```
When I run the code, I receive the error below on this line: `model.load_state_dict(torch.load('finetuned_model.pt'))`
```
RuntimeError: Error(s) in loading state_dict for BertModel:
Missing key(s) in state_dict: "embeddings.position_ids", "embeddings.word_embeddings.weight", "embeddings.position_embeddings.weight", "embeddings.token_type_embeddings.weight", "embeddings.LayerNorm.weight", "embeddings.LayerNorm.bias", "encoder.layer.0.attention.self.query.weight", "encoder.layer.0.attention.self.query.bias", "encoder.layer.0.attention.self.key.weight", "encoder.layer.0.attention.self.key.bias", "encoder.layer.0.attention.self.value.weight", "encoder.layer.0.attention.self.value.bias", "encoder.layer.0.attention.output.dense.weight", "encoder.layer.0.attention.output.dense.bias", "encoder.layer.0.attention.output.LayerNorm.weight", "encoder.layer.0.attention.output.LayerNorm.bias", "encoder.layer.0.intermediate.dense.weight", "encoder.layer.0.intermediate.dense.bias", "encoder.layer.0.output.dense.weight", "encoder.layer.0.output.dense.bias", "encoder.layer.0.output.LayerNorm.weight", "encoder.layer.0.output.LayerNorm.bias", "encoder.layer.1.attention.self.query.weight", "encoder.layer.1.attention.self.query.bias", "encoder.layer.1.attention.self.key.weight", "encoder.layer.1.attention.self.key.bias", "encoder.layer.1.attention.self.value.weight", "encoder.layer.1.attention.self.value.bias", "encoder.layer.1.attention.output.dense.weight", "encoder.layer.1.attention.output.dense.bias", "encoder.layer.1.attention.output.LayerNorm.weight", "encoder.layer.1.attention.output.LayerNorm.bias", "encoder.layer.1.intermediate.dense.weight", "encoder.layer.1.intermediate.dense.bias", "encoder.layer.1.output.dense.weight", "encoder.layer.1.output.dense.bias", "encoder.layer.1.output.LayerNorm.weight", "encoder.layer.1.output.LayerNorm.bias", "encoder.layer.2.attention.self.query.weight", "encoder.layer.2.attention.self.query.bias", "encoder.layer.2.attention.self.key.weight", "encoder.layer.2.attention.self.key.bias", "encoder.layer.2.attention.self.value.weight", "encoder.layer.2.attention.self.value.bias", "encoder.layer.2.attention.output.dense.weight", "encoder.layer.2.attention.output.dense.bias", "encoder.layer.2.attention.output.LayerNorm.weight", "encoder.layer.2.attention.output.LayerNorm.bias", "encoder.layer.2.intermediate.dense.weight", "encoder.layer.2.intermediate.dense.bias", "encoder.layer.2.output.dense.weight", "encoder.layer.2.output.dense.bias", "encoder.layer.2.output.LayerNorm.weight", "encoder.layer.2.output.LayerNorm.bias", "encoder.layer.3.attention.self.query.weight", "encoder.layer.3.attention.self.query.bias", "encoder.layer.3.attention.self.key.weight", "encoder.layer.3.attention.self.key.bias", "encoder.layer.3.attention.self.value.weight", "encoder.layer.3.attention.self.value.bias", "encoder.layer.3.attention.output.dense.weight", "encoder.layer.3.attention.output.dense.bias", "encoder.layer.3.attention.output.LayerNorm.weight", "encoder.layer.3.attention.output.LayerNorm.bias", "encoder.layer.3.intermediate.dense.weight", "encoder.layer.3.intermediate.dense.bias", "encoder.layer.3.output.dense.weight", "encoder.layer.3.output.dense.bias", "encoder.layer.3.output.LayerNorm.weight", "encoder.layer.3.output.LayerNorm.bias", "encoder.layer.4.attention.self.query.weight", "encoder.layer.4.attention.self.query.bias", "encoder.layer.4.attention.self.key.weight", "encoder.layer.4.attention.self.key.bias", "encoder.layer.4.attention.self.value.weight", "encoder.layer.4.attention.self.value.bias", "encoder.layer.4.attention.output.dense.weight", "encoder.layer.4.attention.output.dense.bias", "encoder.layer.4.attention.output.LayerNorm.weight", 
"encoder.layer.4.attention.output.LayerNorm.bias", "encoder.layer.4.intermediate.dense.weight", "encoder.layer.4.intermediate.dense.bias", "encoder.layer.4.output.dense.weight", "encoder.layer.4.output.dense.bias", "encoder.layer.4.output.LayerNorm.weight", "encoder.layer.4.output.LayerNorm.bias", "encoder.layer.5.attention.self.query.weight", "encoder.layer.5.attention.self.query.bias", "encoder.layer.5.attention.self.key.weight", "encoder.layer.5.attention.self.key.bias", "encoder.layer.5.attention.self.value.weight", "encoder.layer.5.attention.self.value.bias", "encoder.layer.5.attention.output.dense.weight", "encoder.layer.5.attention.output.dense.bias", "encoder.layer.5.attention.output.LayerNorm.weight", "encoder.layer.5.attention.output.LayerNorm.bias", "encoder.layer.5.intermediate.dense.weight", "encoder.layer.5.intermediate.dense.bias", "encoder.layer.5.output.dense.weight", "encoder.layer.5.output.dense.bias", "encoder.layer.5.output.LayerNorm.weight", "encoder.layer.5.output.LayerNorm.bias", "encoder.layer.6.attention.self.query.weight", "encoder.layer.6.attention.self.query.bias", "encoder.layer.6.attention.self.key.weight", "encoder.layer.6.attention.self.key.bias", "encoder.layer.6.attention.self.value.weight", "encoder.layer.6.attention.self.value.bias", "encoder.layer.6.attention.output.dense.weight", "encoder.layer.6.attention.output.dense.bias", "encoder.layer.6.attention.output.LayerNorm.weight", "encoder.layer.6.attention.output.LayerNorm.bias", "encoder.layer.6.intermediate.dense.weight", "encoder.layer.6.intermediate.dense.bias", "encoder.layer.6.output.dense.weight", "encoder.layer.6.output.dense.bias", "encoder.layer.6.output.LayerNorm.weight", "encoder.layer.6.output.LayerNorm.bias", "encoder.layer.7.attention.self.query.weight", "encoder.layer.7.attention.self.query.bias", "encoder.layer.7.attention.self.key.weight", "encoder.layer.7.attention.self.key.bias", "encoder.layer.7.attention.self.value.weight", "encoder.layer.7.attention.self.value.bias", "encoder.layer.7.attention.output.dense.weight", "encoder.layer.7.attention.output.dense.bias", "encoder.layer.7.attention.output.LayerNorm.weight", "encoder.layer.7.attention.output.LayerNorm.bias", "encoder.layer.7.intermediate.dense.weight", "encoder.layer.7.intermediate.dense.bias", "encoder.layer.7.output.dense.weight", "encoder.layer.7.output.dense.bias", "encoder.layer.7.output.LayerNorm.weight", "encoder.layer.7.output.LayerNorm.bias", "encoder.layer.8.attention.self.query.weight", "encoder.layer.8.attention.self.query.bias", "encoder.layer.8.attention.self.key.weight", "encoder.layer.8.attention.self.key.bias", "encoder.layer.8.attention.self.value.weight", "encoder.layer.8.attention.self.value.bias", "encoder.layer.8.attention.output.dense.weight", "encoder.layer.8.attention.output.dense.bias", "encoder.layer.8.attention.output.LayerNorm.weight", "encoder.layer.8.attention.output.LayerNorm.bias", "encoder.layer.8.intermediate.dense.weight", "encoder.layer.8.intermediate.dense.bias", "encoder.layer.8.output.dense.weight", "encoder.layer.8.output.dense.bias", "encoder.layer.8.output.LayerNorm.weight", "encoder.layer.8.output.LayerNorm.bias", "encoder.layer.9.attention.self.query.weight", "encoder.layer.9.attention.self.query.bias", "encoder.layer.9.attention.self.key.weight", "encoder.layer.9.attention.self.key.bias", "encoder.layer.9.attention.self.value.weight", "encoder.layer.9.attention.self.value.bias", "encoder.layer.9.attention.output.dense.weight", "encoder.layer.9.attention.output.dense.bias", 
"encoder.layer.9.attention.output.LayerNorm.weight", "encoder.layer.9.attention.output.LayerNorm.bias", "encoder.layer.9.intermediate.dense.weight", "encoder.layer.9.intermediate.dense.bias", "encoder.layer.9.output.dense.weight", "encoder.layer.9.output.dense.bias", "encoder.layer.9.output.LayerNorm.weight", "encoder.layer.9.output.LayerNorm.bias", "encoder.layer.10.attention.self.query.weight", "encoder.layer.10.attention.self.query.bias", "encoder.layer.10.attention.self.key.weight", "encoder.layer.10.attention.self.key.bias", "encoder.layer.10.attention.self.value.weight", "encoder.layer.10.attention.self.value.bias", "encoder.layer.10.attention.output.dense.weight", "encoder.layer.10.attention.output.dense.bias", "encoder.layer.10.attention.output.LayerNorm.weight", "encoder.layer.10.attention.output.LayerNorm.bias", "encoder.layer.10.intermediate.dense.weight", "encoder.layer.10.intermediate.dense.bias", "encoder.layer.10.output.dense.weight", "encoder.layer.10.output.dense.bias", "encoder.layer.10.output.LayerNorm.weight", "encoder.layer.10.output.LayerNorm.bias", "encoder.layer.11.attention.self.query.weight", "encoder.layer.11.attention.self.query.bias", "encoder.layer.11.attention.self.key.weight", "encoder.layer.11.attention.self.key.bias", "encoder.layer.11.attention.self.value.weight", "encoder.layer.11.attention.self.value.bias", "encoder.layer.11.attention.output.dense.weight", "encoder.layer.11.attention.output.dense.bias", "encoder.layer.11.attention.output.LayerNorm.weight", "encoder.layer.11.attention.output.LayerNorm.bias", "encoder.layer.11.intermediate.dense.weight", "encoder.layer.11.intermediate.dense.bias", "encoder.layer.11.output.dense.weight", "encoder.layer.11.output.dense.bias", "encoder.layer.11.output.LayerNorm.weight", "encoder.layer.11.output.LayerNorm.bias", "pooler.dense.weight", "pooler.dense.bias".
Unexpected key(s) in state_dict: "bert.embeddings.position_ids", "bert.embeddings.word_embeddings.weight", "bert.embeddings.position_embeddings.weight", "bert.embeddings.token_type_embeddings.weight", "bert.embeddings.LayerNorm.weight", "bert.embeddings.LayerNorm.bias", "bert.encoder.layer.0.attention.self.query.weight", "bert.encoder.layer.0.attention.self.query.bias", "bert.encoder.layer.0.attention.self.key.weight", "bert.encoder.layer.0.attention.self.key.bias", "bert.encoder.layer.0.attention.self.value.weight", "bert.encoder.layer.0.attention.self.value.bias", "bert.encoder.layer.0.attention.output.dense.weight", "bert.encoder.layer.0.attention.output.dense.bias", "bert.encoder.layer.0.attention.output.LayerNorm.weight", "bert.encoder.layer.0.attention.output.LayerNorm.bias", "bert.encoder.layer.0.intermediate.dense.weight", "bert.encoder.layer.0.intermediate.dense.bias", "bert.encoder.layer.0.output.dense.weight", "bert.encoder.layer.0.output.dense.bias", "bert.encoder.layer.0.output.LayerNorm.weight", "bert.encoder.layer.0.output.LayerNorm.bias", "bert.encoder.layer.1.attention.self.query.weight", "bert.encoder.layer.1.attention.self.query.bias", "bert.encoder.layer.1.attention.self.key.weight", "bert.encoder.layer.1.attention.self.key.bias", "bert.encoder.layer.1.attention.self.value.weight", "bert.encoder.layer.1.attention.self.value.bias", "bert.encoder.layer.1.attention.output.dense.weight", "bert.encoder.layer.1.attention.output.dense.bias", "bert.encoder.layer.1.attention.output.LayerNorm.weight", "bert.encoder.layer.1.attention.output.LayerNorm.bias", "bert.encoder.layer.1.intermediate.dense.weight", "bert.encoder.layer.1.intermediate.dense.bias", "bert.encoder.layer.1.output.dense.weight", "bert.encoder.layer.1.output.dense.bias", "bert.encoder.layer.1.output.LayerNorm.weight", "bert.encoder.layer.1.output.LayerNorm.bias", "bert.encoder.layer
```
I’ve tried to modify the save part to use `model.save_pretrained('finetuned_model.pt')`, but I received an error saying that the `save_pretrained` function doesn't exist in the model that I defined.
I also tried to save it with `torch.save(bert_classifier.state_dict(), 'finetuned_model.pt')` and load it with `model = BertModel.from_pretrained('finetuned_model.pt')`, but I receive the error `UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 64: invalid start byte`.
And I also tried to save it like this `torch.save(bert_classifier.state_dict(), 'model/finetuned_model.bin')` and load it like this:
```
config = BertConfig.from_pretrained('bert-base-multilingual-uncased', num_labels=2)
model = BertModel.from_pretrained('bert-base-multilingual-uncased')
model.load_state_dict(torch.load("model/finetuned_model.bin"))
```
and received the same big error as above. Any idea how this can be fixed so I can save and load the model successfully? Any help will be much appreciated. | 05-24-2021 11:43:43 | 05-24-2021 11:43:43 | It's advised to save models using the [`.save_pretrained()` method](https://huggingface.co/transformers/main_classes/model.html#transformers.PreTrainedModel.save_pretrained). You can then read it back in using the `.from_pretrained()` method.
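For example (a minimal sketch; the directory name is just a placeholder):
```python
from transformers import BertModel

model = BertModel.from_pretrained('bert-base-multilingual-uncased')
# ... fine-tune the model ...
model.save_pretrained('finetuned_model')              # writes config.json and pytorch_model.bin into that directory
model = BertModel.from_pretrained('finetuned_model')  # reloads the fine-tuned weights
```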
Note that you should specify the name of a directory, not the name of a PyTorch checkpoint.<|||||>I already tried to use `.save_pretrained()`, but I have defined a class named `BertClassifier` where I added an LSTM layer and a linear layer on top of the pretrained `BertModel.from_pretrained('bert-base-multilingual-uncased')`, and when I tried to use `save_pretrained()` I received an error saying that `save_pretrained` is not defined in the `BertClassifier` class.
**Update:** I have updated the question with how my model is computed.<|||||>Oh ok, so your `BertClassifier` is just an `nn.Module`? I now see why your model reloading is not working. As you can see here:
```
RuntimeError: Error(s) in loading state_dict for BertModel:
Missing key(s) in state_dict: "embeddings.position_ids", "embeddings.word_embeddings.weight", "embeddings.position_embeddings.weight", "embeddings.token_type_embeddings.weight", "embeddings.LayerNorm.weight", "embeddings.LayerNorm.bias", "encoder.layer.0.attention.self.query.weight", "encoder.layer.0.attention.self.query.bias", "encoder.layer.0.attention.self.key.weight", "encoder.layer.0.attention.self.key.bias", "encoder.layer.0.attention.self.value.weight", "encoder.layer.0.attention.self.value.bias", "encoder.layer.0.attention.output.dense.weight", "encoder.layer.0.attention.output.dense.bias", "encoder.layer.0.attention.output.LayerNorm.weight", "encoder.layer.0.attention.output.LayerNorm.bias", "encoder.layer.0.intermediate.dense.weight", "encoder.layer.0.intermediate.dense.bias", "encoder.layer.0.output.dense.weight", "encoder.layer.0.output.dense.bias", "encoder.layer.0.output.LayerNorm.weight", "encoder.layer.0.output.LayerNorm.bias", "encoder.layer.1.attention.self.query.weight", "encoder.layer.1.attention.self.query.bias", "encoder.layer.1.attention.self.key.weight", "encoder.layer.1.attention.self.key.bias", "encoder.layer.1.attention.self.value.weight", "encoder.layer.1.attention.self.value.bias", "encoder.layer.1.attention.output.dense.weight", "encoder.layer.1.attention.output.dense.bias", "encoder.layer.1.attention.output.LayerNorm.weight", "encoder.layer.1.attention.output.LayerNorm.bias", "encoder.layer.1.intermediate.dense.weight", "encoder.layer.1.intermediate.dense.bias", "encoder.layer.1.output.dense.weight", "encoder.layer.1.output.dense.bias", "encoder.layer.1.output.LayerNorm.weight", "encoder.layer.1.output.LayerNorm.bias", "encoder.layer.2.attention.self.query.weight", "encoder.layer.2.attention.self.query.bias", "encoder.layer.2.attention.self.key.weight", "encoder.layer.2.attention.self.key.bias", "encoder.layer.2.attention.self.value.weight", "encoder.layer.2.attention.self.value.bias", "encoder.layer.2.attention.output.dense.weight", "encoder.layer.2.attention.output.dense.bias", "encoder.layer.2.attention.output.LayerNorm.weight", "encoder.layer.2.attention.output.LayerNorm.bias", "encoder.layer.2.intermediate.dense.weight", "encoder.layer.2.intermediate.dense.bias", "encoder.layer.2.output.dense.weight", "encoder.layer.2.output.dense.bias", "encoder.layer.2.output.LayerNorm.weight", "encoder.layer.2.output.LayerNorm.bias", "encoder.layer.3.attention.self.query.weight", "encoder.layer.3.attention.self.query.bias", "encoder.layer.3.attention.self.key.weight", "encoder.layer.3.attention.self.key.bias", "encoder.layer.3.attention.self.value.weight", "encoder.layer.3.attention.self.value.bias", "encoder.layer.3.attention.output.dense.weight", "encoder.layer.3.attention.output.dense.bias", "encoder.layer.3.attention.output.LayerNorm.weight", "encoder.layer.3.attention.output.LayerNorm.bias", "encoder.layer.3.intermediate.dense.weight", "encoder.layer.3.intermediate.dense.bias", "encoder.layer.3.output.dense.weight", "encoder.layer.3.output.dense.bias", "encoder.layer.3.output.LayerNorm.weight", "encoder.layer.3.output.LayerNorm.bias", "encoder.layer.4.attention.self.query.weight", "encoder.layer.4.attention.self.query.bias", "encoder.layer.4.attention.self.key.weight", "encoder.layer.4.attention.self.key.bias", "encoder.layer.4.attention.self.value.weight", "encoder.layer.4.attention.self.value.bias", "encoder.layer.4.attention.output.dense.weight", "encoder.layer.4.attention.output.dense.bias", "encoder.layer.4.attention.output.LayerNorm.weight", 
"encoder.layer.4.attention.output.LayerNorm.bias", "encoder.layer.4.intermediate.dense.weight", "encoder.layer.4.intermediate.dense.bias", "encoder.layer.4.output.dense.weight", "encoder.layer.4.output.dense.bias", "encoder.layer.4.output.LayerNorm.weight", "encoder.layer.4.output.LayerNorm.bias", "encoder.layer.5.attention.self.query.weight", "encoder.layer.5.attention.self.query.bias", "encoder.layer.5.attention.self.key.weight", "encoder.layer.5.attention.self.key.bias", "encoder.layer.5.attention.self.value.weight", "encoder.layer.5.attention.self.value.bias", "encoder.layer.5.attention.output.dense.weight", "encoder.layer.5.attention.output.dense.bias", "encoder.layer.5.attention.output.LayerNorm.weight", "encoder.layer.5.attention.output.LayerNorm.bias", "encoder.layer.5.intermediate.dense.weight", "encoder.layer.5.intermediate.dense.bias", "encoder.layer.5.output.dense.weight", "encoder.layer.5.output.dense.bias", "encoder.layer.5.output.LayerNorm.weight", "encoder.layer.5.output.LayerNorm.bias", "encoder.layer.6.attention.self.query.weight", "encoder.layer.6.attention.self.query.bias", "encoder.layer.6.attention.self.key.weight", "encoder.layer.6.attention.self.key.bias", "encoder.layer.6.attention.self.value.weight", "encoder.layer.6.attention.self.value.bias", "encoder.layer.6.attention.output.dense.weight", "encoder.layer.6.attention.output.dense.bias", "encoder.layer.6.attention.output.LayerNorm.weight", "encoder.layer.6.attention.output.LayerNorm.bias", "encoder.layer.6.intermediate.dense.weight", "encoder.layer.6.intermediate.dense.bias", "encoder.layer.6.output.dense.weight", "encoder.layer.6.output.dense.bias", "encoder.layer.6.output.LayerNorm.weight", "encoder.layer.6.output.LayerNorm.bias", "encoder.layer.7.attention.self.query.weight", "encoder.layer.7.attention.self.query.bias", "encoder.layer.7.attention.self.key.weight", "encoder.layer.7.attention.self.key.bias", "encoder.layer.7.attention.self.value.weight", "encoder.layer.7.attention.self.value.bias", "encoder.layer.7.attention.output.dense.weight", "encoder.layer.7.attention.output.dense.bias", "encoder.layer.7.attention.output.LayerNorm.weight", "encoder.layer.7.attention.output.LayerNorm.bias", "encoder.layer.7.intermediate.dense.weight", "encoder.layer.7.intermediate.dense.bias", "encoder.layer.7.output.dense.weight", "encoder.layer.7.output.dense.bias", "encoder.layer.7.output.LayerNorm.weight", "encoder.layer.7.output.LayerNorm.bias", "encoder.layer.8.attention.self.query.weight", "encoder.layer.8.attention.self.query.bias", "encoder.layer.8.attention.self.key.weight", "encoder.layer.8.attention.self.key.bias", "encoder.layer.8.attention.self.value.weight", "encoder.layer.8.attention.self.value.bias", "encoder.layer.8.attention.output.dense.weight", "encoder.layer.8.attention.output.dense.bias", "encoder.layer.8.attention.output.LayerNorm.weight", "encoder.layer.8.attention.output.LayerNorm.bias", "encoder.layer.8.intermediate.dense.weight", "encoder.layer.8.intermediate.dense.bias", "encoder.layer.8.output.dense.weight", "encoder.layer.8.output.dense.bias", "encoder.layer.8.output.LayerNorm.weight", "encoder.layer.8.output.LayerNorm.bias", "encoder.layer.9.attention.self.query.weight", "encoder.layer.9.attention.self.query.bias", "encoder.layer.9.attention.self.key.weight", "encoder.layer.9.attention.self.key.bias", "encoder.layer.9.attention.self.value.weight", "encoder.layer.9.attention.self.value.bias", "encoder.layer.9.attention.output.dense.weight", "encoder.layer.9.attention.output.dense.bias", 
"encoder.layer.9.attention.output.LayerNorm.weight", "encoder.layer.9.attention.output.LayerNorm.bias", "encoder.layer.9.intermediate.dense.weight", "encoder.layer.9.intermediate.dense.bias", "encoder.layer.9.output.dense.weight", "encoder.layer.9.output.dense.bias", "encoder.layer.9.output.LayerNorm.weight", "encoder.layer.9.output.LayerNorm.bias", "encoder.layer.10.attention.self.query.weight", "encoder.layer.10.attention.self.query.bias", "encoder.layer.10.attention.self.key.weight", "encoder.layer.10.attention.self.key.bias", "encoder.layer.10.attention.self.value.weight", "encoder.layer.10.attention.self.value.bias", "encoder.layer.10.attention.output.dense.weight", "encoder.layer.10.attention.output.dense.bias", "encoder.layer.10.attention.output.LayerNorm.weight", "encoder.layer.10.attention.output.LayerNorm.bias", "encoder.layer.10.intermediate.dense.weight", "encoder.layer.10.intermediate.dense.bias", "encoder.layer.10.output.dense.weight", "encoder.layer.10.output.dense.bias", "encoder.layer.10.output.LayerNorm.weight", "encoder.layer.10.output.LayerNorm.bias", "encoder.layer.11.attention.self.query.weight", "encoder.layer.11.attention.self.query.bias", "encoder.layer.11.attention.self.key.weight", "encoder.layer.11.attention.self.key.bias", "encoder.layer.11.attention.self.value.weight", "encoder.layer.11.attention.self.value.bias", "encoder.layer.11.attention.output.dense.weight", "encoder.layer.11.attention.output.dense.bias", "encoder.layer.11.attention.output.LayerNorm.weight", "encoder.layer.11.attention.output.LayerNorm.bias", "encoder.layer.11.intermediate.dense.weight", "encoder.layer.11.intermediate.dense.bias", "encoder.layer.11.output.dense.weight", "encoder.layer.11.output.dense.bias", "encoder.layer.11.output.LayerNorm.weight", "encoder.layer.11.output.LayerNorm.bias", "pooler.dense.weight", "pooler.dense.bias".
Unexpected key(s) in state_dict: "bert.embeddings.position_ids", "bert.embeddings.word_embeddings.weight", "bert.embeddings.position_embeddings.weight", "bert.embeddings.token_type_embeddings.weight", "bert.embeddings.LayerNorm.weight", "bert.embeddings.LayerNorm.bias", "bert.encoder.layer.0.attention.self.query.weight", "bert.encoder.layer.0.attention.self.query.bias", "bert.encoder.layer.0.attention.self.key.weight", "bert.encoder.layer.0.attention.self.key.bias", "bert.encoder.layer.0.attention.self.value.weight", "bert.encoder.layer.0.attention.self.value.bias", "bert.encoder.layer.0.attention.output.dense.weight", "bert.encoder.layer.0.attention.output.dense.bias", "bert.encoder.layer.0.attention.output.LayerNorm.weight", "bert.encoder.layer.0.attention.output.LayerNorm.bias", "bert.encoder.layer.0.intermediate.dense.weight", "bert.encoder.layer.0.intermediate.dense.bias", "bert.encoder.layer.0.output.dense.weight", "bert.encoder.layer.0.output.dense.bias", "bert.encoder.layer.0.output.LayerNorm.weight", "bert.encoder.layer.0.output.LayerNorm.bias", "bert.encoder.layer.1.attention.self.query.weight", "bert.encoder.layer.1.attention.self.query.bias", "bert.encoder.layer.1.attention.self.key.weight", "bert.encoder.layer.1.attention.self.key.bias", "bert.encoder.layer.1.attention.self.value.weight", "bert.encoder.layer.1.attention.self.value.bias", "bert.encoder.layer.1.attention.output.dense.weight", "bert.encoder.layer.1.attention.output.dense.bias", "bert.encoder.layer.1.attention.output.LayerNorm.weight", "bert.encoder.layer.1.attention.output.LayerNorm.bias", "bert.encoder.layer.1.intermediate.dense.weight", "bert.encoder.layer.1.intermediate.dense.bias", "bert.encoder.layer.1.output.dense.weight", "bert.encoder.layer.1.output.dense.bias", "bert.encoder.layer.1.output.LayerNorm.weight", "bert.encoder.layer.1.output.LayerNorm.bias", "bert.encoder.layer
```
Every parameter name that you saved has a "bert" prefix to it, because when you defined your `BertClassifier`, you probably defined the `BertModel` inside it using `self.bert = BertModel.from_pretrained("...")`.
So of course, you can't load it back into a `BertModel` without first removing the "bert" prefix from all parameter names. Do you understand? You should however be able to directly load the weights into a `BertClassifier`.<|||||>Oh ok, so basically I have to remove the `bert.` prefix? And how would I do this?<|||||>Is there a reason you don't want to load your weights into a `BertClassifier`, but only the `BertModel`? Because this:
```
model = BertClassifier(freeze_bert=False)
model.load_state_dict(torch.load('finetuned_model.pt')))
```
should work. In case you only want to have a `BertModel`, then you'll need to remove the "bert" prefix from the parameter names. This can be done as follows:
```
import torch
from transformers import BertModel, BertConfig
model = BertClassifier(freeze_bert=False)
model.load_state_dict(torch.load('finetuned_model.pt'))
new_state_dict = dict()
for name, param in model.state_dict().items():
    # keep only the parameters of the wrapped BertModel and strip the "bert." prefix (5 characters, not 4)
    if name.startswith("bert."):
        new_state_dict[name[len("bert."):]] = param
config = BertConfig.from_pretrained('bert-base-multilingual-uncased', num_labels=2)
model = BertModel.from_pretrained('bert-base-multilingual-uncased')
for name, param in model.state_dict().items():
model.state_dict()[name].copy_(new_state_dict[name])
```
<|||||>So the thing is, I trained and defined the `BertClassifier` in one .py file, and in another .py file I want to use the fine-tuned model on user input data. If I do `model = BertClassifier(freeze_bert=False)` I will have to import `from bert import BertClassifier`, and when I run the code, it starts the training of the model again...
**Update:** So if I use the above code and after that `torch.save` the `model.state_dict()`, will it be the same fine-tuned model? Will I have the same accuracy? <|||||>If I just take the definition of the `BertClassifier` class in the .py file where I want to test the model on user input and do the following:
```
model = BertClassifier(freeze_bert=False)
model.load_state_dict(torch.load('finetuned_model.pt'))
```
Will that be a workaround? As I said before, if I just import the `BertClassifier` from the other .py file where I trained it, this file will start the training again. Doing the above, I see that it doesn't start the training, and `print(model.state_dict().keys())` returns `"bert.embeddings.position_ids", "bert.embeddings.word_embeddings.weight"`, which should work fine if I'm not wrong. <|||||>It's weird that when you import the model, it starts the training again. You only need to import the definition of the model, not the training-related code (see the sketch below).
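One common way to keep the training from running on import is to guard it behind `if __name__ == "__main__":`. A minimal sketch (the file name `bert.py` and the `train()` helper are placeholders for your own script):
```python
# bert.py (hypothetical layout of the training script)
import torch
import torch.nn as nn

class BertClassifier(nn.Module):
    def __init__(self, freeze_bert=False):
        super().__init__()
        # ... BERT + LSTM + linear layers as in the question ...

def train():
    model = BertClassifier(freeze_bert=False)
    # ... training loop ...
    torch.save(model.state_dict(), 'finetuned_model.pt')

if __name__ == "__main__":
    # runs only for `python bert.py`, not for `from bert import BertClassifier`
    train()
```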
```
model = BertClassifier(freeze_bert=False)
model.load_state_dict(torch.load('finetuned_model.pt'))
```
This should work indeed. <|||||>Many thanks for the help! |
transformers | 11,842 | closed | Fix bug in Masked Language Modeling example scripts (#11840) | # What does this PR do?
When `data_args.line_by_line == False`, the script first converts the given examples into `input_ids`, `token_type_ids`, `attention_mask`, and `special_tokens_mask`, including the cls_token and sep_token. Then it concatenates all tokenized outputs and generates chunks of `max_seq_length`. However, this produces unintended training examples such as [871, 512, 2492, 1111, 947, 533] instead of [2 (cls_token), 512, 2492, 1111, 947, 3 (sep_token)]. This PR fixes that problem.
Fixes #11840
@sgugger, @patil-suraj | 05-24-2021 10:27:35 | 05-24-2021 10:27:35 | The `line_by_line=False` argument is not supposed to be used for BERT-like pretraining objectives, it is there to do GPT-like pretraining. Maybe it does not make sense to have it in `run_mlm` at all.
In any case this fix will not necessarily work for all models supported by the script, as the special tokens may be slightly different than what is hard-coded.<|||||>#10737 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,841 | closed | Generate function call throws an error when the "inputs_embeds" argument is passed | When using `inputs_embeds` as the argument instead of `input_ids` while trying to generate text with a GPT-2 model, an error pops up about `input_ids`.
```
from transformers import GPT2Tokenizer, GPT2LMHeadModel, GPT2Config
import transformers
import torch
import torch.nn as nn
model = GPT2LMHeadModel.from_pretrained('gpt2')
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
some_random_texts = "This is a nice place to eat"
tokenized_text = tokenizer.encode(some_random_texts, return_tensors='pt')
tokenized_text_embeds = model.transformer.wte(tokenized_text)
output = model.generate(inputs_embeds=tokenized_text_embeds, max_length=50)
```
The error generated is:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-2-643fe4803ba4> in <module>
12 tokenized_text_embeds = model.transformer.wte(tokenized_text)
13
---> 14 output = model.generate(inputs_embeds=tokenized_text_embeds, max_length=50)
C:\ProgramData\Anaconda3\envs\hugging_face\lib\site-packages\torch\autograd\grad_mode.py in decorate_context(*args, **kwargs)
24 def decorate_context(*args, **kwargs):
25 with self.__class__():
---> 26 return func(*args, **kwargs)
27 return cast(F, decorate_context)
28
C:\ProgramData\Anaconda3\envs\hugging_face\lib\site-packages\transformers\generation_utils.py in generate(self, input_ids, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, repetition_penalty, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, encoder_no_repeat_ngram_size, num_return_sequences, max_time, decoder_start_token_id, use_cache, num_beam_groups, diversity_penalty, prefix_allowed_tokens_fn, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, forced_bos_token_id, forced_eos_token_id, remove_invalid_values, synced_gpus, **model_kwargs)
891 # init `attention_mask` depending on `pad_token_id`
892 model_kwargs["attention_mask"] = self._prepare_attention_mask_for_generation(
--> 893 input_ids, pad_token_id, eos_token_id
894 )
895
C:\ProgramData\Anaconda3\envs\hugging_face\lib\site-packages\transformers\generation_utils.py in _prepare_attention_mask_for_generation(self, input_ids, pad_token_id, eos_token_id)
401 if is_pad_token_in_inputs_ids and is_pad_token_not_equal_to_eos_token_id:
402 return input_ids.ne(pad_token_id).long()
--> 403 return input_ids.new_ones(input_ids.shape, dtype=torch.long)
404
405 def _prepare_encoder_decoder_kwargs_for_generation(
AttributeError: 'NoneType' object has no attribute 'new_ones'
```
While trying to solve this issue, I found that the `attention_mask` argument needs to be included too. However, even when this avoids the `self._prepare_attention_mask_for_generation` call, another error pops up.
The changes made are as follows:
```
from transformers import GPT2Tokenizer, GPT2LMHeadModel, GPT2Config
import transformers
import torch
import torch.nn as nn
model = GPT2LMHeadModel.from_pretrained('gpt2')
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
some_random_texts = "This is a nice place to eat"
tokenized_text = tokenizer.encode(some_random_texts, return_tensors='pt')
tokenized_text_embeds = model.transformer.wte(tokenized_text)
att_mask = torch.ones(tokenized_text.shape[1])
output = model.generate(inputs_embeds=tokenized_text_embeds, attention_mask=att_mask, max_length=50)
```
and the error that pops up now is:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-7-17cdf18aa52b> in <module>
14 att_mask = torch.ones(tokenized_text.shape[1])
15
---> 16 output = model.generate(inputs_embeds=tokenized_text_embeds, attention_mask=att_mask, max_length=50)
C:\ProgramData\Anaconda3\envs\hugging_face\lib\site-packages\torch\autograd\grad_mode.py in decorate_context(*args, **kwargs)
24 def decorate_context(*args, **kwargs):
25 with self.__class__():
---> 26 return func(*args, **kwargs)
27 return cast(F, decorate_context)
28
C:\ProgramData\Anaconda3\envs\hugging_face\lib\site-packages\transformers\generation_utils.py in generate(self, input_ids, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, repetition_penalty, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, encoder_no_repeat_ngram_size, num_return_sequences, max_time, decoder_start_token_id, use_cache, num_beam_groups, diversity_penalty, prefix_allowed_tokens_fn, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, forced_bos_token_id, forced_eos_token_id, remove_invalid_values, synced_gpus, **model_kwargs)
917 raise ValueError("Make sure that `model_kwargs` include `encoder_outputs` of type `ModelOutput`.")
918
--> 919 if input_ids.shape[-1] >= max_length:
920 input_ids_string = "decoder_input_ids" if self.config.is_encoder_decoder else "input_ids"
921 logger.warning(
AttributeError: 'NoneType' object has no attribute 'shape'
```
Am I doing something wrong here, or is there a bug in the code of `generation_utils.GenerationMixin.generate()`?
Version control:
transformers: 4.6.1
torch: 1.7.1
python: 3.7.4
@patrickvonplaten | 05-24-2021 10:16:42 | 05-24-2021 10:16:42 | Hey @abhikasd6523,
I don't think that `generate()` currently supports `inputs_embeds` correctly. It would require quite some changes in `generate()` to make it work, I'm afraid. Can you give me some more background on your use case for passing `inputs_embeds` instead of `input_ids`? If it's a general enough use case, I think we could try to make the required changes to `generate()`.<|||||>Thanks a lot for replying.
I am trying to connect a custom encoder to the GPT-2 model and want to pass the encoder's last-layer hidden states as input embeddings. The goal is to generate a random sentence with this type of connection and architecture.<|||||>I see - did you try to directly use the `sample(...)` method? I think that could work.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,840 | closed | Bug in MLM example scripts | When `data_args.line_by_line == False`, the script first converts the given examples into `input_ids`, `token_type_ids`, `attention_mask`, and `special_tokens_mask` **including cls_token and sep_token**. Then it concatenates all tokenized outputs and generates chunks of `max_seq_length`. However, this will generate unintended training examples such as [871, 512, 2492, 1111, 947, 533] instead of [2 (cls_token), 512, 2492, 1111, 947, 3 (sep_token)], as illustrated below.
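For illustration, here is a minimal sketch of the concatenate-then-chunk behaviour (the token ids are made up, with 2 = cls_token and 3 = sep_token); the relevant part of `run_mlm.py` is quoted after it.
```python
# two tokenized sentences, each wrapped in special tokens [CLS]=2 ... [SEP]=3
tokenized = [[2, 512, 2492, 1111, 3], [2, 947, 533, 871, 3]]

# group_texts-style processing: concatenate everything, then cut fixed-size chunks
concatenated = sum(tokenized, [])
max_seq_length = 4
total_length = len(concatenated) // max_seq_length * max_seq_length
chunks = [concatenated[i:i + max_seq_length] for i in range(0, total_length, max_seq_length)]
print(chunks)  # [[2, 512, 2492, 1111], [3, 2, 947, 533]] -> chunks no longer start with [CLS] or end with [SEP]
```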
```python
if data_args.line_by_line:
# When using line_by_line, we just tokenize each nonempty line.
padding = "max_length" if data_args.pad_to_max_length else False
def tokenize_function(examples):
# Remove empty lines
examples["text"] = [line for line in examples["text"] if len(line) > 0 and not line.isspace()]
return tokenizer(
examples["text"],
padding=padding,
truncation=True,
max_length=max_seq_length,
# We use this option because DataCollatorForLanguageModeling (see below) is more efficient when it
# receives the `special_tokens_mask`.
return_special_tokens_mask=True,
)
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
num_proc=data_args.preprocessing_num_workers,
remove_columns=[text_column_name],
load_from_cache_file=not data_args.overwrite_cache,
)
else:
# Otherwise, we tokenize every text, then concatenate them together before splitting them in smaller parts.
# We use `return_special_tokens_mask=True` because DataCollatorForLanguageModeling (see below) is more
# efficient when it receives the `special_tokens_mask`.
def tokenize_function(examples):
return tokenizer(examples[text_column_name], return_special_tokens_mask=True)
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
num_proc=data_args.preprocessing_num_workers,
remove_columns=column_names,
load_from_cache_file=not data_args.overwrite_cache,
)
# Main data processing function that will concatenate all texts from our dataset and generate chunks of
# max_seq_length.
def group_texts(examples):
# Concatenate all texts.
concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
total_length = len(concatenated_examples[list(examples.keys())[0]])
# We drop the small remainder, we could add padding if the model supported it instead of this drop, you can
# customize this part to your needs.
total_length = (total_length // max_seq_length) * max_seq_length
# Split by chunks of max_len.
result = {
k: [t[i : i + max_seq_length] for i in range(0, total_length, max_seq_length)]
for k, t in concatenated_examples.items()
}
return result
# Note that with `batched=True`, this map processes 1,000 texts together, so group_texts throws away a
# remainder for each of those groups of 1,000 texts. You can adjust that batch_size here but a higher value
# might be slower to preprocess.
#
# To speed up this part, we use multiprocessing. See the documentation of the map method for more information:
# https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map
tokenized_datasets = tokenized_datasets.map(
group_texts,
batched=True,
num_proc=data_args.preprocessing_num_workers,
load_from_cache_file=not data_args.overwrite_cache,
)
``` | 05-24-2021 09:43:09 | 05-24-2021 09:43:09 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,839 | closed | [Flax] Fix PyTorch import error | # What does this PR do?
Running `run_mlm_flax.py` should not have to rely on a PyTorch import. Thanks for spotting this error @marcvanzee !
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 05-24-2021 09:35:52 | 05-24-2021 09:35:52 | |
transformers | 11,838 | closed | Is 10% in annotation different from 0.5 in code? | https://github.com/huggingface/transformers/blob/0cbddfb190ab9b05b6575fbf818aae17bad4d24a/src/transformers/data/data_collator.py#L387
```python
    # 10% of the time, we replace masked input tokens with random word
    indices_random = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~indices_replaced
    random_words = torch.randint(len(self.tokenizer), labels.shape, dtype=torch.long)
    inputs[indices_random] = random_words[indices_random]
``` | 05-24-2021 08:55:58 | 05-24-2021 08:55:58 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,837 | closed | Module torch has no attribute minimum for modeling_big_bird.py | Hello
I came across `module 'torch' has no attribute minimum` from the following two lines
1.https://github.com/huggingface/transformers/blob/73fde1defe9be259a47b9024525882f3ec420994/src/transformers/models/big_bird/modeling_big_bird.py#L662
2.https://github.com/huggingface/transformers/blob/73fde1defe9be259a47b9024525882f3ec420994/src/transformers/models/big_bird/modeling_big_bird.py#L796
I think `torch.minimum` should be replaced with `torch.min` | 05-24-2021 06:31:07 | 05-24-2021 06:31:07 | `torch.minimum` was only added in August 2020 to PyTorch, so `torch.minimum` is probably only part of torch 1.7+. To work for previous versions, it should indeed be replaced by `torch.min`.
The README of this repository states that: "This repository is tested on Python 3.6+, Flax 0.3.2+, PyTorch 1.3.1+ and TensorFlow 2.3+."
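For reference, `torch.min` with two tensor arguments already computes the element-wise minimum on older PyTorch versions. A quick check with made-up tensors:
```python
import torch

a = torch.tensor([1.0, 4.0, 2.0])
b = torch.tensor([3.0, 0.0, 2.5])

print(torch.min(a, b))  # elementwise minimum: [1.0, 0.0, 2.0], available well before torch 1.7
# torch.minimum(a, b) returns the same result but only exists in torch >= 1.7
```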
cc @vasudevgupta7<|||||>This bug is making it impossible to use BigBird in combination with the AWS HuggingFace setup, as that one is currently restricted to PyTorch 1.6. (https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face)
Are there any plans to fix modeling_big_bird.py so that it is backward compatible, or to agree with AWS on supporting a newer version of PyTorch for the HuggingFace containers?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,836 | closed | Not able to fine-tune language model | I am trying to fine-tune a language model using the SageMaker Hugging Face API.
I am using the following code:
```
import sagemaker
from sagemaker.huggingface import HuggingFace
# gets role for executing training job
role = sagemaker.get_execution_role()
container_data_train = '/opt/ml/input/data/training'
container_data_test = '/opt/ml/input/data/testing'
container_model_dir = '/opt/ml/model'

hyperparameters = {
    'model_name_or_path': 'EleutherAI/gpt-neo-1.3B',
    'data_dir': container_data_train,
    'output_dir': container_model_dir,
    # add your remaining hyperparameters
    # more info here https://github.com/huggingface/transformers/tree/v4.4.2/examples/language-modeling
}

# git configuration to download our fine-tuning script
git_config = {'repo': 'https://github.com/huggingface/transformers.git', 'branch': 'v4.4.2'}
# creates Hugging Face estimator
huggingface_estimator = HuggingFace(
entry_point='run_mlm.py',
source_dir='./examples/language-modeling',
instance_type='ml.p3.2xlarge',
instance_count=1,
role=role,
git_config=git_config,
transformers_version='4.4.2',
pytorch_version='1.6.0',
py_version='py36',
hyperparameters = hyperparameters
)
# starting the train job
huggingface_estimator.fit({'train':'s3://train-data-gpt/'})
```
The input data is there. It is a .txt file with english sentences in each line. However after any number of try I am getting error as
**File "run_mlm.py", line 170, in __post_init__
raise ValueError("Need either a dataset name or a training/validation file.")
ValueError: Need either a dataset name or a training/validation file.**
However, the training job was launched; here is the log:
```
Training Env:
{
"additional_framework_parameters": {},
"channel_input_dirs": {
"train": "/opt/ml/input/data/train"
},
"current_host": "algo-1",
"framework_module": "sagemaker_pytorch_container.training:main",
"hosts": [
"algo-1"
],
"hyperparameters": {
"output_dir": "/opt/ml/model",
"model_name_or_path": "EleutherAI/gpt-neo-1.3B",
"data_dir": "/opt/ml/input/data/training"
},
"input_config_dir": "/opt/ml/input/config",
"input_data_config": {
"train": {
"TrainingInputMode": "File",
"S3DistributionType": "FullyReplicated",
"RecordWrapperType": "None"
}
},
"input_dir": "/opt/ml/input",
"is_master": true,
"job_name": "huggingface-pytorch-training-2021-05-24-06-11-02-967",
"log_level": 20,
"master_hostname": "algo-1",
"model_dir": "/opt/ml/model",
"module_dir": "s3://sagemaker-us-west-2-928825629101/huggingface-pytorch-training-2021-05-24-06-11-02-967/source/sourcedir.tar.gz",
"module_name": "run_mlm",
"network_interface_name": "eth0",
"num_cpus": 8,
"num_gpus": 1,
"output_data_dir": "/opt/ml/output/data",
"output_dir": "/opt/ml/output",
"output_intermediate_dir": "/opt/ml/output/intermediate",
"resource_config": {
"current_host": "algo-1",
"hosts": [
"algo-1"
],
"network_interface_name": "eth0"
},
"user_entry_point": "run_mlm.py"
}
Environment variables:
SM_HOSTS=["algo-1"]
SM_NETWORK_INTERFACE_NAME=eth0
SM_HPS={"data_dir":"/opt/ml/input/data/training","model_name_or_path":"EleutherAI/gpt-neo-1.3B","output_dir":"/opt/ml/model"}
SM_USER_ENTRY_POINT=run_mlm.py
SM_FRAMEWORK_PARAMS={}
SM_RESOURCE_CONFIG={"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"eth0"}
SM_INPUT_DATA_CONFIG={"train":{"RecordWrapperType":"None","S3DistributionType":"FullyReplicated","TrainingInputMode":"File"}}
SM_OUTPUT_DATA_DIR=/opt/ml/output/data
SM_CHANNELS=["train"]
SM_CURRENT_HOST=algo-1
SM_MODULE_NAME=run_mlm
SM_LOG_LEVEL=20
SM_FRAMEWORK_MODULE=sagemaker_pytorch_container.training:main
SM_INPUT_DIR=/opt/ml/input
SM_INPUT_CONFIG_DIR=/opt/ml/input/config
SM_OUTPUT_DIR=/opt/ml/output
SM_NUM_CPUS=8
SM_NUM_GPUS=1
SM_MODEL_DIR=/opt/ml/model
SM_MODULE_DIR=s3://sagemaker-us-west-2-928825629101/huggingface-pytorch-training-2021-05-24-06-11-02-967/source/sourcedir.tar.gz
SM_TRAINING_ENV={"additional_framework_parameters":{},"channel_input_dirs":{"train":"/opt/ml/input/data/train"},"current_host":"algo-1","framework_module":"sagemaker_pytorch_container.training:main","hosts":["algo-1"],"hyperparameters":{"data_dir":"/opt/ml/input/data/training","model_name_or_path":"EleutherAI/gpt-neo-1.3B","output_dir":"/opt/ml/model"},"input_config_dir":"/opt/ml/input/config","input_data_config":{"train":{"RecordWrapperType":"None","S3DistributionType":"FullyReplicated","TrainingInputMode":"File"}},"input_dir":"/opt/ml/input","is_master":true,"job_name":"huggingface-pytorch-training-2021-05-24-06-11-02-967","log_level":20,"master_hostname":"algo-1","model_dir":"/opt/ml/model","module_dir":"s3://sagemaker-us-west-2-928825629101/huggingface-pytorch-training-2021-05-24-06-11-02-967/source/sourcedir.tar.gz","module_name":"run_mlm","network_interface_name":"eth0","num_cpus":8,"num_gpus":1,"output_data_dir":"/opt/ml/output/data","output_dir":"/opt/ml/output","output_intermediate_dir":"/opt/ml/output/intermediate","resource_config":{"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"eth0"},"user_entry_point":"run_mlm.py"}
SM_USER_ARGS=["--data_dir","/opt/ml/input/data/training","--model_name_or_path","EleutherAI/gpt-neo-1.3B","--output_dir","/opt/ml/model"]
SM_OUTPUT_INTERMEDIATE_DIR=/opt/ml/output/intermediate
SM_CHANNEL_TRAIN=/opt/ml/input/data/train
SM_HP_OUTPUT_DIR=/opt/ml/model
SM_HP_MODEL_NAME_OR_PATH=EleutherAI/gpt-neo-1.3B
SM_HP_DATA_DIR=/opt/ml/input/data/training
PYTHONPATH=/opt/ml/code:/opt/conda/bin:/opt/conda/lib/python36.zip:/opt/conda/lib/python3.6:/opt/conda/lib/python3.6/lib-dynload:/opt/conda/lib/python3.6/site-packages
Invoking script with the following command:
/opt/conda/bin/python3.6 run_mlm.py --data_dir /opt/ml/input/data/training --model_name_or_path EleutherAI/gpt-neo-1.3B --output_dir /opt/ml/model
```
@philschmid @sgugger | 05-24-2021 06:27:15 | 05-24-2021 06:27:15 | Hey @ghoshmithun,
This applies when you are using the `examples/` scripts with SageMaker and custom data on S3, e.g. `huggingface_estimator.fit({'train':'s3://train-data-gpt/'})`.
You need to provide the hyperparameter `train_file` with the path to your file from s3. In your case, this would be `/opt/ml/input/data/train/my_train_file.csv`.
[reference to `train_file` parameter defined in the `run_mlm.py`](https://github.com/huggingface/transformers/blob/6da129cb3152d93c425aab08a92d68c99e09d252/examples/pytorch/language-modeling/run_mlm.py#L114)
[documentation for language-modelling](https://github.com/huggingface/transformers/tree/master/examples/pytorch/language-modeling#language-model-training)
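Concretely, the hyperparameters could look something like this (a sketch; `my_train_file.txt` is just an example name for whatever you uploaded to S3):
```python
hyperparameters = {
    'model_name_or_path': 'EleutherAI/gpt-neo-1.3B',
    'output_dir': '/opt/ml/model',
    # SageMaker downloads the "train" channel to /opt/ml/input/data/train inside the container
    'train_file': '/opt/ml/input/data/train/my_train_file.txt',
    'do_train': True,
}
```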
P.S. I am not sure that `masked language modeling` is the preferred task for `GPT-Neo`. I think it is `causal language modeling`, as for `GPT-2`.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,835 | closed | Tiny fix in README.md of run_flax_mlm | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 05-24-2021 06:01:28 | 05-24-2021 06:01:28 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,834 | closed | convert_pytorch_checkpoint_to_tf2.py AttributeError: embeddings.word_embeddings.weight not found in PyTorch model | I am trying to convert a fine-tuned BERT model to TensorFlow. The model was fine-tuned using pytorch-pretrained-bert on bert-base-multilingual-cased, but I am getting the following error while trying to convert using the tuned checkpoint.
code:
```
from transformers import convert_pytorch_checkpoint_to_tf2
convert_pytorch_checkpoint_to_tf2.convert_pt_checkpoint_to_tf("bert", "best_network.pt",
"bert-base-multilingual-cased",
"bert_aligned.ckpt")
```
error:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-16-e0bc4c4758a1> in <module>
1 convert_pytorch_checkpoint_to_tf2.convert_pt_checkpoint_to_tf("bert", "best_network.pt",
2 "bert-base-multilingual-cased",
----> 3 "bert_aligned.ckpt")
~/opt/anaconda3/lib/python3.7/site-packages/transformers/convert_pytorch_checkpoint_to_tf2.py in convert_pt_checkpoint_to_tf(model_type, pytorch_checkpoint_path, config_file, tf_dump_path, compare_with_pt_model, use_cached_models)
271 pytorch_checkpoint_path = cached_path(pytorch_checkpoint_url, force_download=not use_cached_models)
272 # Load PyTorch checkpoint in tf2 model:
--> 273 tf_model = load_pytorch_checkpoint_in_tf2_model(tf_model, pytorch_checkpoint_path)
274
275 if compare_with_pt_model:
~/opt/anaconda3/lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py in load_pytorch_checkpoint_in_tf2_model(tf_model, pytorch_checkpoint_path, tf_inputs, allow_missing_keys)
91
92 return load_pytorch_weights_in_tf2_model(
---> 93 tf_model, pt_state_dict, tf_inputs=tf_inputs, allow_missing_keys=allow_missing_keys
94 )
95
~/opt/anaconda3/lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py in load_pytorch_weights_in_tf2_model(tf_model, pt_state_dict, tf_inputs, allow_missing_keys)
166 continue
167
--> 168 raise AttributeError("{} not found in PyTorch model".format(name))
169
170 array = pt_state_dict[name].numpy()
AttributeError: embeddings.word_embeddings.weight not found in PyTorch model
``` | 05-24-2021 02:29:59 | 05-24-2021 02:29:59 | Are you trying to convert it a HuggingFace TensorFlow object? If so can you do the following?
```
from transformers import TFBertForPreTraining
model = TFBertForPreTraining.from_pretrained(path_to_checkpoint, from_pt=True)
```
How did you fine-tune your model? How did you save it? Did you train it using HuggingFace transformers? Can you load it back in a PyTorch object or is it failing too?<|||||>@LysandreJik thanks for the reply.
this was pretrained to do multilingual alignment and trained using pytorch-pretrained-bert. Example code snippet:
```
def get_bert(bert_model, bert_do_lower_case):
from pytorch_pretrained_bert import BertTokenizer, BertModel
    tokenizer = BertTokenizer.from_pretrained(bert_model, do_lower_case=bert_do_lower_case)
    bert = BertModel.from_pretrained(bert_model)
    return tokenizer, bert
class WordLevelBert(nn.Module):
""" Runs BERT on sentences but only keeps the last subword embedding for each word. """
def __init__(self, model, do_lower_case):
super().__init__()
self.bert_tokenizer, self.bert = get_bert(model, do_lower_case)
self.dim = self.bert.pooler.dense.in_features
self.max_len = self.bert.embeddings.position_embeddings.num_embeddings
if use_cuda:
self.cuda()
def forward(self, sentences, include_clssep = True):
batch_size = 128
ann_full = None
for i in range(0, len(sentences), batch_size):
ann = self.annotate(sentences[i:i+batch_size], include_clssep = include_clssep)
.....
```
and I saved it in the following manner after training:
```
torch.save({'state_dict': model.state_dict(), 'trainer' : trainer.state_dict(),}, 'best_network.pt')
```
Update:
I could get rid of the error by making start_prefix_to_remove="" and by making pt_state_dict=pt_state_dict['state_dict'] in the file: https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_tf_pytorch_utils.py
But now I get this new error:
```
~/opt/anaconda3/lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py in load_pytorch_checkpoint_in_tf2_model(tf_model, pytorch_checkpoint_path, tf_inputs, allow_missing_keys)
92
93 return load_pytorch_weights_in_tf2_model(
---> 94 tf_model, pt_state_dict, tf_inputs=tf_inputs, allow_missing_keys=allow_missing_keys
95 )
96
~/opt/anaconda3/lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py in load_pytorch_weights_in_tf2_model(tf_model, pt_state_dict, tf_inputs, allow_missing_keys)
169 continue
170
--> 171 raise AttributeError("{} not found in PyTorch model".format(name))
172
173 array = pt_state_dict[name].numpy()
AttributeError: cls.seq_relationship.weight not found in PyTorch model
```
the fine tuned state dict can be loaded fine by:
```
from pytorch_pretrained_bert import BertTokenizer, BertModel
bert = BertModel.from_pretrained('bert-base-multilingual-cased')
tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased', do_lower_case = False)
bert.load_state_dict(torch.load('best_network.pt')['state_dict'])
```
And I got a similar type of error while fine-tuning and saving with a tutorial notebook, this one: https://www.kaggle.com/eggwhites2705/transformers-multi-label-classification
----------------------------------------------------------------------------------------------
I am using this code now and got a TF model:
```
from transformers import TFBertModel
model = TFBertModel.from_pretrained("./demo_model", from_pt=True)
model.save("./demo_tf")
```
I got a .pb model and variable files like .data and .index but no .meta file. My aim is to use these .data and .index files instead of the original BERT initial checkpoint in the tydiqa code: https://github.com/google-research-datasets/tydiqa/tree/master/baseline
<|||||>Thank you for clarifying! In this case, if you have a PyTorch model that correctly loads (we recommend always using `from_pretrained`/`save_pretrained` rather than `.save()` and `torch.load`) that you want to convert to the "original" TensorFlow, then you should be able to use this script:
https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/convert_bert_pytorch_checkpoint_to_original_tf.py
It will convert your model to a TF1 model usable in Google's original repository.<|||||>@LysandreJik thanks a lot. This solved my issue. Can you give me an idea about one thing: when I saved the tensorflow checkpoint with TFBertModel.from_pretrained("./demo_model", from_pt=True).save(..) the saved checkpoint was smaller in size like 411 mb. Today I got the right sized checkpoint with the script you suggested (711 mb)....why the size was smaller in previous approach?<|||||>I'm not entirely sure, but I guess it would make sense for the sizes to be different as the saving format is different between TFBertModel (TF2) and the TF1 saved checkpoint. Maybe our TF expert @Rocketknight1 has more insights :)<|||||>No idea, unfortunately! I don't **think** the format changed that massively between TF1 and TF2.
One thing that strikes me is that 711 MB for a bert-base model is quite large: with 110M parameters, we should expect it to take up about 110*4 = 440 MB of space uncompressed, because each parameter is a 32-bit (4-byte) float. That said, if it works, don't question it™<|||||>Thanks @LysandreJik and @Rocketknight1, this is helpful. Yes, it took 440 MB earlier, but when I used the TF1 conversion script it took 711 MB. I will fine-tune TyDi QA on both of these models and see if there is any difference in performance. |
transformers | 11,833 | closed | [BUG] Trainer predict bug under DDP mode | ### Background
The model is trained with DDP.
The error indicates that the batch is smaller than the model requires.
However, the test file cannot use `drop_last`.
How can I run prediction on the test file with DDP, or remove DDP?
### Code
```
training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=10, # total number of training epochs
per_device_train_batch_size=2, # batch size per device during training
per_device_eval_batch_size=8, # batch size for evaluation
warmup_ratio=0.1, # number of warmup steps for learning rate scheduler
weight_decay=1e-2, # strength of weight decay
logging_dir='./logs', # directory for storing logs
logging_steps=100,
evaluation_strategy='epoch',
gradient_accumulation_steps=4,
metric_for_best_model="f1",
fp16=True
)
trainer = Trainer(
model=model, # the instantiated Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
eval_dataset=val_dataset, # evaluation dataset
compute_metrics=compute_metrics
)
trainer.train()
test_dataset = WNUTDataset(test_encodings)
predictions, labels, _ = trainer.predict(test_dataset)
predictions = np.argmax(predictions, axis=2)
```
### Bugs
```
~/anaconda3/lib/python3.7/site-packages/torch/nn/parallel/_functions.py in forward(ctx, target_device, dim, *inputs)
66 ctx.unsqueezed_scalar = False
67 ctx.input_sizes = tuple(map(lambda i: i.size(ctx.dim), inputs))
---> 68 return comm.gather(inputs, ctx.dim, ctx.target_device)
69
70 @staticmethod
~/anaconda3/lib/python3.7/site-packages/torch/cuda/comm.py in gather(tensors, dim, destination)
164 concatenating ``tensors`` along ``dim``.
165 """
--> 166 return torch._C._gather(tensors, dim, destination)
RuntimeError: Gather got an input of invalid size: got [1024, 7, 768], but expected [1024, 8, 768]
```
@sgugger | 05-23-2021 22:56:43 | 05-23-2021 22:56:43 | Without seeing the whole stack trace, your version of Transformers used (please follow the issue template!) or the code you are using to build your dataset, there is little we can do to help.<|||||>> Without seeing the whole stack trace, your version of Transformers used (please follow the issue template!) or the code you are using to build your dataset, there is little we can do to help.
Hi, the code is in https://colab.research.google.com/drive/1wmdjmU54iVSGXeJXzLO706uRzg2hlVZb?usp=sharing
At present, **I change the evaluate batch size to 1, and the prediction is successful. But it' s very slow.**
Note that I trained the model offline, not in the colab. I think maybe `Transformers` should provide a api to specify parallel traning model (the defaut is nn.DataParallel, however ... the bugs).
```
# load the best model, batch_size = 1 (for DDP bug, batch_size=8 get a error)
training_args2 = TrainingArguments(
output_dir='./results', # output directory
per_device_eval_batch_size=1, # batch size for evaluation
logging_dir='./logs', # directory for storing logs
logging_steps=100,
)
trainer2 = Trainer(
model=trainer.model, # the instantiated Transformers model to be trained
args=training_args2, # training arguments, defined above
compute_metrics=compute_metrics
)
```
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,832 | closed | Seq2seq-based model running slowly on TPU | ## Environment info
- `transformers` version: v4.5.1 and newest version
- Platform: Colab
- Python version: 3.8
- PyTorch version (GPU?): 1.8.1
- TPU v2-8
### Who can help
@patil-suraj @patrickvonplaten
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik @sgugger
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
I tried to finetune mBART, T5, and MarianMTModel, but all of them run slowly on Colab TPUv2. I think the cause may be in `Seq2SeqTrainer`, since the Trainer worked very well when I finetuned BERT on TPUv2 for MNLI text classification.
The problem arises when using:
* The official example scripts: I use exactly the example parameters from the README to train a translation model.
```
python xla_spawn.py --num_cores 8 \
seq2seq/run_translation.py \
--model_name_or_path Helsinki-NLP/opus-mt-en-ro \
--do_train \
--do_eval \
--source_lang en \
--target_lang ro \
--dataset_name wmt16 \
--dataset_config_name ro-en \
--output_dir /tmp/tst-translation \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir
```
I provide my [Colab notebook](https://colab.research.google.com/drive/1Y8kSbuZJ8ChIjgAf67F1cSciVaHfGTyq?usp=sharing)
The tasks I am working on is:
* an official GLUE/SQUaD task: WMT16
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
Currently, the duration of each training step increases many times over as training progresses. I would expect step times to be stable and faster than on GPU.
<!-- A clear and concise description of what you would expect to happen. -->
| 05-23-2021 15:37:33 | 05-23-2021 15:37:33 | I'm not sure if we ever tested whether those seq2seq models run correctly on TPU (@patil-suraj). It might be the case that a lot of the computations are dynamic and therefore are constantly re-compiled.<|||||>I realize that I haven't passed --pad_to_max_length to the script, which leads to our model running slowly. So I will close this issue. Thank you for your support and sorry about that.<|||||>I have tested T5 and Marian on colab TPU and they work well
@heraclex12 you are right, on TPU we should always pass `--pad_to_max_length` to avoid XLA re-compilation, and ideally `max_length` should be multiple of 8. |
transformers | 11,831 | closed | [docs] XLnet reference link bug in description of past_index Parameter of TrainingArguments | XLnet reference link bug in description of past_index Parameter of TrainingArguments
link to the doc: https://huggingface.co/transformers/main_classes/trainer.html#transformers.TrainingArguments
**current description:** Some models like TransformerXL or **:doc`XLNet <../model_doc/xlnet>`** can make use of the past hidden states for their predictions. If this argument is set to a positive int, the Trainer will use the corresponding output (usually index 2) as the past state and feed it to the model at the next training step under the keyword argument mems.
**expected description:** Some models like TransformerXL or **XLNet** can make use of the past hidden states for their predictions. If this argument is set to a positive int, the Trainer will use the corresponding output (usually index 2) as the past state and feed it to the model at the next training step under the keyword argument mems.
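For what it's worth, the literal `:doc` text in the rendered page suggests the Sphinx cross-reference role in the docstring is missing its trailing colon. A sketch of the likely one-character fix (assuming the docstring uses the standard `:doc:` role):
```rst
Some models like TransformerXL or :doc:`XLNet <../model_doc/xlnet>` can make use of the past
hidden states for their predictions.
```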
## Environment info
Not required
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
### Who can help
@sgugger
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
| 05-23-2021 05:30:08 | 05-23-2021 05:30:08 | Thanks for flagging! Should be fixed by the PR linked above! |
transformers | 11,830 | closed | Delete key or set to `None` in __getstate__ impl. | Hi,
there are some places that implement `__getstate__` because an object has a reference to another object that is not picklable.
`__getstate__` then "deletes" the reference by setting it to `None`. Just a few examples:
https://github.com/huggingface/transformers/blob/73fde1defe9be259a47b9024525882f3ec420994/src/transformers/models/m2m_100/tokenization_m2m_100.py#L272
https://github.com/huggingface/transformers/blob/73fde1defe9be259a47b9024525882f3ec420994/src/transformers/models/marian/tokenization_marian.py#L299
IMO it would be better to delete the keys instead of setting them to `None`. Like this: `del state["sp_model"]`
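A minimal toy sketch of the two variants, just to make the proposal concrete (not the actual tokenizer code):
```python
class CurrentStyle:
    """Keeps the key but blanks the value (what the tokenizers do today)."""

    def __init__(self):
        self.sp_model = object()  # stands in for the unpicklable SentencePiece processor

    def __getstate__(self):
        state = self.__dict__.copy()
        state["sp_model"] = None
        return state


class ProposedStyle:
    """Drops the key entirely, as proposed here."""

    def __init__(self):
        self.sp_model = object()

    def __getstate__(self):
        state = self.__dict__.copy()
        del state["sp_model"]
        return state
```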
What do you think @sgugger @LysandreJik ? I can provide a PR if wanted. | 05-23-2021 04:49:50 | 05-23-2021 04:49:50 | Hmmm - thinking about it - the value is also set to None by default in the constructor. I will close this... |
transformers | 11,829 | closed | [AutomaticSpeechRecognitionPipeline] CUDA support | ## Environment info
- `transformers` version: 4.6.0
- Platform: Linux-4.15.0-106-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu101 (True)
- Tensorflow version (GPU?): 2.3.1 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
- pipelines: @LysandreJik
## Information
Model I am using (Bert, XLNet ...): Wav2Vec2
The problem arises when using:
* [ X ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
* [ X ] Automatic Speech Recognition
## To reproduce
Steps to reproduce the behavior:
1. Instantiate AutomaticSpeechRecognitionPipeline with device set to GPU
2. Run pipeline inference on example audio input
```
import transformers
model = transformers.Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-xlsr-53-spanish")
tokenizer = transformers.AutoTokenizer.from_pretrained("facebook/wav2vec2-large-xlsr-53-spanish")
feature_extractor = transformers.AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-large-xlsr-53-spanish")
pl = transformers.AutomaticSpeechRecognitionPipeline(feature_extractor=feature_extractor, model=model, tokenizer=tokenizer, framework='pt',device=0)
pl('waveform.wav')
```
The snippet above results in the following error:
`RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same`
## Expected behavior
Inputs should be converted to CUDA tensors.
I believe this is happening because the feature extractor doesn't preserve the device.
I'm able to solve the issue if I add
`processed = self.ensure_tensor_on_device(**processed)`
after https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/automatic_speech_recognition.py#L136-L138
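A rough sketch of where that fix would sit, paraphrasing the preprocessing step of the pipeline (the exact surrounding code may differ from this):
```python
# inside AutomaticSpeechRecognitionPipeline.__call__ (paraphrased sketch, not the exact source)
processed = self.feature_extractor(
    inputs, sampling_rate=self.feature_extractor.sampling_rate, return_tensors="pt"
)
processed = self.ensure_tensor_on_device(**processed)  # proposed: move inputs to the pipeline's device
```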
If this solution is acceptable, I'm happy to open a PR. | 05-23-2021 02:14:13 | 05-23-2021 02:14:13 | Sure, feel free to open a PR! Thanks @francescorubbo |
transformers | 11,828 | closed | possible bug in `TokenizerFast` when setting `return_offset_mapping=True` | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.5.1, 4.6.1
- `tokenizers` version 0.10.2
- Platform: Linux/Ubuntu 18.04
- Python version: 3.9.1
- PyTorch version (GPU?): 1.7.1
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
- tokenizers: @n1t0, @LysandreJik
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
I am building a model (using Bert) which needs the `offsets` that map tokens back to the original words. However, if I set `return_offsets_mapping=True` in `BertTokenizerFast`, the returned encodings are not accepted by the model. Is this a bug or is it intended behavior?
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
The following code snippet reproduces the problem for me:
```python
from transformers import BertTokenizerFast, BertModel
if __name__ == '__main__':
test_string = 'text with percentage%'
tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')
encodings = tokenizer(test_string, return_offsets_mapping=True, return_tensors='pt')
print(encodings.keys())
model = BertModel.from_pretrained('bert-base-uncased')
out = model(**encodings)
```
I got the following error trace showing that `BertModel.forward()` does not accept `offset_mapping` which is included in the dict of `encodings`:
```
dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'offset_mapping'])
Traceback (most recent call last):
File "~/trans_test.py", line 9, in <module>
out = model(**tokens)
File "~/miniconda3/envs/tf/lib/python3.9/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
TypeError: forward() got an unexpected keyword argument 'offset_mapping'
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
Ideally, `Model.forward()` would accept (and ignore) a keyword argument for `offset_mapping`. For now, a workaround is to pop `offset_mapping` out of `encodings` before feeding the model. :(
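In the meantime, a minimal sketch of that workaround looks like this:
```python
encodings = tokenizer(test_string, return_offsets_mapping=True, return_tensors='pt')
offset_mapping = encodings.pop('offset_mapping')  # keep the offsets around for later use
out = model(**encodings)                          # the model only needs the remaining keys
```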
<!-- A clear and concise description of what you would expect to happen. -->
| 05-22-2021 19:41:39 | 05-22-2021 19:41:39 | Indeed, the model does not accept the `offset_mapping`, and does not need them for anything; so when using the standard BERT model, make sure you don't feed this value to the model.
If you're making a custom BERT model that accepts offset mappings, then you should also update the signature to handle them!<|||||>Ok. Thanks for reminding! |
transformers | 11,827 | closed | My modified `run_glue.py` works well with v4.1.1 but not good with v4.6.0 | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
`The environment in which the script doesn't work well`
- `transformers` version: 4.6.0
- Platform: Linux-5.4.0-72-generic-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
`The environment in which the script works well`
- `transformers` version: 4.1.1
- Platform: Linux-5.4.0-72-generic-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
Models:
- bert: @LysandreJik
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
## Information
Model I am using (Bert, XLNet ...): bert-base-cased
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Here are some modified parts from the official `run_glue.py` example.
The version of the `run_glue.py` was for v4.1.1.
https://github.com/huggingface/transformers/blob/v4.1.1/examples/text-classification/run_glue.py
``` python
# Preprocessing the datasets
if data_args.task_name is not None:
sentence1_key, sentence2_key = task_to_keys[data_args.task_name]
else:
# Again, we try to have some nice defaults but don't hesitate to tweak to your use case.
non_label_column_names = [name for name in datasets["train"].column_names if name != "label"]
if "sentence1" in non_label_column_names and "sentence2" in non_label_column_names:
sentence1_key, sentence2_key = "sentence1", "sentence2"
else:
if len(non_label_column_names) >= 2:
sentence1_key, sentence2_key = non_label_column_names[:2]
if sentence2_key == "id" or sentence2_key == "idx":
sentence2_key = None
else:
sentence1_key, sentence2_key = non_label_column_names[0], None
print(f"sentence1_key {sentence1_key}")
print(f"sentence2_key {sentence2_key}")
```
``` python
train_dataset = datasets["train"]
eval_dataset = datasets["validation_matched" if data_args.task_name == "mnli" else "validation"]
# if data_args.task_name is not None:
# test_dataset = datasets["test_matched" if data_args.task_name == "mnli" else "test"]
test_dataset = datasets["test_matched" if data_args.task_name == "mnli" else "test"]
# Log a few random samples from the training set:
for index in random.sample(range(len(train_dataset)), 3):
logger.info(f"Sample {index} of the training set: {train_dataset[index]}.")
# Get the metric function
if data_args.task_name is not None:
metric = load_metric("glue", data_args.task_name)
# TODO: When datasets metrics include regular accuracy, make an else here and remove special branch from
# compute_metrics
```
``` python
def compute_metrics(p: EvalPrediction):
preds = p.predictions[0] if isinstance(p.predictions, tuple) else p.predictions
preds = np.squeeze(preds) if is_regression else np.argmax(preds, axis=1)
if data_args.task_name is not None:
result = metric.compute(predictions=preds, references=p.label_ids)
if len(result) > 1:
result["combined_score"] = np.mean(list(result.values())).item()
return result
elif is_regression:
# return {"mse": ((preds - p.label_ids) ** 2).mean().item()}
# use the same metric as stsb (for pearsonr, spearmanr)
metric = load_metric("glue", "stsb")
result = metric.compute(predictions=preds, references=p.label_ids)
return result
else:
return {"accuracy": (preds == p.label_ids).astype(np.float32).mean().item()}
```
I tried to update my modified script referring to the latest version of `run_glue.py`, but it didn't solve the problem.
Steps to reproduce the behavior:
``` sh
(transformers4.1.1) $ CUDA_VISIBLE_DEVICES=0 python run_emobank_4.1.1.py \
--model_name_or_path bert-base-cased \
--train_file /path/to/train.csv \
--validation_file /path/to/validation.csv \
--test_file /path/to/test.csv \
--do_train \
--do_eval \
--do_predict \
--max_seq_length 64 \
--per_device_train_batch_size 32 \
--per_device_eval_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 10.0 \
--load_best_model_at_end \
--evaluation_strategy epoch \
--metric_for_best_model eval_pearson \
--output_dir /path/to/result/v4.1.1 \
--overwrite_output_dir
```
If I run the script with transformers 4.1.1, it works well.
With 4.6.0, it runs without any error, but the results are much worse.
`4.1.1 validation result`
```
eval_loss = 0.10065479576587677
eval_pearson = 0.6559863196369287
eval_spearmanr = 0.6244913632922552
epoch = 10.0
```
`4.6.0 validation result`
```
eval_loss = 0.16216666996479034
eval_pearson = 0.1785468733027603
eval_spearmanr = 0.18945952641568345
eval_runtime = 3.1986
eval_samples_per_second = 120.992
epoch = 10.0
```
## Expected behavior
Are there any tips for updating my own script, written for v4.1.1, so that it works with v4.6.0+?
| 05-22-2021 17:07:16 | 05-22-2021 17:07:16 | You should upgrade to 4.6.1, I think this is related to a bug fixed by #11785. Let us know if this doesn't solve your problem!<|||||>@sgugger
Thank you for the information!
I upgraded to 4.6.1 and tried running the script again, and now got the expected (or even better) result!
(I expected to reproduce the result of 4.1.1 because I used the same hyperparameters, but the result was better than that.)
Thank you again!
|
transformers | 11,826 | closed | feat: add contributor over time graph to README | Hi, community!
To better present how a community grows, we developed a tool that shows contributor growth history: [https://github.com/api7/contributor-graph](https://github.com/api7/contributor-graph). Since we found it helpful, we thought it might help other communities as well.
## WHAT IT IS
Basically, it just shows contributor growth over time, just like the stargazers-over-time chart on the README. As with stars, we would update the graph each day, so the link would always present real-time data. There is some other stuff to play around with if you would like to give it a try~

## HOW IT WORKS
We use the GitHub API to get all commits, try to find the "GitHub way" of filtering them so the resulting data is similar to GitHub's own numbers, and then take the first commit time of each user.
Don't hesitate to tell us if there is a better place to present this graph, or if there are other concerns or features you would like~🍻
| 05-22-2021 08:34:33 | 05-22-2021 08:34:33 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |