repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 13,834 | closed | include megatron_gpt2 in installed modules | This PR ensures `transformers/models/megatron_gpt2` gets installed - we need the conversion script in the normal install.
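For context, here is a generic `setuptools` sketch (not the actual transformers `setup.py`) of how subpackages end up in an installed distribution; `find_packages` only picks up directories that contain an `__init__.py`, which is one common way a subfolder can be left out of the wheel:
```python
from setuptools import find_packages, setup

setup(
    name="example-package",
    package_dir={"": "src"},
    # find_packages skips any folder without an __init__.py, so a subfolder
    # such as models/megatron_gpt2 can silently be left out of the install.
    packages=find_packages("src"),
)
```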
@sgugger | 10-01-2021 18:35:40 | 10-01-2021 18:35:40 | |
transformers | 13,833 | closed | Updating CITATION.cff to fix GitHub citation prompt BibTeX output. | # What does this PR do?
This is a small tweak to the `CITATION.cff` file to fix the output from the GitHub 'cite this repository' prompt, as [this is now supported](https://github.com/citation-file-format/ruby-cff/commit/5407f8e207c1f362568bb21e22865a512be9841f) in the Gem GitHub uses, and that gem is now live on GitHub.
Old BibTeX output:
```
@misc{Wolf_Transformers_StateoftheArt_Natural_2020,
author = {Wolf, Thomas and Debut, Lysandre and Sanh, Victor and Chaumond, Julien and Delangue, Clement and Moi, Anthony and Cistac, Perric and Ma, Clara and Jernite, Yacine and Plu, Julien and Xu, Canwen and Le Scao, Teven and Gugger, Sylvain and Drame, Mariama and Lhoest, Quentin and Rush, Alexander M.},
month = {10},
pages = {38--45},
title = {{Transformers: State-of-the-Art Natural Language Processing}},
url = {https://www.aclweb.org/anthology/2020.emnlp-demos.6},
year = {2020}
}
```
New BibTeX output:
```
@inproceedings{Wolf_Transformers_StateoftheArt_Natural_2020,
author = {Wolf, Thomas and Debut, Lysandre and Sanh, Victor and Chaumond, Julien and Delangue, Clement and Moi, Anthony and Cistac, Perric and Ma, Clara and Jernite, Yacine and Plu, Julien and Xu, Canwen and Le Scao, Teven and Gugger, Sylvain and Drame, Mariama and Lhoest, Quentin and Rush, Alexander M.},
month = {10},
pages = {38--45},
publisher = {Association for Computational Linguistics},
title = {{Transformers: State-of-the-Art Natural Language Processing}},
url = {https://www.aclweb.org/anthology/2020.emnlp-demos.6},
year = {2020}
}
```
Tagging @sgugger who merged the original PR that added a `CITATION.cff` file in https://github.com/huggingface/transformers/pull/13214.
| 10-01-2021 14:34:49 | 10-01-2021 14:34:49 | |
transformers | 13,832 | closed | AttributeError: type object 'EnglishDefaults' has no attribute 'create_tokenizer' | I am using OpenAI gpt2 and I am facing this error:
```
File "/home/anaconda/envs/nlp/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 1871, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/home/anaconda/envs/nlp/lib/python3.7/site-packages/transformers/models/openai/tokenization_openai.py", line 107, in __init__
self.nlp = _nlp.Defaults.create_tokenizer(_nlp)
AttributeError: type object 'EnglishDefaults' has no attribute 'create_tokenizer'
```
I have
```
tokenizers 0.10.3
transformers 4.11.2
```
Can you explain why this happens, and how to fix it? | 10-01-2021 13:52:33 | 10-01-2021 13:52:33 | I think that this functionality requires spaCy v2 and you have spaCy v3 installed (or the other way around). Try installing the opposite version.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,831 | closed | Add Tensorflow handling of ONNX conversion | Add tf2onnx and onnx packages to setup.py
Use them in convert.py to handle ONNX conversion of TF models.
Add tests of conversion to onnx for tensorflow models
# What does this PR do?
This PR develops the new feature mentioned in https://github.com/huggingface/transformers/issues/13534#issue-994256428
To do so, I had to add two packages to setup.py, but I am not sure what the policy is regarding the addition of new packages. Maybe it should be done otherwise, but I haven't found any relevant documentation. Therefore, I have decided to add setup.py to my commits, although I am aware you might want to do it differently. The two packages are tf2onnx and onnx.
The documentation might also have to be changed but I am not sure how.
Happy to have any feedback, this is my first PR :)
<!-- Remove if not applicable -->
Fixes # 994256428
## Before submitting
## Who can review?
@LysandreJik
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR. | 10-01-2021 13:10:08 | 10-01-2021 13:10:08 | Hi @Albertobegue thank you for this very nice contribution! We're currently working on a large refactor / enhancement of the ONNX export in #14700 so I'll come back to this PR once that's been merged :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @Albertobegue now that #14358 has been merged, would you mind rebasing your branch on `master` and resolving the merge conflicts in your PR? I'll then be happy to review it and give you some feedback :)<|||||>> Also, have you checked that the unit tests pass? You can run them with:
>
> ```
> RUN_SLOW=1 pytest tests/test_onnx_v2.py
> ```
>
> or if you just want to test a single architecture with
>
> ```
> RUN_SLOW=1 pytest tests/test_onnx_v2.py -k "distilbert" -rp
> ```
I have, yes, so I don't understand why the .circleci checks fail?
<|||||>Hey @Albertobegue is this ready for another review? If not, just ping me here when it it :)<|||||>> Hey @Albertobegue is this ready for another review? If not, just ping me here when it it :)
It is, yes, but I don't understand why the checks on circleci fail, since I don't think I have modified test_file_utils or anything imported by it. Maybe I'm wrong and I did while rebasing though.<|||||>> > Hey @Albertobegue is this ready for another review? If not, just ping me here when it it :)
>
> It is, yes, but I don't understand why the checks on circleci fail, since I don't think I have modified test_file_utils or anything imported by it. Maybe I'm wrong and I did while rebasing though.
Hey @Albertobegue sorry for the delay. Do you mind if I make a few commit to your branch to get the tests passing?<|||||>Of course, not. Thank you!<|||||>Hey @Albertobegue I've fixed the merge conflicts and most of the failing unit tests. There's some missing bits in the ONNX unit tests, but I'll ping you once I've added them :)
<|||||>Thanks for refactoring the export code @Albertobegue - this PR looks good to go on my side 🚀 !
Pinging @LysandreJik @michaelbenayoun and the TensorFlow team @Rocketknight1 @gante for their blessing too 😇 <|||||>Great contribution, thanks @Albertobegue !<|||||>Merging this now - will tackle the docs in a separate PR :) |
transformers | 13,830 | closed | Add on option to output a checkpoint every x minutes | # 🚀 Feature request
Extending `transformers.trainer_utils.IntervalStrategy` by an option `MINUTES` to be able to save a checkpoint based on training time.
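A rough sketch of the idea (the `MINUTES` member and the small helper below are hypothetical and only meant to illustrate; the existing members mirror `transformers.trainer_utils.IntervalStrategy`):
```python
import time
from enum import Enum


class IntervalStrategy(str, Enum):
    # existing members
    NO = "no"
    STEPS = "steps"
    EPOCH = "epoch"
    # hypothetical new member for wall-clock-based checkpointing
    MINUTES = "minutes"


def should_save(strategy: IntervalStrategy, last_save_time: float, save_every_minutes: float) -> bool:
    """Return True once enough wall-clock time has passed since the last checkpoint."""
    if strategy is not IntervalStrategy.MINUTES:
        return False
    return (time.monotonic() - last_save_time) >= save_every_minutes * 60
```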
## Motivation
For hyper-parameter searches over large batch sizes (where gradient accumulation is necessary), the time between steps can vary by orders of magnitude between different runs. This could lead to a very different number of written checkpoints for different runs. In this case it could be useful to have checkpoints written every x hours rather than every x steps.
## Contribution
Happy to submit a PR for that.
| 10-01-2021 12:19:58 | 10-01-2021 12:19:58 | cc @sgugger <|||||>This sounds like a useful feature, feel free to suggest a PR to add that!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Unstale<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Unstale<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,829 | closed | Fix warning situation: UserWarning: max_length is ignored when padding=True" | # What does this PR do?
Fixes #13826
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger @LysandreJik
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-01-2021 09:38:41 | 10-01-2021 09:38:41 | The change looks good to me.<|||||>> I don't think the warning should be just completely removed as it has some importance. We could check the `truncation` argument and leave the warning if it's not set to `True`.
You mean `truncation=False` and `max_length=10` => Should that not raise another warning, that `max_length` is ignored when you have set `truncation` to `False`.
I would argue that `max_length` and `padding=True` has no connection => hence, create no warning<|||||>I don't think users are confused by what is happening when `truncation=False`, they are confused by what happens when `padding=True`. Remember that most users don't read the doc so they don't know `padding=True` means "longest" and not "max length".<|||||>> I don't think users are confused by what is happening when `truncation=False`, they are confused by what happens when `padding=True`. Remember that most users don't read the doc so they don't know `padding=True` means "longest" and not "max length".
Yes, that makes sense.
So this warning can be raised when `truncation == False or truncation == 'do_not_truncate'`<|||||>Thank you for your comments.
I updated the code and title. |
transformers | 13,828 | closed | Image Segmentation pipeline | # What does this PR do?
## TLDR
Implements `image-segmentation` pipeline (for [DetrForSegmentation](https://huggingface.co/transformers/model_doc/detr.html#detrforsegmentation) atm).
## API specifications
Input: image source (identical to the input of `image-classification` & `object-detection` pipelines)
Output:
```python
List(Dict(
mask: str, # base64 str
label: str,
score: float,
))
```
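For illustration, calling the new pipeline could look roughly like this (the checkpoint name is just an example and the printed keys follow the spec above):
```python
from transformers import pipeline

# Example checkpoint; any DetrForSegmentation checkpoint should work here.
segmenter = pipeline("image-segmentation", model="facebook/detr-resnet-50-panoptic")

results = segmenter("http://images.cocodataset.org/val2017/000000039769.jpg")
for result in results:
    # result["mask"] is a base64-encoded PNG string, per the output spec above
    print(result["label"], result["score"])
```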
### Two design choices I've made and would like to discuss (& modify if needed):
1. Output `png_string` has masks information using the same mechanism as COCO panoptic segmentation annotations. See section `4. Panoptic Segmentation` from https://cocodataset.org/#format-data (a small decoding sketch follows below the figure). Paraphrasing a bit:
> per-pixel segment ids are stored in the PNG string. Each segment is assigned a unique id. Unlabeled pixels (void) are assigned a value of 0. Note that when you load the PNG as an RGB image, you will need to compute the ids via ids=R+G*256+B*256^2.
2. Image segmentation pipeline accepts `subtask` arg. There are different variations of segmentation task (semantic, instance, panoptic, etc. see image below). If a model doesn't implement requested subtask, it gets defaulted to what's available. See example below:
https://github.com/huggingface/transformers/blob/dd5c2697f129988bd9c70b52555e49dd32c78bd2/src/transformers/models/detr/feature_extraction_detr.py#L738-L739
<img width="400" src="https://user-images.githubusercontent.com/48327001/126451023-4edd68e4-c552-422a-9765-297a470d36c6.jpg">
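For reference, a minimal sketch of how the per-pixel segment ids can be recovered from such a PNG (the file name is a placeholder; this assumes the COCO panoptic id encoding described above):
```python
import numpy as np
from PIL import Image

# Placeholder path; in practice this is the PNG produced by the pipeline.
rgb = np.array(Image.open("panoptic_seg.png").convert("RGB"), dtype=np.uint32)

# COCO panoptic encoding: ids = R + G * 256 + B * 256 ** 2
segment_ids = rgb[..., 0] + rgb[..., 1] * 256 + rgb[..., 2] * 256 ** 2
print(np.unique(segment_ids))  # 0 marks unlabeled (void) pixels
```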
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
Discussed in huggingface/hub-docs#43, huggingface/hub-docs#6, https://github.com/huggingface/huggingface_hub/pull/378
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs),
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
#### Update 1:
Updated output shape:
```python
List(Dict(
mask: str, # base64 str
label: str,
score: float,
))
```
The goal for any segmentation architecture is to implement the most detailed version of segmentation subtask they can so that other subtasks can be reconstructed (if needed). For example, if an architecture post_process_segmentation method implements part-aware panoptic, other subtasks (including semantic, instance, etc.) can be reconstructed from part-aware panoptic output since all the details needed are in there | 10-01-2021 09:23:48 | 10-01-2021 09:23:48 | Please let me know if I should merge this PR @Narsil @NielsRogge @LysandreJik <|||||>It's gtg for me. |
transformers | 13,827 | closed | Bort always predict wrong words | Even on the API interface, Bort predicts wrong words in MaskedLM, for example here - https://huggingface.co/amazon/bort?text=Paris+is+the+%3Cmask%3E+of+France.
So there has to be some configuration bug that should be fixed.
The same happens when calling Bort using auto classes.
Could you take a look?
| 10-01-2021 09:03:35 | 10-01-2021 09:03:35 | Pinging @stefan-it here as he contributed this model<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @Oxi84 , we were not able to get mask prediction working, even with the original Gluonnlp implementation. So I think it's a good idea to disable the inference for now.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,826 | closed | Tokenizer - Raises wrong "UserWarning: `max_length` is ignored when `padding`=`True`" | In the newest version of transformers (4.11.2 & 4.12.0.dev0) I get the following warning:
```
C:\Anaconda3\envs\sbert\lib\site-packages\transformers\tokenization_utils_base.py:2227: UserWarning: `max_length` is ignored when `padding`=`True`.
warnings.warn("`max_length` is ignored when `padding`=`True`.")
```
Code to re-produce:
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
texts = ["Short sentence", "A really really really really really long sentence to test max length"]
output = tokenizer(texts, padding=True, truncation=True, max_length=5, return_tensors='pt')
print(output['input_ids'].shape)
output = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')
print(output['input_ids'].shape)
```
Output:
```
C:\Anaconda3\envs\sbert\lib\site-packages\transformers\tokenization_utils_base.py:2227: UserWarning: `max_length` is ignored when `padding`=`True`.
warnings.warn("`max_length` is ignored when `padding`=`True`.")
torch.Size([2, 5])
torch.Size([2, 14])
```
As we see, max_length is not ignored when padding = True. It truncates the text as expected to a max_length of 5.
I would say that the warning is incorrect and should not be raised.
Should I fix it?
Or is it really intended that max_length is ignored when padding=True? This would be horrible, I want to truncate my text to a certain max_length. | 10-01-2021 08:07:31 | 10-01-2021 08:07:31 | Issue is connected to this PR
https://github.com/huggingface/transformers/pull/13509
PR by @shirayu
Reviewed by @sgugger and @LysandreJik <|||||>I think it is right and the following line should be removed.
https://github.com/huggingface/transformers/blob/8bbb53e20b7873ba7f63be70d4d798e0c3568bfa/src/transformers/tokenization_utils_base.py#L2226-L2227
I must have misunderstood the behavior.
I thought the length was determined independently of the ``max_length`` value as the comment in the last line 2230.
https://github.com/huggingface/transformers/blob/8bbb53e20b7873ba7f63be70d4d798e0c3568bfa/src/transformers/tokenization_utils_base.py#L2223-L2230
Please let me know if it is correct to remove those two lines and I should submit a pull request for a fix.<|||||>max_length has impact on truncation.
E.g. you pass a 4 token and 50 token input text, max_length=10 => text is truncated to 10 tokens, i.e. you have now two texts, one with 4 tokens, one with 10 tokens.
Next, we have padding.
`True` and `'longest'` pads the text to 10 tokens. In that case, `padding='max_length'` would also pad to 10 tokens.
A difference is if you pass two texts with e.g. 4 and 6 tokens with `max_length=10`.
`True` and `'longest'` pads the text to 6 tokens
`padding='max_length'` would pad to 10 tokens.
Yes, the two lines you highlighted that creates the warning should be removed.
Would be great if you could create the PR.
Edit:
Here some code to show the difference:
```
from transformers import AutoTokenizer
import transformers
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
short_input = ["One Two", "One Two Three"]
long_input = ["Short sentence", "A really really really really really long sentence to test max length"]
output = tokenizer(short_input, padding=True, truncation=True, max_length=10, return_tensors='pt')
print(output['input_ids'].shape) #Output: 5 tokens
output = tokenizer(short_input, padding='max_length', truncation=True, max_length=10, return_tensors='pt')
print(output['input_ids'].shape) #Output: 10 tokens
output = tokenizer(long_input, padding=True, truncation=True, max_length=10, return_tensors='pt')
print(output['input_ids'].shape) #Truncated to 10 tokens & padded to 10 tokens
output = tokenizer(long_input, padding='max_length', truncation=True, max_length=10, return_tensors='pt')
print(output['input_ids'].shape) #Truncated to 10 tokens & padded to 10 tokens
```<|||||>Thank you for the comment.
I've created a pull request. |
transformers | 13,825 | closed | Consistent speech model input names for the Seq2SeqTrainer generate function | # 🚀 Feature request
Could we maybe have a consistent naming convention for speech models? So far we have:
- [`input_features`](https://huggingface.co/transformers/model_doc/speech_to_text.html#speech2textforconditionalgeneration)
- [`input_values`](https://huggingface.co/transformers/model_doc/wav2vec2.html#wav2vec2forctc)
- [`input_ids`](https://huggingface.co/transformers/model_doc/speech_to_text_2.html#speech2text2forcausallm)
From what I can tell, these are mostly the same for the purposes of how the `Seq2SeqTrainer` interprets them.
## Motivation
This would prevent the need for custom `Seq2SeqTrainer` classes and would make training more modular.
## Your contribution
A change in param names would do the trick but could break a lot of code. Alternatively adding the capability to accept different key values in the `generate` function [here](https://github.com/huggingface/transformers/blob/41436d3dfb98e0d17f018db29790b65663358edf/examples/legacy/seq2seq/seq2seq_trainer.py#L219) would work too using a (clunky) mapping such as `INPUT_MAPPING_LABELS = {"input_features": "input_ids", "input_values": "input_ids", "input_ids": "input_ids"}`.
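For illustration, a rough sketch of how such a mapping could be applied to a batch before calling `generate` (the helper function below is made up, not part of the library):
```python
INPUT_MAPPING_LABELS = {"input_features": "input_ids", "input_values": "input_ids", "input_ids": "input_ids"}


def normalize_main_input(batch: dict) -> dict:
    """Rename whichever main input key a speech model uses to a single canonical key."""
    return {INPUT_MAPPING_LABELS.get(key, key): value for key, value in batch.items()}
```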
| 10-01-2021 07:23:26 | 10-01-2021 07:23:26 | Yes indeed. @patrickvonplaten
Also, the `generate` method will be extended to also work with `pixel_values` (for models that generate text given an image). The general input of `generate` could then be called `inputs` (which could be `input_ids` for text, `input_features` for speech, `pixel_values` for images, etc.).
Using a mapping is probably the way to ensure backwards compatibility.<|||||>I can tackle this next week :-)<|||||>Related: https://github.com/huggingface/transformers/issues/14421<|||||>This is resolved now with https://github.com/huggingface/transformers/pull/14802/files no?<|||||>Hey there. Looks cool! One super edge case is `SpeechEncoderDecoderModel`.
For example, if we use a `Wav2Vec2Model` encoder and a `XLMRobertaForCausalLM` decoder, we would have `['input_ids', 'attention_mask']` when using either `processor.tokenizer.model_input_names` for Wav2Vec2 or `tokenizer.model_input_names` for XLMRoBERTa, when in fact the model requires `input_values` in the `generate` function.<|||||>@patrickvonplaten the only way I can think of a truly generalized solution without breaking changes or mappings, is using `inspect` on the `forward` function:
```python
import inspect
from transformers import SpeechEncoderDecoderModel
model = SpeechEncoderDecoderModel.from_pretrained('facebook/s2t-wav2vec2-large-en-de')
list(inspect.signature(model.forward).parameters)[0]
```
For the above example that would return `input_values` (as opposed to using `Speech2TextProcessor.tokenizer.model_input_names` which would show `input_ids`). But like I say, super edge case and happy to leave it as is.<|||||>Great catch @OllieBroadhurst! I'll include a fix in https://github.com/huggingface/transformers/pull/14856 . Hope to have it merged by tomorrow |
transformers | 13,824 | closed | :sparkles: update image classification example | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #13663
- removed `image_size` arg
- chose to use feature extractor to define the params of the torchvision transforms
- If you want to override the default of 224, it probably means you're pre-training. You can define a preprocessor config and pass `--feature_extractor_name preprocessor_config.json` if needed.
Fixes #13802
- Updated torch to be `torch>=1.5.0`, as that was the lowest version that worked for me.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-01-2021 07:21:21 | 10-01-2021 07:21:21 | @nateraw,
I went back to the deepspeed model zoo PR from some months back https://github.com/huggingface/transformers/pull/12695
where I used `--image_size=30` to keep the test as light as possible. 224 is pretty heavy. so now these tests break as I still had the old arg.
what's the easiest way to set image_size=30 in the new way?
I don't see this being documented at: https://github.com/huggingface/transformers/tree/master/examples/pytorch/image-classification
You did mention `--feature_extractor_name preprocessor_config.json` but I have no idea what to put in the config file. Could this please be documented in the corresponding README.md of the example?
Thank you!
<|||||>So what was needed is:
```
$ cat vit_feature_extractor.json
{
"feature_extractor_type": "ViTFeatureExtractor",
"size": 30
}
$ ... run_image_classification.py ... --feature_extractor_name vit_feature_extractor.json
```
It'd probably be useful to put in `README.md` as it's far from trivial to figure out from our existing docs.
|
transformers | 13,823 | closed | Add MarianMT to models exportable with ONNX | # 🚀 Feature request
Add the support to convert the MarianMT model with the `transformers.onnx` package documented [here](https://huggingface.co/transformers/serialization.html). The conversion now returns `marian () is not supported yet. Only [...] are supported. If you want to support (marian) please propose a PR or open up an issue.`
## Motivation
MarianMT is one of the best translation models in the hub because of the extensive number of pretrained language pairs, but it can be slow for real-time use cases. A conversion to ONNX combined with quantization could significantly improve inference time.
| 10-01-2021 07:21:04 | 10-01-2021 07:21:04 | Hi!
To support `MarianMT` with ONNX, we'll need to add a `MarianOnnxConfig`, similar to `BartOnnxConfig`:
https://github.com/huggingface/transformers/blob/8bbb53e20b7873ba7f63be70d4d798e0c3568bfa/src/transformers/models/bart/configuration_bart.py#L183
Marian is very similar to BARTs, so adding a similar config as Bart should work. Would you like to open a PR to add this?
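A minimal sketch of what such a config could look like (illustrative only, not the final API; it mirrors the BART inputs/outputs quoted further down in this thread):
```python
from collections import OrderedDict
from typing import Mapping

from transformers.onnx import OnnxConfig


class MarianOnnxConfig(OnnxConfig):
    @property
    def inputs(self) -> Mapping[str, Mapping[int, str]]:
        return OrderedDict(
            [
                ("input_ids", {0: "batch", 1: "sequence"}),
                ("attention_mask", {0: "batch", 1: "sequence"}),
            ]
        )

    @property
    def outputs(self) -> Mapping[str, Mapping[int, str]]:
        return OrderedDict(
            [
                ("last_hidden_state", {0: "batch", 1: "sequence"}),
                ("encoder_last_hidden_state", {0: "batch", 1: "sequence"}),
            ]
        )
```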
cc @michaelbenayoun <|||||>Hi @patil-suraj !
I would be glad to contribute with a PR, but there are some points I don't understand.
For example, in BART the config defines these inputs:
```
OrderedDict(
[
("input_ids", {0: "batch", 1: "sequence"}),
("attention_mask", {0: "batch", 1: "sequence"}),
]
```
but BartModel takes many other inputs in its forward() method:
```
input_ids=None,
attention_mask=None,
decoder_input_ids=None,
decoder_attention_mask=None,
head_mask=None,
decoder_head_mask=None,
cross_attn_head_mask=None,
encoder_outputs=None,
past_key_values=None,
inputs_embeds=None,
decoder_inputs_embeds=None,
use_cache=None,
output_attentions=None,
output_hidden_states=None,
return_dict=None,
```
and the same happens for the outputs.
```
OrderedDict(
[
("last_hidden_state", {0: "batch", 1: "sequence"}),
("past_keys", {0: "batch", 2: "sequence"}),
("encoder_last_hidden_state", {0: "batch", 1: "sequence"}),
]
```
```
last_hidden_state=decoder_outputs.last_hidden_state,
past_key_values=decoder_outputs.past_key_values,
decoder_hidden_states=decoder_outputs.hidden_states,
decoder_attentions=decoder_outputs.attentions,
cross_attentions=decoder_outputs.cross_attentions,
encoder_last_hidden_state=encoder_outputs.last_hidden_state,
encoder_hidden_states=encoder_outputs.hidden_states,
encoder_attentions=encoder_outputs.attentions,
```
What is the criterion to choose the keys to keep?<|||||>@michaelbenayoun is the onnx pro, so I will let him answer :) <|||||>@patil-suraj I'm trying with a basic configuration but I get `ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds` when `convert()` is called, because it calls `MarianMTModel.forward()` that doesn't work with the output generated by `MarianTokenizer`. The tokenizer only creates `input_ids` and `attention_mask`, which is fine when `MarianMTModel.generate()` is called, but apparently is not sufficient for `MarianMTModel.forward()`, which is weird. Is this a bug or I need to generate the `decoder_input_ids` in some way inside `generate_dummy_inputs`?<|||||>I have gone past that issue by adding the code to automatically generate `decoder_input_ids` in `MarianModel.forward()`
Now I am able to correctly convert the model to ONNX, but when doing inference I have 2 problems:
1. Dynamic batching doesn't work, it only works with the batch_size used during conversion.
2. The model doesn't generate correct sentences, only the first word is correct.
Here's the inference code:
```
import numpy as np
import onnxruntime as ort
from transformers import MarianTokenizer

# tokenizer_name, model_name and sample_text are defined elsewhere in my script
tokenizer = MarianTokenizer.from_pretrained(tokenizer_name)
inputs = tokenizer(text=[sample_text] * 2, return_tensors="np", padding=True)
ort_session = ort.InferenceSession(model_name, ort.SessionOptions())
outputs = ort_session.run(["logits"], dict(inputs))
outputs = np.argmax(outputs[0], axis=-1)
words = tokenizer.batch_decode(outputs, skip_special_tokens=True)
print(words)
```
Could you please take a look at the PR and see what is causing the issues?
@patil-suraj @michaelbenayoun <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@lewtun and @NielsRogge have also expressed their interest in helping out on ONNX issues so pinging them here<|||||>Thanks for the ping @LysandreJik - happy to look into the issue :)<|||||>Hey @Maxinho96 do you mind if I continue working on your branch in #13854? This will allow your contribution to be accounted for once we eventually merge the support for MarianMT models :)<|||||>> Hey @Maxinho96 do you mind if I continue working on your branch in #13854? This will allow your contribution to be accounted for once we eventually merge the support for MarianMT models :)
Sure, thank you 🙏 |
transformers | 13,822 | closed | Could not load from pretrain even with the same code | environment: pytorch_transformers 1.1.0
I copy the code of `RobertaModel` from `pytorch_transformers.modeling_roberta` to a local directory with minimal requirements.
```
from pytorch_transformers.modeling_bert import BertModel
from pytorch_transformers.modeling_roberta import RobertaConfig, ROBERTA_PRETRAINED_MODEL_ARCHIVE_MAP, RobertaEmbeddings
class RobertaModel(BertModel):
config_class = RobertaConfig
pretrained_model_archive_map = ROBERTA_PRETRAINED_MODEL_ARCHIVE_MAP
base_model_prefix = "roberta"
def __init__(self, config):
super(RobertaModel, self).__init__(config)
self.embeddings = RobertaEmbeddings(config)
self.init_weights()
def forward(self, input_ids, token_type_ids=None, attention_mask=None, position_ids=None, head_mask=None):
if input_ids[:, 0].sum().item() != 0:
print("A sequence with no special tokens has been passed to the RoBERTa model. "
"This model requires special tokens in order to work. "
"Please specify add_special_tokens=True in your encoding.")
return super(RobertaModel, self).forward(input_ids, token_type_ids, attention_mask, position_ids, head_mask)
```
And I failed to load from pretrain in my main script:
```
from MyRoberta.RobertaTest import RobertaModel
bert_model = RobertaModel.from_pretrained(args.roberta_model)
```
exception:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/conda/envs/bs/lib/python3.6/site-packages/pytorch_transformers/modeling_utils.py", line 474, in from_pretrained
model = cls(config, *model_args, **model_kwargs)
File ".../MyRoberta/RobertaTest.py", line 51, in __init__
self.init_weights()
TypeError: init_weights() missing 1 required positional argument: 'module'
```
What's wrong? It should load successfully.
The reason I made the copy is that, I want to add a new embedding into roberta model, so I begin with running the original roberta code in my directory.
Thanks a lot. | 10-01-2021 03:26:29 | 10-01-2021 03:26:29 | Problem solved using transformers instead of pytorch_transformers.
But I'm still curious about the reason behind such problem.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,821 | closed | mT5 TensorFlow error - Attempt to convert a value (None) with an unsupported type | ## Environment info
- `transformers` version: 4.11.2
- Platform: Linux-5.11.0-37-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.6.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten
## Information
Model I am using is mT5:
The problem arises when running the _transformers/examples/tensorflow/translation/run_translation.py_ file.
I made this modification to get the machine translation to run:
```
source_lang = data_args.source_lang.split("_")[0]
target_lang = data_args.target_lang.split("_")[0]
```
Modified to:
```
source_lang = data_args.source_lang
target_lang = data_args.target_lang
```
And then I ran the script with these parameters:
`--do_train True --model_name_or_path google/mt5-base --tokenizer_name google/mt5-base --output_dir output --dataset_name ccaligned_multilingual --dataset_config_name sentences-ak_GH --source_lang en_XX --target_lang ak_GH`
And I ran into the error:
`ValueError: Attempt to convert a value (None) with an unsupported type (<class 'NoneType'>) to a Tensor.`
Changing the model and tokenizer from `google/mt5-base` to `t5-base` will fix the error I am getting, so I think it's specific to the mT5 model.
I appreciate any help or advice, I really like this library so far!
## Full error
```
/home/gcervantes/Desktop/work/python_envs/huggingface/bin/python /home/gcervantes/Desktop/work/Code/transformers/examples/tensorflow/translation/run_translation.py --do_train True --model_name_or_path google/mt5-base --tokenizer_name google/mt5-base --output_dir output --dataset_name ccaligned_multilingual --dataset_config_name sentences-ak_GH --source_lang en_XX --target_lang ak_GH
2021-09-30 17:00:16.749595: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2021-09-30 17:00:16.749613: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
09/30/2021 17:00:17 - INFO - __main__ - Training/evaluation parameters TFTrainingArguments(
_n_gpu=-1,
adafactor=False,
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e-08,
dataloader_drop_last=False,
dataloader_num_workers=0,
dataloader_pin_memory=True,
ddp_find_unused_parameters=None,
debug=[],
deepspeed=None,
disable_tqdm=False,
do_eval=False,
do_predict=False,
do_train=True,
eval_accumulation_steps=None,
eval_steps=None,
evaluation_strategy=IntervalStrategy.NO,
fp16=False,
fp16_backend=auto,
fp16_full_eval=False,
fp16_opt_level=O1,
gcp_project=None,
gradient_accumulation_steps=1,
gradient_checkpointing=False,
greater_is_better=None,
group_by_length=False,
hub_model_id=None,
hub_strategy=HubStrategy.EVERY_SAVE,
hub_token=None,
ignore_data_skip=False,
label_names=None,
label_smoothing_factor=0.0,
learning_rate=5e-05,
length_column_name=length,
load_best_model_at_end=False,
local_rank=-1,
log_level=-1,
log_level_replica=-1,
log_on_each_node=True,
logging_dir=output/runs/Sep30_17-00-17_nb24862,
logging_first_step=False,
logging_nan_inf_filter=True,
logging_steps=500,
logging_strategy=IntervalStrategy.STEPS,
lr_scheduler_type=SchedulerType.LINEAR,
max_grad_norm=1.0,
max_steps=-1,
metric_for_best_model=None,
mp_parameters=,
no_cuda=False,
num_train_epochs=3.0,
output_dir=output,
overwrite_output_dir=False,
past_index=-1,
per_device_eval_batch_size=8,
per_device_train_batch_size=8,
poly_power=1.0,
prediction_loss_only=False,
push_to_hub=False,
push_to_hub_model_id=None,
push_to_hub_organization=None,
push_to_hub_token=None,
remove_unused_columns=True,
report_to=['tensorboard'],
resume_from_checkpoint=None,
run_name=output,
save_on_each_node=False,
save_steps=500,
save_strategy=IntervalStrategy.STEPS,
save_total_limit=None,
seed=42,
sharded_ddp=[],
skip_memory_metrics=True,
tpu_metrics_debug=False,
tpu_name=None,
tpu_num_cores=None,
tpu_zone=None,
use_legacy_prediction_loop=False,
warmup_ratio=0.0,
warmup_steps=0,
weight_decay=0.0,
xla=False,
)
09/30/2021 17:00:18 - INFO - datasets.load - Found main folder for dataset https://raw.githubusercontent.com/huggingface/datasets/1.12.1/datasets/ccaligned_multilingual/ccaligned_multilingual.py at /home/gcervantes/.cache/huggingface/modules/datasets_modules/datasets/ccaligned_multilingual
09/30/2021 17:00:18 - INFO - datasets.load - Found specific version folder for dataset https://raw.githubusercontent.com/huggingface/datasets/1.12.1/datasets/ccaligned_multilingual/ccaligned_multilingual.py at /home/gcervantes/.cache/huggingface/modules/datasets_modules/datasets/ccaligned_multilingual/ecebf2fba25342d63934850b389502a24fb3d61845e74643a416e06c773ffa36
09/30/2021 17:00:18 - INFO - datasets.load - Found script file from https://raw.githubusercontent.com/huggingface/datasets/1.12.1/datasets/ccaligned_multilingual/ccaligned_multilingual.py to /home/gcervantes/.cache/huggingface/modules/datasets_modules/datasets/ccaligned_multilingual/ecebf2fba25342d63934850b389502a24fb3d61845e74643a416e06c773ffa36/ccaligned_multilingual.py
09/30/2021 17:00:18 - INFO - datasets.load - Found dataset infos file from https://raw.githubusercontent.com/huggingface/datasets/1.12.1/datasets/ccaligned_multilingual/dataset_infos.json to /home/gcervantes/.cache/huggingface/modules/datasets_modules/datasets/ccaligned_multilingual/ecebf2fba25342d63934850b389502a24fb3d61845e74643a416e06c773ffa36/dataset_infos.json
09/30/2021 17:00:18 - INFO - datasets.load - Found metadata file for dataset https://raw.githubusercontent.com/huggingface/datasets/1.12.1/datasets/ccaligned_multilingual/ccaligned_multilingual.py at /home/gcervantes/.cache/huggingface/modules/datasets_modules/datasets/ccaligned_multilingual/ecebf2fba25342d63934850b389502a24fb3d61845e74643a416e06c773ffa36/ccaligned_multilingual.json
09/30/2021 17:00:18 - INFO - datasets.info - Loading Dataset Infos from /home/gcervantes/.cache/huggingface/modules/datasets_modules/datasets/ccaligned_multilingual/ecebf2fba25342d63934850b389502a24fb3d61845e74643a416e06c773ffa36
09/30/2021 17:00:18 - INFO - datasets.builder - Overwrite dataset info from restored data version.
09/30/2021 17:00:18 - INFO - datasets.info - Loading Dataset info from /home/gcervantes/.cache/huggingface/datasets/ccaligned_multilingual/sentences-ak_GH/1.0.0/ecebf2fba25342d63934850b389502a24fb3d61845e74643a416e06c773ffa36
09/30/2021 17:00:18 - WARNING - datasets.builder - Reusing dataset ccaligned_multilingual (/home/gcervantes/.cache/huggingface/datasets/ccaligned_multilingual/sentences-ak_GH/1.0.0/ecebf2fba25342d63934850b389502a24fb3d61845e74643a416e06c773ffa36)
09/30/2021 17:00:18 - INFO - datasets.info - Loading Dataset info from /home/gcervantes/.cache/huggingface/datasets/ccaligned_multilingual/sentences-ak_GH/1.0.0/ecebf2fba25342d63934850b389502a24fb3d61845e74643a416e06c773ffa36
100%|██████████| 1/1 [00:00<00:00, 899.29it/s]
loading configuration file https://huggingface.co/google/mt5-base/resolve/main/config.json from cache at /home/gcervantes/.cache/huggingface/transformers/5ebfd830555547194403d6803baa127970de59b443c04b7a1a60b16a97ed3958.b589da7dac64196f9764abaf2c4c7e507cec8b14b96da3ef270d924f155062de
Model config MT5Config {
"_name_or_path": "/home/patrick/hugging_face/t5/mt5-base",
"architectures": [
"MT5ForConditionalGeneration"
],
"d_ff": 2048,
"d_kv": 64,
"d_model": 768,
"decoder_start_token_id": 0,
"dropout_rate": 0.1,
"eos_token_id": 1,
"feed_forward_proj": "gated-gelu",
"initializer_factor": 1.0,
"is_encoder_decoder": true,
"layer_norm_epsilon": 1e-06,
"model_type": "mt5",
"num_decoder_layers": 12,
"num_heads": 12,
"num_layers": 12,
"output_past": true,
"pad_token_id": 0,
"relative_attention_num_buckets": 32,
"tie_word_embeddings": false,
"tokenizer_class": "T5Tokenizer",
"transformers_version": "4.11.0.dev0",
"use_cache": true,
"vocab_size": 250112
}
loading configuration file https://huggingface.co/google/mt5-base/resolve/main/config.json from cache at /home/gcervantes/.cache/huggingface/transformers/5ebfd830555547194403d6803baa127970de59b443c04b7a1a60b16a97ed3958.b589da7dac64196f9764abaf2c4c7e507cec8b14b96da3ef270d924f155062de
Model config MT5Config {
"_name_or_path": "/home/patrick/hugging_face/t5/mt5-base",
"architectures": [
"MT5ForConditionalGeneration"
],
"d_ff": 2048,
"d_kv": 64,
"d_model": 768,
"decoder_start_token_id": 0,
"dropout_rate": 0.1,
"eos_token_id": 1,
"feed_forward_proj": "gated-gelu",
"initializer_factor": 1.0,
"is_encoder_decoder": true,
"layer_norm_epsilon": 1e-06,
"model_type": "mt5",
"num_decoder_layers": 12,
"num_heads": 12,
"num_layers": 12,
"output_past": true,
"pad_token_id": 0,
"relative_attention_num_buckets": 32,
"tie_word_embeddings": false,
"tokenizer_class": "T5Tokenizer",
"transformers_version": "4.11.0.dev0",
"use_cache": true,
"vocab_size": 250112
}
loading file https://huggingface.co/google/mt5-base/resolve/main/spiece.model from cache at /home/gcervantes/.cache/huggingface/transformers/4764ec347af4d2d6286acbe1d9d630ac0afd8554a4c4a64170e0b663fd2e2412.84ea7af2df68dc8db434d3160aab65cce8ac63ce5b6f7743f8c9a4a14b4f77e2
loading file https://huggingface.co/google/mt5-base/resolve/main/tokenizer.json from cache at None
loading file https://huggingface.co/google/mt5-base/resolve/main/added_tokens.json from cache at None
loading file https://huggingface.co/google/mt5-base/resolve/main/special_tokens_map.json from cache at /home/gcervantes/.cache/huggingface/transformers/0d7d5b3fc19bf58d4b274990c8bcf5e307726bc18d95f40a1436dfb6a0892f85.294ebaa4cd17bb284635004c92d2c4d522ec488c828dcce0c2471b6f28e3fe82
loading file https://huggingface.co/google/mt5-base/resolve/main/tokenizer_config.json from cache at /home/gcervantes/.cache/huggingface/transformers/afba33be693521ccefbde6d03b93b5c517d7108ba31f6c08000ed52c2cea45c9.28bbf90ae7962b1b7211c0ce8b2006f968c82439ec9c47e0847ba63642f9435a
loading configuration file https://huggingface.co/google/mt5-base/resolve/main/config.json from cache at /home/gcervantes/.cache/huggingface/transformers/5ebfd830555547194403d6803baa127970de59b443c04b7a1a60b16a97ed3958.b589da7dac64196f9764abaf2c4c7e507cec8b14b96da3ef270d924f155062de
Model config MT5Config {
"_name_or_path": "/home/patrick/hugging_face/t5/mt5-base",
"architectures": [
"MT5ForConditionalGeneration"
],
"d_ff": 2048,
"d_kv": 64,
"d_model": 768,
"decoder_start_token_id": 0,
"dropout_rate": 0.1,
"eos_token_id": 1,
"feed_forward_proj": "gated-gelu",
"initializer_factor": 1.0,
"is_encoder_decoder": true,
"layer_norm_epsilon": 1e-06,
"model_type": "mt5",
"num_decoder_layers": 12,
"num_heads": 12,
"num_layers": 12,
"output_past": true,
"pad_token_id": 0,
"relative_attention_num_buckets": 32,
"tie_word_embeddings": false,
"tokenizer_class": "T5Tokenizer",
"transformers_version": "4.11.0.dev0",
"use_cache": true,
"vocab_size": 250112
}
loading configuration file https://huggingface.co/google/mt5-base/resolve/main/config.json from cache at /home/gcervantes/.cache/huggingface/transformers/5ebfd830555547194403d6803baa127970de59b443c04b7a1a60b16a97ed3958.b589da7dac64196f9764abaf2c4c7e507cec8b14b96da3ef270d924f155062de
Model config MT5Config {
"_name_or_path": "/home/patrick/hugging_face/t5/mt5-base",
"architectures": [
"MT5ForConditionalGeneration"
],
"d_ff": 2048,
"d_kv": 64,
"d_model": 768,
"decoder_start_token_id": 0,
"dropout_rate": 0.1,
"eos_token_id": 1,
"feed_forward_proj": "gated-gelu",
"initializer_factor": 1.0,
"is_encoder_decoder": true,
"layer_norm_epsilon": 1e-06,
"model_type": "mt5",
"num_decoder_layers": 12,
"num_heads": 12,
"num_layers": 12,
"output_past": true,
"pad_token_id": 0,
"relative_attention_num_buckets": 32,
"tie_word_embeddings": false,
"tokenizer_class": "T5Tokenizer",
"transformers_version": "4.11.0.dev0",
"use_cache": true,
"vocab_size": 250112
}
09/30/2021 17:00:22 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/gcervantes/.cache/huggingface/datasets/ccaligned_multilingual/sentences-ak_GH/1.0.0/ecebf2fba25342d63934850b389502a24fb3d61845e74643a416e06c773ffa36/cache-d7a5cf279d2e727e.arrow
Tensorflow: setting up strategy
2021-09-30 17:00:22.340094: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-09-30 17:00:22.340493: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2021-09-30 17:00:22.340535: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcublas.so.11'; dlerror: libcublas.so.11: cannot open shared object file: No such file or directory
2021-09-30 17:00:22.340572: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcublasLt.so.11'; dlerror: libcublasLt.so.11: cannot open shared object file: No such file or directory
2021-09-30 17:00:22.340609: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcufft.so.10'; dlerror: libcufft.so.10: cannot open shared object file: No such file or directory
2021-09-30 17:00:22.340646: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcurand.so.10'; dlerror: libcurand.so.10: cannot open shared object file: No such file or directory
2021-09-30 17:00:22.340682: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcusolver.so.11'; dlerror: libcusolver.so.11: cannot open shared object file: No such file or directory
2021-09-30 17:00:22.340718: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcusparse.so.11'; dlerror: libcusparse.so.11: cannot open shared object file: No such file or directory
2021-09-30 17:00:22.340754: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudnn.so.8'; dlerror: libcudnn.so.8: cannot open shared object file: No such file or directory
2021-09-30 17:00:22.340762: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1835] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
2021-09-30 17:00:22.341064: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
loading weights file https://huggingface.co/google/mt5-base/resolve/main/tf_model.h5 from cache at /home/gcervantes/.cache/huggingface/transformers/41c2fc682e5acee0c74105c9950da8f133eef8879ef0e2e2edd37c4d237da2ee.ffac6e54739b6e6cd3d9e8b6671a9514d3b1b755459a51fdc1749d110e5a5a1d.h5
2021-09-30 17:00:22.636446: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
All model checkpoint layers were used when initializing TFMT5ForConditionalGeneration.
All the layers of TFMT5ForConditionalGeneration were initialized from the model checkpoint at google/mt5-base.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFMT5ForConditionalGeneration for predictions without further training.
Traceback (most recent call last):
File "/home/gcervantes/Desktop/work/Code/transformers/examples/tensorflow/translation/run_translation.py", line 622, in <module>
main()
File "/home/gcervantes/Desktop/work/Code/transformers/examples/tensorflow/translation/run_translation.py", line 493, in main
model.resize_token_embeddings(len(tokenizer))
File "/home/gcervantes/Desktop/work/Code/transformers/src/transformers/modeling_tf_utils.py", line 856, in resize_token_embeddings
model_embeds = self._resize_token_embeddings(new_num_tokens)
File "/home/gcervantes/Desktop/work/Code/transformers/src/transformers/modeling_tf_utils.py", line 901, in _resize_token_embeddings
new_lm_head_decoder = self._get_resized_lm_head_decoder(old_lm_head_decoder, new_num_tokens)
File "/home/gcervantes/Desktop/work/Code/transformers/src/transformers/modeling_tf_utils.py", line 981, in _get_resized_lm_head_decoder
self._get_word_embedding_weight(self.get_input_embeddings()) == old_lm_head_decoder
File "/home/gcervantes/Desktop/work/python_envs/huggingface/lib/python3.8/site-packages/tensorflow/python/ops/variables.py", line 1092, in __eq__
return gen_math_ops.equal(self, other, incompatible_shape_error=False)
File "/home/gcervantes/Desktop/work/python_envs/huggingface/lib/python3.8/site-packages/tensorflow/python/ops/gen_math_ops.py", line 3208, in equal
return equal_eager_fallback(
File "/home/gcervantes/Desktop/work/python_envs/huggingface/lib/python3.8/site-packages/tensorflow/python/ops/gen_math_ops.py", line 3237, in equal_eager_fallback
_attr_T, _inputs_T = _execute.args_to_matching_eager([x, y], ctx, [])
File "/home/gcervantes/Desktop/work/python_envs/huggingface/lib/python3.8/site-packages/tensorflow/python/eager/execute.py", line 273, in args_to_matching_eager
tensor = ops.convert_to_tensor(
File "/home/gcervantes/Desktop/work/python_envs/huggingface/lib/python3.8/site-packages/tensorflow/python/profiler/trace.py", line 163, in wrapped
return func(*args, **kwargs)
File "/home/gcervantes/Desktop/work/python_envs/huggingface/lib/python3.8/site-packages/tensorflow/python/framework/ops.py", line 1566, in convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "/home/gcervantes/Desktop/work/python_envs/huggingface/lib/python3.8/site-packages/tensorflow/python/framework/constant_op.py", line 346, in _constant_tensor_conversion_function
return constant(v, dtype=dtype, name=name)
File "/home/gcervantes/Desktop/work/python_envs/huggingface/lib/python3.8/site-packages/tensorflow/python/framework/constant_op.py", line 271, in constant
return _constant_impl(value, dtype, shape, name, verify_shape=False,
File "/home/gcervantes/Desktop/work/python_envs/huggingface/lib/python3.8/site-packages/tensorflow/python/framework/constant_op.py", line 283, in _constant_impl
return _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
File "/home/gcervantes/Desktop/work/python_envs/huggingface/lib/python3.8/site-packages/tensorflow/python/framework/constant_op.py", line 308, in _constant_eager_impl
t = convert_to_eager_tensor(value, ctx, dtype)
File "/home/gcervantes/Desktop/work/python_envs/huggingface/lib/python3.8/site-packages/tensorflow/python/framework/constant_op.py", line 106, in convert_to_eager_tensor
return ops.EagerTensor(value, ctx.device_name, dtype)
ValueError: Attempt to convert a value (None) with an unsupported type (<class 'NoneType'>) to a Tensor.
Process finished with exit code 1
```
| 09-30-2021 23:26:00 | 09-30-2021 23:26:00 | Hey @gcervantes8,
I don't really understand why this change is needed:
```
source_lang = data_args.source_lang.split("_")[0]
target_lang = data_args.target_lang.split("_")[0]
```
What error do you get without making this change? Can you maybe copy-paste the command you run that gives you the error **without** having made the above changes?
<|||||>Hey @patrickvonplaten thanks for the help!
Without making the change I get this error:
```
Traceback (most recent call last):
File "/home/gcervantes/Desktop/work/Code/transformers/examples/tensorflow/translation/run_translation.py", line 620, in <module>
main()
File "/home/gcervantes/Desktop/work/Code/transformers/examples/tensorflow/translation/run_translation.py", line 450, in main
train_dataset = train_dataset.map(
File "/home/gcervantes/Desktop/work/python_envs/huggingface/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1686, in map
return self._map_single(
File "/home/gcervantes/Desktop/work/python_envs/huggingface/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/gcervantes/Desktop/work/python_envs/huggingface/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper
out = func(self, *args, **kwargs)
File "/home/gcervantes/Desktop/work/python_envs/huggingface/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2048, in _map_single
batch = apply_function_on_filtered_inputs(
File "/home/gcervantes/Desktop/work/python_envs/huggingface/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1939, in apply_function_on_filtered_inputs
function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
File "/home/gcervantes/Desktop/work/Code/transformers/examples/tensorflow/translation/run_translation.py", line 424, in preprocess_function
inputs = [ex[source_lang] for ex in examples["translation"]]
File "/home/gcervantes/Desktop/work/Code/transformers/examples/tensorflow/translation/run_translation.py", line 424, in <listcomp>
inputs = [ex[source_lang] for ex in examples["translation"]]
KeyError: 'en'
Process finished with exit code 1
```
I do this because in the `ccaligned_multilingual` data, the keys used in the JSON file are `ak_GH` and `en_XX`.
The original code strips everything after the `_` from the language code, so in this example it looks up `en` and `ak`, which raises a `KeyError`.
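A tiny standalone illustration of the mismatch (the keys are the ones from the dataset, the sentence values are made up):
```python
# `ccaligned_multilingual` stores translations under full codes like "en_XX" / "ak_GH",
# while the script looks them up with the split code ("en" / "ak"):
example = {"translation": {"en_XX": "Hello", "ak_GH": "Akwaaba"}}

source_lang = "en_XX".split("_")[0]          # -> "en", which is what the unmodified script uses
print(example["translation"][source_lang])   # KeyError: 'en'
```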
<|||||>Without making any changes to the code, running _transformers/examples/tensorflow/translation/run_translation.py_ with these arguments also gives the same error.
`--do_train True --model_name_or_path google/mt5-base --tokenizer_name google/mt5-base --output_dir output --dataset_name opus_euconst --dataset_config_name cs-en --source_lang cs --target_lang en`
Changing the model and tokenizer to `google/byt5-base` gives no error.<|||||>Hey @gcervantes8,
Thanks for the answer! @Rocketknight1 - could you maybe give this a look? I think you've recently worked with the TF translation script no? :-)<|||||>@patrickvonplaten On my list, I'll try to investigate this today or tomorrow!<|||||>Hi @gcervantes8, sorry to be annoying, but can I ask you to test this with the [TF translation notebook too](https://github.com/huggingface/notebooks/blob/master/examples/translation-tf.ipynb)? Just swap in the mT5 model and your dataset there, and then if you encounter the same issue, you can save and upload the notebook with your changes and the error outputs. I know it's a bit lazy of me, but it'll make it much easier for me to reproduce and locate the problem!<|||||>Hey @Rocketknight1 thanks for the help!
So I tried running the model with the TF translation notebook, but strangely enough I didn't encounter the issue there.
These are the changes I made to the TF Notebook.
I changed the model.
`model_checkpoint = "google/mt5-base"`
Changed the dataset
`raw_datasets = load_dataset("opus_euconst", "cs-da")`
I modified the source language and the target language specified before the preprocess function
```
source_lang = "cs"
target_lang = "da"
```
I modified the batch size because I was getting out of memory errors
`batch_size = 1`
And I also had to remove the `validation_dataset`.
So this might be specific to the _transformers/examples/tensorflow/translation/run_translation.py_ script<|||||>Hm, that's quite unusual because the scripts should be similar. I'll try to reproduce with the example script here in the next couple of days and let you know what I find.<|||||>I looked into it more and it seems that the `resize_token_embeddings` function in _src/transformers/modeling_tf_utils.py_ expects the `get_output_embeddings` function in _src/transformers/models/t5/modeling_tf_t5.py_ to return an object with the attribute `weight` or `decoder`.
The model works for T5 because in the `get_output_embeddings` T5 function, `self.config.tie_word_embeddings` is True and it doesn't go to the `else` part of the `if` statement which only returns the Tensor.
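For reference, here is a minimal standalone snippet that reproduces the same failing comparison pattern on the TF version from the traceback (this is only an illustration I put together, not the actual library code):
```python
import tensorflow as tf

emb = tf.Variable(tf.zeros((10, 4)))  # stands in for the input embedding weights
old_lm_head_decoder = None            # what the resize logic effectively ends up comparing against

# tf.Variable.__eq__ dispatches to tf.math.equal, which tries to convert None to a tensor:
emb == old_lm_head_decoder            # ValueError: Attempt to convert a value (None) with an unsupported type ...
```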
I'm not really sure what the best way to fix this is. @patrickvonplaten what do you think?
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I think this issue still needs to be addressed, I'm still receiving the error retraining a mT5 Model using TensorFlow.<|||||>It seems this issue is the same (or maybe just similar?) as issue #13839
And it looks like #14329 will probably fix it, so I'll close this issue. |
transformers | 13,820 | closed | Allow dataset to be an optional argument for (Distributed)LengthGroupedSampler | Fix #13797.
The idea is that now `dataset` and `model_input_name` are solely used to figure out `lengths` in `__init__` and serve no other purpose. | 09-30-2021 23:10:47 | 09-30-2021 23:10:47 | |
transformers | 13,819 | closed | [Fix]: Send model output to cpu before numpy cast in token_clf_pipeline | # What does this PR do?
Fixes #13816
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? https://github.com/huggingface/transformers/issues/13816
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). No docs changes
- [X] Did you write any new necessary tests? No additional test needed
## Who can review?
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Library:
- pipelines: @LysandreJik | 09-30-2021 21:38:57 | 09-30-2021 21:38:57 |
@Narsil Do you want me to also add a test (all current tests are passing)?
in `tests/test_pipelines_token_classification.py` like :
```python
@require_torch_gpu
@slow
def test_correct_devices(self):
sentence = "This dummy sentence checks if all the variables can be loaded on gpu and bring back to cpu"
ner = TokenClassificationPipeline(model="distilbert-base-cased", device=0)
```
Have a great day<|||||>Hi @Ierezell .
Unfortunately this fix is not desirable, as we're trying to have all "CPU/GPU" logic be contained inside the `Pipeline` class (in `src/transformers/pipelines/base.py`).
I added another proposed fix (the test is still not run as part of the unit tests since it's still slow); pinging @LysandreJik for advice on if/how we can make GPU tests part of the unit tests.
(I also added you as co-author as you figured out the issue and a solid fix)<|||||>HI @Narsil, indeed it seems better to handle all the logic in the same place. I should've checked before.
I looked at your fix and it's perfect for me.
For the test, unfortunately, even if it's only a really small computation, the test needs GPU... Is it this bad to have it only as CI/CD test and not as a unit test?
Thanks for keeping me as coauthor.
Have a great day! <|||||>Making ALL pipeline tests GPU as unit regular tests is something that was already raised internally.
Currently pipeline tests are relatively fast, so it really is doable. Slow tests are run on every release + once a day if I am not mistaken so it's already something.
<|||||>Closing as resolved by #13856. Let me know if you'd like to reopen this. |
transformers | 13,818 | closed | Weird behavior of BertLMHeadModel and RobertaForCausalLM | Hi there,
Thanks for putting together this awesome repo!
I met two problems when trying to **use encoder-based models (e.g. BERT, RoBERTa) for causal language modeling**, i.e. scoring the conditional likelihood of texts given previous texts. Namely,
- RoBERTa has super large perplexity values, and
- BERT cannot correctly compare the relative perplexity of simple sentences.
Would appreciate it if you could kindly help! Description below:
## Environment info
- `transformers` version: 4.8.2
- Platform: linux
- Python version: 3.7.9
- PyTorch version (GPU?): 1.5.0 (gpu)
- Tensorflow version (GPU?): N/A
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@sgugger @LysandreJik @patrickvonplaten
(I'm tagging the main contributors to the relevant lines from git blame; apologies if not the right people!)
## Information
Models I am using (Bert, XLNet ...): BERT, RoBERTa
The problem arises when using:
* [x] my own modified scripts: (give details below)
Please see the code snippet under "to reproduce".
The tasks I am working on is:
* [x] my own task or dataset: (give details below)
As mentioned before, I'm hoping to use encoder-based models (e.g. BERT, RoBERTa) for causal language modeling, i.e. **scoring the conditional likelihood of texts given previous texts**.
## To reproduce
Steps to reproduce the behavior:
I'm following [this doc](https://huggingface.co/transformers/perplexity.html) and [this issue](https://github.com/huggingface/transformers/issues/473) which were written for GPT2. I'm trying to adapt it for BERT and RoBERTa.
1. Load the pretrained models
2. Feed the prompt and the ending into the model
3. Get the loss, and exponentiate it
Code snippet:
This is a minimal class that I write. You can directly run it by substituting the `cache_dir` variable.
```
import os
import numpy as np
import csv
import math
cuda = "0"
os.environ["CUDA_VISIBLE_DEVICES"] = cuda
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast, GPT2Config
from transformers import BertTokenizer, BertLMHeadModel, BertConfig
from transformers import RobertaTokenizer, RobertaForCausalLM, RobertaConfig
cache_dir=".cache/transformers"
class PPLScorer():
"""A LM scorer for the conditional perplexity of an ending given a prompt."""
def __init__(self, model_name):
"""Initialize model and tokenizer."""
self.model_name = model_name
if "gpt2" in model_name:
with torch.no_grad():
self.LM = GPT2LMHeadModel.from_pretrained(model_name, cache_dir=cache_dir).to("cuda:0")
self.LM.eval()
self.tokenizer = GPT2TokenizerFast.from_pretrained(model_name, cache_dir=cache_dir)
elif "roberta" in model_name:
with torch.no_grad():
config = RobertaConfig.from_pretrained(model_name)
config.is_decoder = True # We'd like to use it as a standalone decoder
self.LM = RobertaForCausalLM.from_pretrained(model_name, config=config, cache_dir=cache_dir).to("cuda:0")
self.LM.eval()
self.tokenizer = RobertaTokenizer.from_pretrained(model_name, cache_dir=cache_dir)
elif "bert" in model_name:
with torch.no_grad():
config = BertConfig.from_pretrained(model_name)
config.is_decoder = True # We'd like to use it as a standalone decoder
self.LM = BertLMHeadModel.from_pretrained(model_name, config=config, cache_dir=cache_dir).to("cuda:0")
self.LM.eval()
self.tokenizer = BertTokenizer.from_pretrained(model_name, cache_dir=cache_dir)
else:
raise ValueError(f"Unknown model name: {model_name}.")
def get_input_ids(self, sentence, is_ending = False):
"""Get input ids of a sentence."""
tokens = self.tokenizer.tokenize(sentence)
# GPT and RoBERTa adds a 'Ġ' character before each non-starting token. Here we manually add it.
if "gpt" in self.model_name or "roberta" in self.model_name:
if is_ending:
tokens[0] = 'Ġ' + tokens[0]
return tokens
def score_conditional(self, prompt, ending):
"""Get the conditional likelihood of the ending given the prompt."""
prompt_tokens = self.get_input_ids(prompt, is_ending=False)
ending_tokens = self.get_input_ids(ending, is_ending=True)
all_tokens = prompt_tokens + ending_tokens
input_ids = torch.tensor([self.tokenizer.convert_tokens_to_ids(all_tokens)]).to("cuda:0")
target_ids = input_ids.clone()
# ignore the loss on the prompt tokens
target_ids[:, :len(prompt_tokens)] = -100
with torch.no_grad():
outputs = self.LM(input_ids, labels=target_ids)
log_likelihood = outputs[0].detach().item()
ppl = math.exp(log_likelihood)
return ppl
if __name__ == "__main__":
# You can modify model_name
model_name = ["bert-base-uncased",
"roberta-base",
"gpt2",
][1]
conditional_LM = PPLScorer(model_name)
# Test 1
prompt = "I love"
ending1 = "you."
ending2 = "is."
score1 = conditional_LM.score_conditional(prompt, ending1)
score2 = conditional_LM.score_conditional(prompt, ending2)
print(score1, score2)
if score1 < score2:
print("Ending 1 is more likely.")
elif score1 > score2:
print("Ending 2 is more likely.")
else:
print("Equally likely.")
```
## Expected behavior
It is expected that models score ending1 as more likely than ending2, therefore score1 should be smaller than score2.
However,
1. When `model_name` is `"bert-base-uncased"`, the output is:
> 801.6779910371988 432.06698212552516
> Ending 2 is more likely.
which means BERT thinks "I love is." is more plausible than "I love you."?
2. When `model_name` is `"roberta-base"`, the output is:
> 7402846.771924554 950510861.61753
> Ending 1 is more likely.
Though it correctly scores ending1 as more likely, the perplexity values are super large.
3. We also tried a couple of other sentences and model variations (e.g. bert/roberta large), but the problems persist. Instead, gpt2-based models have no issue (the comparison is always correct, and the perplexity scores are usually tens to hundreds).
Could you please take a look? Thanks in advance for any help!
Best,
Veronica
| 09-30-2021 21:11:04 | 09-30-2021 21:11:04 | Hey @veronica320 ! I hope everything is going well with you.
From what I have seen, you are using the pre-trained model from Huggingface's Hub for instantiating the LMHead, that's correct?
It might be the case that these pre-trained models were originally trained with a masked language modeling objective in mind, so when applying them to a causal language modeling task without fine-tuning they might be having a hard time to decode complete sequences.
I would suggest to attach the LMHead model as you have been doing, but instead of directly trying to predict/score it, fine-tune (train) for a few iterations on your dataset with a causal language modeling objective-like.
GPT-2, at least the pre-trained `gpt2` model does not have this problem because it was pre-trained according to a causal language modeling objective, which is essentially what you are trying to achieve.
Best regards,
Gustavo.<|||||>Hi @gugarosa, thanks a lot! Do you happen to know if there're any such fine-tuned checkpoints for BERT/RoBERTa that I can use directly?
Cause I was hoping to get **a language model trained on generic English texts** (e.g. BERT/RoBERTa's pretraining data), and directly evaluate them on my data. Given the size of their pretraining data, is it realistic to do it myself?
EDIT: Actually, would you recommend **any other models** (e.g. BertForMaskedLM?) or **evaluation metrics** (other than perplexity) instead? Our end goal is just to "score sentences" with BERT/RoBERTa.
Thanks again for your help!
<|||||>HI all, could you move this discussion on the [forums](https://discuss.huggingface.co/) so it can benefit the whole community? We keep the issues for bugs and feature requests only :-)
Thank you!<|||||>Yes, I made a post [here](https://discuss.huggingface.co/t/using-bert-and-roberta-for-causal-language-modeling/10442?u=veronica320). Would appreciate it if you could give more suggestions! |
transformers | 13,817 | closed | Adds `PreTrainedModel.framework` attribute | # What does this PR do?
This PR introduces an attribute called `framework` in `PreTrainedModel`, `FlaxPreTrainedModel`, and `TFPreTrainedModel`. The purpose of this attribute is to allow a user to know what framework a provided model is in, as that information is not currently very accessible.
I'm a little confused as to whether this is correctly implemented. I was basing it off of the implementation of `base_model_prefix`, which doesn't have a getattr in `FlaxPretrainedModel` and `TFPretrainedModel` despite those not (AFAICT) inheriting from `PreTrainedModel`.
## Who can review?
@patil-suraj @LysandreJik | 09-30-2021 19:16:44 | 09-30-2021 19:16:44 | Great, thank you @StellaAthena! I'm not sure I see the full picture of adding that argument - but I'm definitely not opposed to it if it's helpful for your use case. It's more robust than relying on the class name.
I believe the property implemented in PyTorch (and not implemented in Flax and TensorFlow) isn't voluntary - the former was implemented early (two years ago), and the latter was overlooked.
For this property in particular (`framework`), I believe having it as a simple attribute should be enough for all three frameworks.
Thanks for offering a PR!<|||||>Thanks for the PR!
The `base_model_prefix` serves a different purpose here, it indicates the module name used for the base module in a model with a specific head on top. For example, the `base_model_prefix` for bert is `bert`, which is used by the head models as the module name for the base model
https://github.com/huggingface/transformers/blob/8bbb53e20b7873ba7f63be70d4d798e0c3568bfa/src/transformers/models/bert/modeling_bert.py#L1486-L1492
This attribute is useful when loading a base model weights into a model with a head. And the reason `base_model` property is only added in PT `PreTrainedModel` and not in `FlaxPreTrainedModel` is because in pt it's possible to return a submodule and using this the user can access the base model if he needs (for example to freeze the base).
This is not possible for example in flax, because flax modules are stateless, and returning base_model will return a reference to the module without weights. Hope this makes it clear.
And for this property `framework`, IMO we could simply add it as a getter property and return the framework string, adding it just as a getter will also prevent users from accidentally setting it.<|||||>Thank you both for the explication, it makes understanding why the `transformers` code is the way it is.
> Great, thank you @StellaAthena! I'm not sure I see the full picture of adding that argument - but I'm definitely not opposed to it if it's helpful for your use case. It's more robust than relying on the class name.
When writing code that takes a user-defined `transformers` model as an input there are a lot of weird gotchas. The impetus for this PR was my attempt to generalize [Google's BIG Bench](https://github.com/google/BIG-bench) to work with arbitrary `transformer` models, but I suspect it'll also be useful to [EleutherAI's LM Eval Harness](https://github.com/eleutherai/lm-evaluation-harness) and other similar projects. Unfortunately, there are important properties of models that are impossible to derive from the `config` file. Another example of this is the fact that some tokenizers auto-append <eot> to the end of generations while others do not.
> And for this property `framework`, IMO we could simply add it as a getter property and return the framework string, adding it just as a getter will also prevent users from accidentally setting it.
That's an interesting idea. My thought was that this approach would cause it to be encoded in `config` files, which seems like a good best practice to follow.<|||||>@patil-suraj I have updated the code to follow your suggestion. The failing tests seem to have to do with an indentation error that I cannot work out. I even copied an existing function rather than write my own, in case there was something funky about how my keyboard was registering!
**Edit:** it looks like I was being fooled by a misleading error message! Changing `string` to `str` solved the problem.<|||||>@LysandreJik @patil-suraj I don't think I can do any more. I'm having trouble installing Jax, which may be the blocker? IDK. The below image shows me running `make fixup` and then the verification test that the readout asks me to run.
<img width="1203" alt="Screen Shot 2021-10-07 at 3 21 29 PM" src="https://user-images.githubusercontent.com/15899312/136450365-266bc51f-9456-40a3-92dd-9f4f934c8544.png"><|||||>Hi @StellaAthena no problem. I could take care of this, would it be okay if I push to your branch?<|||||>> Hi @StellaAthena no problem. I could take care of this, would it be okay if I push to your branch?
Absolutely! Thanks<|||||>Thanks for working on the PR @StellaAthena! |
transformers | 13,816 | closed | Device error on TokenClassificationPipeline | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.11.0
- Platform: Linux-5.14.8-arch1-1-x86_64-with-arch
- Python version: 3.7.11
- PyTorch version (GPU?): 1.9.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: False
### Who can help
Library:
- pipelines: @LysandreJik
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [X] the official example scripts: (give details below)
The tasks I am working on is:
* [X] an official GLUE/SQUaD task: (give the name)
## To reproduce
Steps to reproduce the behavior:
1. Create a `pipe = TokenClassificationPipeline(model=DistilBertForTokenClassification.from_pretrained("PATH"))`
2. Pipe some text in `pipe(["My", "text", "tokens"])`
3. Get a `TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.`
## Expected behavior
Be able to run the pipeline
The pipeline should bring data to gpu/cpu or model to gpu/cpu and vice versa.
The traceback
```shell
In .venv/lib/python3.7/site-packages/transformers/pipelines/token_classification.py:209 in _forward
206 │ │ if self.framework == "tf":
207 │ │ │ outputs = self.model(model_inputs.data)[0][0].numpy()
208 │ │ else:
❱ 209 │ │ │ outputs = self.model(**model_inputs)[0][0].numpy() <== HERE
210 │ │ return {
211 │ │ │ "outputs": outputs,
212 │ │ │ "special_tokens_mask": special_tokens_mask,
```
Placing a `.cpu()` would solve the problem
Thanks in advance for any help
Have a wonderful day | 09-30-2021 18:31:36 | 09-30-2021 18:31:36 | Nice catch! Would you like to open a PR with the fix?<|||||>Yes, I can do it for only 6 characters <|||||>Done, See pull request above: https://github.com/huggingface/transformers/pull/13819
I let the CI/CD tests run as there is no new features and I didn't want to run them locally burning my pc down :)
I made it fast but tell me if anything is not okay.
Have a great day<|||||>similar issue later in the file, line 223
```
220 │ │ sentence = model_outputs["sentence"] │
│ 221 │ │ input_ids = model_outputs["input_ids"][0] │
│ 222 │ │ offset_mapping = model_outputs["offset_mapping"][0] if model_o │
│ ❱ 223 │ │ special_tokens_mask = model_outputs["special_tokens_mask"][0].numpy() │
│ 224 │ │ │
│ 225 │ │ scores = np.exp(outputs) / np.exp(outputs).sum(-1, keepdims=Tr │
│ 226 │ │ pre_entities = self.gather_pre_entities(
```<|||||>Thanks, I committed new changes.
@LysandreJik Do you want me to also add a test (all currents tests are passing) ?
in `tests/test_pipelines_token_classification.py` like :
```python
@require_torch_gpu
@slow
def test_correct_devices(self):
sentence = "This dummy sentence checks if all the variables can be loaded on gpu and bring back to cpu"
ner = TokenClassificationPipeline(model="distilbert-base-cased", device=0)
```
<|||||>I believe this was fixed by https://github.com/huggingface/transformers/pull/13856, which also implemented tests. |
transformers | 13,815 | closed | [FLAX] glue training example refactor | # What does this PR do?
refactor glue training example similar to other flax training examples.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@patil-suraj @sgugger | 09-30-2021 16:32:21 | 09-30-2021 16:32:21 | Ping here<|||||>@patil-suraj @patrickvonplaten
rebased with current master<|||||>@patil-suraj - this has been open since a while now, could you please take care of it?<|||||>@patil-suraj refactored according to comments |
transformers | 13,814 | closed | Update Loss calculation in prediction_step | In `predition_step` function, loss is calculated using `_compute_loss` function. If the `compute_loss` function is overridden by a customized loss function, the evaluation will still calculate the original loss. I think it is more interesting to use `compute_loss` function rather than `_compute_loss` function in `prediction_step`.
# What does this PR do?
## Before submitting
- [ x ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Did you make sure to update the documentation with your changes? I think that no need to update documentation.
## Who can review?
Library:
- text generation: @patrickvonplaten
- trainer: @sgugger
- pipelines: @LysandreJik | 09-30-2021 15:27:58 | 09-30-2021 15:27:58 | No this won't work with label smoothing, so we can't accept that change.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,813 | closed | Fix gather for TPU | # What does this PR do?
This PR fixes #13798, which is cause by the gather of trainer loss, a 0d-tensor. The code does not add one dimension to those to concatenate properly, resulting in the error. | 09-30-2021 14:51:01 | 09-30-2021 14:51:01 | Thanks for fixing it - sorry should have tested on TPU as well :-/<|||||>No worries! |
transformers | 13,812 | closed | TypeError: forward() got an unexpected keyword argument 'attention_mask' | ## Environment info
- `transformers` version: 4.10.0
- Platform: Windows-10-10.0.19042-SP0
- Python version: 3.9.6
- PyTorch version (GPU?): 1.9.0+cpu (False)
- Tensorflow version (GPU?): 2.6.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
@patrickvonplaten @patil-suraj
## Information
I am using EncoderDecoderModel (encoder=TransfoXLModel, decoder=TransfoXLLMHeadModel) to train a generative model for text summarization using the 'multi_x_science_sum' huggingface dataset
When the training starts below error is given and training stops
TypeError: forward() got an unexpected keyword argument 'attention_mask'
## To reproduce
```
tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103')
txl2txl = EncoderDecoderModel.from_encoder_decoder_pretrained('transfo-xl-wt103', 'transfo-xl-wt103')
training_args = Seq2SeqTrainingArguments(
predict_with_generate=True,
evaluation_strategy="steps",
per_device_train_batch_size=batch_size, # 4
per_device_eval_batch_size=batch_size, # 4
output_dir="output",
logging_steps=2,
save_steps=10,
eval_steps=4,
num_train_epochs=1
)
trainer = Seq2SeqTrainer(
model=txl2txl,
tokenizer=tokenizer,
args=training_args,
compute_metrics=compute_metrics,
train_dataset=train_data_processed,
eval_dataset=validation_data_processed
)
trainer.train()
TypeError: forward() got an unexpected keyword argument 'attention_mask'
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
C:\Users\DILEEP~1\AppData\Local\Temp/ipykernel_21416/3777690609.py in <module>
7 eval_dataset=validation_data_processed
8 )
----> 9 trainer.train()
~\.conda\envs\msresearch\lib\site-packages\transformers\trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1282 tr_loss += self.training_step(model, inputs)
1283 else:
-> 1284 tr_loss += self.training_step(model, inputs)
1285 self.current_flos += float(self.floating_point_ops(inputs))
1286
~\.conda\envs\msresearch\lib\site-packages\transformers\trainer.py in training_step(self, model, inputs)
1787 loss = self.compute_loss(model, inputs)
1788 else:
-> 1789 loss = self.compute_loss(model, inputs)
1790
1791 if self.args.n_gpu > 1:
~\.conda\envs\msresearch\lib\site-packages\transformers\trainer.py in compute_loss(self, model, inputs, return_outputs)
1819 else:
1820 labels = None
-> 1821 outputs = model(**inputs)
1822 # Save past state if it exists
1823 # TODO: this needs to be fixed and made cleaner later.
~\.conda\envs\msresearch\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1050 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051 return forward_call(*input, **kwargs)
1052 # Do not call functions when jit is used
1053 full_backward_hooks, non_full_backward_hooks = [], []
~\.conda\envs\msresearch\lib\site-packages\transformers\models\encoder_decoder\modeling_encoder_decoder.py in forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict, **kwargs)
423
424 if encoder_outputs is None:
--> 425 encoder_outputs = self.encoder(
426 input_ids=input_ids,
427 attention_mask=attention_mask,
~\.conda\envs\msresearch\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1050 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051 return forward_call(*input, **kwargs)
1052 # Do not call functions when jit is used
1053 full_backward_hooks, non_full_backward_hooks = [], []
TypeError: forward() got an unexpected keyword argument 'attention_mask'
```
As a sidenote, when I do the same task with following setting, the training starts without a problem
tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')
bert2bert= EncoderDecoderModel.from_encoder_decoder_pretrained('bert-base-uncased', 'bert-base-uncased')
Please provide me assistance on how to do the training with TransformerXL to TransformerXL model | 09-30-2021 14:13:06 | 09-30-2021 14:13:06 | We haven't really tested `TransformerXL` with `EncoderDecoderModel` so I'm not sure if it's will work or not since it's a bit of a different model. One major difference is that `TransformerXL` does not accept `attetion_mask` but in `EncoderDecoderModel` it's passed each time. You could try by removing `attetion_mask`, and see if it works.
Also, `TransformerXL` is a decoder-only model, so it might not give the best results as an encoder.
And out of curiosity, is there any reason you want to try TransformerXL to TransformerXL model?<|||||>Thanks @patil-suraj very much for your response.
The reason I am using TransformerXL to TransformerXL model is to enable model to process long sequences as I am trying to address a document summarization problem for a research. And its recurrent nature would be extremely beneficial for my task.
If possible please explain me
1. Why TransformerXL does not accept an attention mask..?
2. How to try by removing the attetion_mask. (if possible)
3. As you are saying TransformerXL would not perform well as an encoder, any suggestion for a model to use as an encoder while having TranformerXL as a decoder.?
4. If you are planning to test 'TransformerXL to TransformerXL' with EncoderDecoderModel in future..?<|||||>Hey @dpitawela sorry to only answer now.
1. I'm not very familiar with TransformerXL, so not sure about the `attention_mask`, @patrickvonplaten do you know why?
2. instead of removing the attention mask etc, I will suggest using a different model which can process long sequence, will explain that below.
3. Yes, IMO `TransformerXL` might not be a good choice for the encoder, since the model is trained as a decoder. Also, it is trained on `WikiText-103` which is not a good enough dataset for pre-training. There two other models which can process long sequences. `Longformer` and `BigBird`.
You could use the longformer as encoder and bert/gpt2 as decoder or you could use the [LED model](https://huggingface.co/docs/transformers/model_doc/led).
And BigBird can be used as both encoder and decoder. So you could use `bigbird2bigbird` if the target sequences are also longer. Or `bigbird` to bert/gpt2.
4. IMO transforxl is not a good choice for such task, so probably not.
Hope this helps :) <|||||>Regarding the `attention_mask`, I'm actually also not sure why this is not used in TransfoXL. I think the reason could be that the model is always used in a causal_mask (LM objective) setting. This would mean that an attention_mask is unnecessary when training the model since inputs are padded to the right which are masked anyways.
Gently pinging @TevenLeScao here - maybe he has a better answer<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,811 | closed | AttributeError when running question answering models for result |
- `transformers` version: 4.11.1
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyTorch version (GPU?): 1.9.0+cu102 (False)
- Tensorflow version (GPU?): 2.6.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik
## Information
I am using bert-multi-cased-finetuned-xquadv1 for question answering I was trying it with the example from the model repo website (https://huggingface.co/mrm8488/bert-multi-cased-finetuned-xquadv1)for the example given in the website it's working fine but when I tried with custom input I fetched from the crawler it's failing with the AttributeError: 'list' object has no attribute 'tolist'
The problem arises when using:
from transformers import pipeline
qa_pipeline = pipeline(
"question-answering",
model="mrm8488/bert-multi-cased-finetuned-xquadv1",
tokenizer="mrm8488/bert-multi-cased-finetuned-xquadv1"
)
qa_pipeline({
'context': "Yes Bank founder Rana Kapoor has approached the Bombay High Court, challenging a special court's order from August this year that had remanded him in police custody for a week in a multi-crore loan fraud case. Kapoor, who is currently lodged in Taloja Jail, is an accused in the loan fraud case and some related matters being probed by the CBI and Enforcement Directorate. A single bench presided over by Justice S K Shinde on Tuesday posted the plea for further hearing on October 14. In his plea filed through advocate Vijay Agarwal, Kapoor claimed that the special court's order permitting the CBI's request for police custody on August 14 was illegal and in breach of the due process of law. Therefore, his police custody and subsequent judicial custody in the case were all illegal. Kapoor has urged the High Court to quash and set aside the special court's order dated August 14. As per his plea, in August this year, the CBI had moved two applications before the special court, one seeking permission to arrest Kapoor, who was already in judicial custody at the time in another case, and the other, seeking his police custody. While the special court refused to grant permission to the CBI to arrest Kapoor, it granted the central agency's plea for his custody. Kapoor, however, said in his plea that before filing an application for his arrest, the CBI had not followed the process of issuing him a notice under Section 41 of the CrPC for appearance before it. He further said that the CBI had not taken prior sanction as mandated under section 17 A of the Prevention of Corruption Act for prosecuting him. The special court, however, had said in its order at the time that as Kapoor was already in judicial custody in another case and was not a free man the procedure mandated under Section 41 of the CrPC need not have been adhered to as far as issuing a prior notice of appearance was concerned. ADVERTISING It had also said that case records showed that the investigating officer had taken an approval from a managing director of Yes Bank before beginning the proceedings against Kapoor and such a permission was a valid sanction. However, Kapoor in his plea said that the above order was bad in law and sought that it be quashed and set aside. The law mandated that if initial action was not in consonance with legal procedures, then all subsequent actions must be held as illegal, he said, urging the High Court to declare the CBI remand and custody and all subsequent proceedings including the further custody as illegal and void ab-initio. In a separate plea before the High Court, Kapoor's daughter Rakhee Kapoor-Tandon has sought exemption from in-person appearance before a special PMLA court. Rakhee has stated that she is a resident of the United Kingdom and is unable to travel to India owing to restrictions imposed due to the COVID-19 pandemic. According to the CBI, in the present case, Kapoor had obtained a gratification or pecuniary advantage of ₹ 307 crore, and thereby caused Yes Bank a loss of ₹ 1,800 crore by extending credit facilities to Avantha Group, when it was not eligible for the same",
'question': "Is this person invovled in fraud?"
})
## To reproduce
Steps to reproduce the behaviour:
1. Just follow the code snippet given above

## Expected behavior
{'answer': 'a loan fraud case', 'end': 250, 'score': 0.552571678161621, 'start': 276}
| 09-30-2021 12:26:39 | 09-30-2021 12:26:39 | I also have this issue as listed above. The code is similar but instead of having specific model and tokenizer, I have the default.
<img width="633" alt="MyCode" src="https://user-images.githubusercontent.com/70382249/135517907-493203f1-983a-4e9a-a5be-2ac0113a9fc3.PNG">
I’m still a beginner to NLP, so please excuse and let me know if I make any mistakes.
**WHEN IT WORKS FINE**:
//span_idx - int value from 0 to len(p_mask) - 1 (inclusive)
I’m not sure if this is the reason but let me explain what I think it is. Whenever the text is short enough where the text doesn’t need to be converted into a batch of inputs, the issue doesn’t occur. The list inside p_mask is of type <class 'numpy.ndarray'>. This makes sense because when iterating through p_mask, we can call p_mask[span_idx].tolist() to convert it to a python list.
As you can see with the image below, when I cut the text to a length of 500, it worked perfectly.
<img width="380" alt="CodeWithoutError" src="https://user-images.githubusercontent.com/70382249/135520827-5b86afd8-68f0-42d9-a1da-bd890d747994.PNG">
<img width="413" alt="InfoWithoutError" src="https://user-images.githubusercontent.com/70382249/135519793-c73812f4-ece2-4303-af2f-55b8acbed4ad.PNG">
**WHEN ISSUE OCCURS:**
//span_idx - int value from 0 to len(p_mask) - 1 (inclusive)
The issue occurs when the length of the text is too long and needs to be converted into batches, my example having a length of 6819. As you can see in the image below, the length of p_mask is now 6. I believe this means there are 6 batches. But now the type of each list inside of p_mask is a python list object, not of type <class 'numpy.ndarray'>. So now when we call p_mask[span_idx].tolist(), it would give this error because it doesn't make sense to convert a python list into a python list.
<img width="363" alt="InfoWhenErrors" src="https://user-images.githubusercontent.com/70382249/135520783-87363fb6-3ef6-4e3a-aa08-5ca81d293189.PNG">
<img width="329" alt="CodeWithError" src="https://user-images.githubusercontent.com/70382249/135522691-de4ae7ce-52b4-476a-8b2e-bb02a9841af7.PNG">
**WHERE p_mask IS CREATED:**
Here is the code for p_mask being created.
<img width="592" alt="p_mask" src="https://user-images.githubusercontent.com/70382249/135522345-0be39098-2f5d-4565-b55d-7119266ce9ea.PNG">
I tried changing putting the list comprehension into a numpy array, but it caused another error which I'm not sure how it's caused and is probably past my knowledge.
I could be completely wrong if this is the cause but I would just like to share my thoughts.<|||||>Thank you for raising this error! This indeed happens when the length is too long, great diagnosis @DeepP2667!
Putting @Narsil in cc<|||||>So I was doing some more experiments and found that this works fine when I have installed transformer version 4.10.2 and 4.10.3

@LysandreJik @Narsil please look into this also
and @DeepP2667 you can use downgrade the transformer version and use it for the time being<|||||>I'm not sure who to ping, @LysandreJik @Narsil
**The file this was in**: question_answering.py
So I was looking to this issue and I would like to share my thoughts on what’s happening. Please excuse me if this is not a correct approach.
**CHANING p_mask**:
First off, the AttributeError was there because the lists inside of p_mask were python lists instead of numpy arrays. It can be fixed by converting them to a numpy array. By converting each list to a numpy array, the call “p_mask[span_idx].tolist()” is able to be changed to a python list. Also, the shape of p_mask becomes the same matrix as version 4.10.2.
<img width="614" alt="p_mask_withNPArray" src="https://user-images.githubusercontent.com/70382249/135661422-3b1bdd7e-53ec-4504-983a-e066865869a3.PNG">
**PADDING ERROR:**
But this caused another error. The shape of one of lists, (the last one), inside p_mask had a different shape then the rest. This is because there was no padding.
I ran the code with the sample text and question. I saw that it went into the preprocessing function. It skipped the if statement “not self.tokenizer.is_fast” and went to the else statement. Here the padding was set to “no padding”, which is why the last list in p_mask had a different shape than the rest. In version 4.10.2, the same function had a padding of 'longest'.
Version 4.10.2 – Has padding = kwargs[“padding”] which is ‘longest’
<img width="490" alt="Version_10 4 2_PaddingLongest" src="https://user-images.githubusercontent.com/70382249/135661444-4b11d163-768a-4b88-bb18-d81f6142e097.PNG">
Latest Version – has padding = padding, where padding is “do_not_pad”
<img width="550" alt="LatestVersion_NoPadding" src="https://user-images.githubusercontent.com/70382249/135661482-ce8b7aed-76b9-40b2-8f15-4ce7cb57bacb.PNG">
**CHANGED PADDING:**
I tested in version 4.10.2 and when I ran my pipeline with the code posted in the earlier comment, in the preprocess function, it went to the else statement. So both the latest and 4.10.2 version go to the else statement with the sample text and question I provided. I saw that the padding in 4.10.2 was set to “longest” while in the latest version was set to “do_not_pad”. So I switched the padding in the latest version to “longest” to match version 4.10.2 and it seemed to fix the problem. I got a score that was the same in both the latest and 4.10.2 when using the same text and question.
Version 4.10.2, original code used.
<img width="556" alt="4 10 2VersionInfo" src="https://user-images.githubusercontent.com/70382249/135661619-6175c00c-c421-4ce0-8b32-ffe9bf1c4c2e.PNG">
Latest Version - I changed the padding in the preprocessing to 'longest' instead of 'do_not_pad'.
<img width="566" alt="LatestVersionInfo" src="https://user-images.githubusercontent.com/70382249/135661767-047f5697-70a4-40f9-b968-e64086de77bf.PNG">
<img width="413" alt="paddingSetToLongestLatestVersion" src="https://user-images.githubusercontent.com/70382249/135661775-875d7dab-4d69-41da-b22b-da6e3794e444.PNG">
Again this may not be the correct approach, which I apologize if it isn't, since it does seem like I am going back to version 4.10.2. I noticed that the start and end in the results were different but I assumed it was just an updated tokenization method in the newer version. Also I haven't tested it yet on different models instead of the default, which I can gladly do if wanted!<|||||>Hi, proposed a PR to fix this.
There seemed to have been slightly specific defaults too in earlier versions (384 max_length, and 128 stride).
Does anyone here know where that comes from ? The default are somehow changed slightly to support the test models we're running (so it should run more configurations/models than before) |
transformers | 13,810 | closed | [Examples] Improve mapping in accelerate examples | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
In distributed training, only the main process should process the datasets. Change all accelerate examples accordingly
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 09-30-2021 12:13:40 | 09-30-2021 12:13:40 | |
transformers | 13,809 | closed | skip gptj slow generate tests | # What does this PR do?
As discussed offline, this PR skips the slow generations tests for GPT-J due to GPU OOM. These should be re-enabled with a bigger GPU on CI or when model parallelism #13726 is implemented and then run the tests on multi-GPU.
These slow tests should be run manually before merging anything related to GPTJ modeling. | 09-30-2021 11:28:32 | 09-30-2021 11:28:32 | |
transformers | 13,808 | closed | Some weights of BeitModel were not initialized from the model checkpoint | ## Environment info
- `transformers` version: 4.11.1
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyTorch version (GPU?): 1.9.0+cu102 (True)
- Tensorflow version (GPU?): 2.6.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
## Information
Model I am using (Bert, XLNet ...): [BEiT](https://huggingface.co/transformers/master/model_doc/beit.html)
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Run the example code with various values for `model_name`:
1. `model_name = 'microsoft/beit-base-patch16-224-pt22k'`
2. `model_name = 'microsoft/beit-base-patch16-224-pt22k-ft22k'`
3. `model_name = 'microsoft/beit-base-patch16-224'`
```python
from transformers import BeitFeatureExtractor, BeitModel
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
model_name = 'microsoft/beit-base-patch16-224-pt22k'
# model_name = 'microsoft/beit-base-patch16-224-pt22k-ft22k'
# model_name = 'microsoft/beit-base-patch16-224'
feature_extractor = BeitFeatureExtractor.from_pretrained(model_name)
model = BeitModel.from_pretrained(model_name)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
Case 1:
```
Some weights of the model checkpoint at microsoft/beit-base-patch16-224-pt22k were not used when initializing BeitModel: ['layernorm.weight', 'lm_head.bias', 'layernorm.bias', 'lm_head.weight']
- This IS expected if you are initializing BeitModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BeitModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BeitModel were not initialized from the model checkpoint at microsoft/beit-base-patch16-224-pt22k and are newly initialized: ['beit.pooler.layernorm.bias', 'beit.pooler.layernorm.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
Case 2:
```
Some weights of the model checkpoint at microsoft/beit-base-patch16-224-pt22k-ft22k were not used when initializing BeitModel: ['classifier.weight', 'classifier.bias']
- This IS expected if you are initializing BeitModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BeitModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
```
Case 3:
```
Some weights of the model checkpoint at microsoft/beit-base-patch16-224 were not used when initializing BeitModel: ['classifier.weight', 'classifier.bias']
- This IS expected if you are initializing BeitModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BeitModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
```
## Expected behavior
Cases 2 and 3 are as expected: the classifier is not used when initializing.
However, case 1:
- does not make use of `['layernorm.weight', 'layernorm.bias']` when initializing,
- does not initialize `['beit.pooler.layernorm.bias', 'beit.pooler.layernorm.weight']`.
I think it might be an oversight.
Quotes of the relevant parts of the log for case 1:
```
Some weights of the model checkpoint at microsoft/beit-base-patch16-224-pt22k
were not used when initializing BeitModel:
['layernorm.weight', 'lm_head.bias', 'layernorm.bias', 'lm_head.weight']
```
```
Some weights of BeitModel were not initialized from the model checkpoint at microsoft/beit-base-patch16-224-pt22k
and are newly initialized:
['beit.pooler.layernorm.bias', 'beit.pooler.layernorm.weight']
``` | 09-30-2021 10:33:34 | 09-30-2021 10:33:34 | Hi,
The 'microsoft/beit-base-patch16-224-pt22k' model is the one that was pre-trained only using a masked image modeling objective. It should be loaded from a `BeitForMaskedImageModeling` model, which adds a `layernorm` + `lm_head` on top of `BeitModel` as can be seen [here](https://github.com/huggingface/transformers/blob/7db2a79b387fd862ffb0af72f7148e6371339c7f/src/transformers/models/beit/modeling_beit.py#L679). It also doesn't make use of the pooler of `BeitModel`, which is why these weights are not initialized.<|||||>Thank you for the answer!
I did not know that the `layernorm` was considered to be a part of the classifier head for this objective.
https://github.com/huggingface/transformers/blob/7db2a79b387fd862ffb0af72f7148e6371339c7f/src/transformers/models/beit/modeling_beit.py#L679-L688
So I thought it was an oversight and that the pre-trained weights would be copied to `self.layernorm`:
https://github.com/huggingface/transformers/blob/7db2a79b387fd862ffb0af72f7148e6371339c7f/src/transformers/models/beit/modeling_beit.py#L560-L571 |
transformers | 13,807 | closed | [Don't merge now] Add cross attention to TFGPT2 | # What does this PR do?
Add cross attention to TFGPT2.
This was previously done in #13222, but we decided to move this to a new PR.
This PR could be merged only after #13222 is merged.
## Who can review? | 09-30-2021 08:43:19 | 09-30-2021 08:43:19 | |
transformers | 13,806 | closed | Loading wav2vec2 pre-trained models | Hi, my aim is to get embeddings from a pre-trained wav2vec2 model using my own data (over 9k samples for each: train, dev, and test; the wavs are from 1 to 4 secs of duration)
I have two main inquiries:
1. I am trying to load a pre-trained model in the following way and I get "CUDA out of memory" error:
```python
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2Model

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_name = "facebook/wav2vec2-large-xlsr-53-german"
feature_extractor = Wav2Vec2Processor.from_pretrained(model_name)
model = Wav2Vec2Model.from_pretrained(model_name)
model.to(device)
```
Am I missing something here? The only way I could load the model is when I don't do `.to(device)` so it loads in the RAM, but in this way, inferencing is way too slow.
2. After loading the model without using GPU, I could get the embeddings in this way:
```
input_values = feature_extractor(train_dataset["speech"], return_tensors="pt", padding=True,
feature_size=1, sampling_rate=16000 )
outs = model(**input_values)
last_hiddens = outs.last_hidden_state # embeddings here correspond to the sequence of last_hidden_states
last_cnn_embs = outs.extract_features # sequence of features of the last conv layer of the model
```
Saving the embeddings to a binary file resulted in over 20 GB for the `last_hiddens` and over 10 GB for the `last_cnn_embs`.
Is that a normal thing? | 09-30-2021 08:18:04 | 09-30-2021 08:18:04 | Hey @jvel07 - you should be able to load the model to GPU. The model is "only" 1GB in size so it should be easily useable on most GPUs. Are you sure there aren't any other processes occupying the GPU when you try loading the model?
Regarding the binary files - I can't really say anything here as I've never saved the embedding outputs to binaries.<|||||>Can you please check the forum: https://discuss.huggingface.co/ for more specific Wav2Vec2 questions? :-) Also these blog posts might be useful:
- https://huggingface.co/blog/fine-tune-wav2vec2-english
- https://huggingface.co/blog/fine-tune-xlsr-wav2vec2
|
transformers | 13,805 | closed | T5ForConditionalGeneration: enabling using past_key_values and labels in training | In T5ForConditionalGeneration, when training, we're not able to use the parameter _past_key_values_ together with _labels_.
And even if we use decoder_input_ids rather than labels, only the last target token is used, if the _past_key_values_ parameter is provided.
These limitations do not exist in other models like BART and BERT.
Fixes:
- Delete the whole if statement, to enable using past_key_values in the T5 decoder when training.
These changes will not affect the `forward()` and `generate()` under normal conditions. And they make it possible to modify _past_key_values_ in the training of T5 the same way I've done in BARTForConditionalGeneration.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes https://github.com/huggingface/transformers/issues/13711
## Who can review?
@patrickvonplaten, @patil-suraj
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 09-30-2021 06:19:26 | 09-30-2021 06:19:26 | @patil-suraj - can you take a final look here? |
transformers | 13,804 | closed | How to use LayoutLM Tensorflow version on Google Colab? | Hi,
Could you please help me implement LayoutLM's tensorflow version in Google Colab?
Previously I have implemented using the following link: https://github.com/microsoft/unilm.git
I would like to understand how to implement the Tensorflow version.
https://huggingface.co/atahmasb/tf-layoutlm-base-uncased
Thank you. | 09-30-2021 05:04:05 | 09-30-2021 05:04:05 | You can check out the documentation in order to use the model: https://huggingface.co/transformers/model_doc/layoutlm.html#tflayoutlmmodel<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,803 | closed | [testing] auto-replay captured streams | This is a sync with the BigScience project https://github.com/bigscience-workshop/Megatron-DeepSpeed/pull/120
tldr; `CaptureStd*` interferes with `pytest` flags like `-sv` that should normally print the std streams to the output. This PR fixes that.
Detailed: We noticed that when `CaptureStd` was used on the whole sub-process run it'd make it very difficult to debug problematic tests, since well it eats the output. So this PR changes the default behavior to replay the captured streams and adds a config arg to disable that.
This is a testing feature so backward compatibility shouldn't be an issue, and it will make things easier for us developers, rather than more difficult. And by default `pytest` captures/hides the std streams anyway.
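For illustration, the new default from a test's point of view might look like this (a sketch only — `replay` is an assumed name for the config arg mentioned above):

```python
# Sketch: captured output is replayed on exit; `replay` is the assumed opt-out kwarg.
from transformers.testing_utils import CaptureStd

with CaptureStd() as cs:  # captured output is replayed once the block exits
    print("useful debug breadcrumb")
assert "breadcrumb" in cs.out

with CaptureStd(replay=False) as cs:  # opt out to keep the old, silent behavior
    print("this stays captured only")
```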
@sgugger, @LysandreJik | 09-29-2021 23:51:22 | 09-29-2021 23:51:22 | |
transformers | 13,802 | closed | [examples/pytorch/image-classification] why does it require `torch>=1.9.0` | Why do we have these particular requirements at:
https://github.com/huggingface/transformers/blob/b90096fe1424e1927e549b73b5db67341a0d27d0/examples/pytorch/image-classification/requirements.txt#L1-L2
I tested it and it works with pt-1.8.2 just fine.
```
RUN_SLOW=1 pyt examples/pytorch/test_examples.py -k test_run_image_classification
```
Let's not set a higher version requirement than is actually needed.
I propose to remove both entries altogether unless they are actually needed and if so tested with the lowest supported version.
It's important in this particular case since HF Deepspeed integration tests are now included in the Deepspeed CI, which runs on pt-1.8, so otherwise I can't test HF image models on the Deepspeed side.
Thank you!
@nateraw, @LysandreJik | 09-29-2021 21:01:55 | 09-29-2021 21:01:55 | Feel free to PR if ya want if it's a blocker for you! Otherwise I'll get to it when I fix the config issue (on mobile can't link to issue number)<|||||>I can wait no problem, @nateraw - I just didn't know which lower version is actually required.
Thank you! |
transformers | 13,801 | closed | Transformer Hosted API Broken | When you change the default example the following error shows up on Hosted inference API

| 09-29-2021 20:39:02 | 09-29-2021 20:39:02 | cc @nreimers <|||||>The issue is known and has to do with some cache file cleaning we use for the widgets: Files older than 30 days are deleted, but this causes some issues with the caching system of sentence-transformers.
@osanseviero has posted a PR in sentence-transformers that will re-download files. The PR is merged in sentence-transformers and will soon pushed to pip. It will then solve this issue. <|||||>[This](https://github.com/UKPLab/sentence-transformers/pull/1116) is the PR for reference. I'll update this issue once all changes are pushed to pip + deployed in the API.<|||||>The newest version of sentence-transformers is pushed to pip: version 2.1.0
It can now be deployed in the API.<|||||>This will be fixed with https://github.com/huggingface/huggingface_hub/pull/383 deployment<|||||>This is fixed now! |
transformers | 13,800 | closed | Bart: check if decoder_inputs_embeds is set | In BartForConditionalGeneration.forward, if labels are provided,
decoder_input_ids are set to the labels shifted to the right.
This is problematic: if decoder_inputs_embeds is also set,
the call to self.model, which eventually gets to BartDecoder.forward,
will raise an error.
The fix is quite simple, similar to what is there already in
BartModel.forward. Mainly, we should not
compute decoder_input_ids if decoder_inputs_embeds is provided.
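For context, a minimal sketch that reproduces the situation described above (the checkpoint is only an example, not part of the PR):

```python
# Passing `labels` together with `decoder_inputs_embeds`: without the guard,
# `labels` re-creates `decoder_input_ids` internally and the decoder then
# refuses to accept both inputs at the same time.
import torch
from transformers import BartTokenizer, BartForConditionalGeneration
from transformers.models.bart.modeling_bart import shift_tokens_right

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

inputs = tokenizer("Hello world", return_tensors="pt")
labels = tokenizer("Bonjour le monde", return_tensors="pt").input_ids
decoder_input_ids = shift_tokens_right(labels, model.config.pad_token_id, model.config.decoder_start_token_id)
decoder_inputs_embeds = model.get_input_embeddings()(decoder_input_ids)

outputs = model(**inputs, decoder_inputs_embeds=decoder_inputs_embeds, labels=labels)
```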
# What does this PR do?
Fixes #12475
## Who can review?
@patrickvonplaten | 09-29-2021 20:19:00 | 09-29-2021 20:19:00 | |
transformers | 13,799 | closed | Bart: check if decoder_inputs_embeds is set | In BartForConditionalGeneration.forward, if labels are provided,
decoder_input_ids are set to the labels shifted to the right.
This is problematic: if decoder_inputs_embeds is also set,
the call to self.model, which eventually gets to BartDecoder.forward,
will raise an error.
The fix is quite simple, similar to what is there already in
BartModel.forward. Mainly, we should not
compute decoder_input_ids if decoder_inputs_embeds is provided.
# What does this PR do?
Fixes #12475
## Who can review?
@patrickvonplaten | 09-29-2021 20:03:54 | 09-29-2021 20:03:54 | |
transformers | 13,798 | closed | transformers seems to have recently been "bricked" | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.12.0.dev0
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyTorch version (GPU?): 1.9.0+cu102 (False)
- Tensorflow version (GPU?): 2.6.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: Yes
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@sgugger
## Information
The example script below was working fine until today. I believe that it was working in version `4.11.0.dev0`. If you can please tell me how to checkout the source for `4.11.0.dev0` from github, I will confirm that it works.
## To reproduce
Steps to reproduce the behavior:
On a TPU colab instance with High-RAM, run:
```
CHECKPOINT=bert-large-uncased
DATASET=rte
EPOCHS=2
BATCH_SIZE=16
LEARNING_RATE=3e-5
python transformers/examples/pytorch/xla_spawn.py --num_cores 8 \
transformers/examples/pytorch/text-classification/run_glue.py \
--model_name_or_path $CHECKPOINT \
--task_name $DATASET \
--seed 10000 \
--output_dir results \
--overwrite_output_dir \
--num_train_epochs $EPOCHS \
--evaluation_strategy no \
--logging_strategy epoch \
--save_strategy epoch \
--per_device_train_batch_size $BATCH_SIZE \
--per_device_eval_batch_size $BATCH_SIZE \
--learning_rate $LEARNING_RATE \
--do_train
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
Gives the error:
```
Exception in device=TPU:7: zero-dimensional tensor (at position 0) cannot be concatenated
Exception in device=TPU:4: zero-dimensional tensor (at position 0) cannot be concatenated
Exception in device=TPU:2: zero-dimensional tensor (at position 0) cannot be concatenated
Exception in device=TPU:1: zero-dimensional tensor (at position 0) cannot be concatenated
Exception in device=TPU:6: zero-dimensional tensor (at position 0) cannot be concatenated
Exception in device=TPU:5: zero-dimensional tensor (at position 0) cannot be concatenated
Exception in device=TPU:3: zero-dimensional tensor (at position 0) cannot be concatenated
Exception in device=TPU:0: zero-dimensional tensor (at position 0) cannot be concatenated
File "/content/transformers/examples/pytorch/text-classification/run_glue.py", line 486, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/content/transformers/examples/pytorch/text-classification/run_glue.py", line 486, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/content/transformers/src/transformers/trainer.py", line 1383, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/content/transformers/src/transformers/trainer.py", line 1383, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/content/transformers/src/transformers/trainer.py", line 1467, in _maybe_log_save_evaluate
tr_loss_scalar = self._nested_gather(tr_loss).mean().item()
File "/content/transformers/src/transformers/trainer.py", line 1467, in _maybe_log_save_evaluate
tr_loss_scalar = self._nested_gather(tr_loss).mean().item()
File "/content/transformers/src/transformers/trainer.py", line 2373, in _nested_gather
tensors = nested_xla_mesh_reduce(tensors, name)
File "/content/transformers/src/transformers/trainer.py", line 2373, in _nested_gather
tensors = nested_xla_mesh_reduce(tensors, name)
File "/content/transformers/src/transformers/trainer_pt_utils.py", line 155, in nested_xla_mesh_reduce
return xm.mesh_reduce(name, tensors, torch.cat)
File "/content/transformers/src/transformers/trainer_pt_utils.py", line 155, in nested_xla_mesh_reduce
return xm.mesh_reduce(name, tensors, torch.cat)
File "/usr/local/lib/python3.7/dist-packages/torch_xla/core/xla_model.py", line 916, in mesh_reduce
return reduce_fn(xldata) if xldata else cpu_data
File "/usr/local/lib/python3.7/dist-packages/torch_xla/core/xla_model.py", line 916, in mesh_reduce
return reduce_fn(xldata) if xldata else cpu_data
uted/xla_multiprocessing.py", line 329, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
File "/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 323, in _start_fn
fn(gindex, *args)
File "/content/transformers/examples/pytorch/text-classification/run_glue.py", line 564, in _mp_fn
main()
File "/content/transformers/examples/pytorch/text-classification/run_glue.py", line 486, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/content/transformers/src/transformers/trainer.py", line 1383, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/content/transformers/src/transformers/trainer.py", line 1467, in _maybe_log_save_evaluate
tr_loss_scalar = self._nested_gather(tr_loss).mean().item()
File "/content/transformers/src/transformers/trainer.py", line 2373, in _nested_gather
tensors = nested_xla_mesh_reduce(tensors, name)
File "/content/transformers/src/transformers/trainer_pt_utils.py", line 155, in nested_xla_mesh_reduce
return xm.mesh_reduce(name, tensors, torch.cat)
File "/usr/local/lib/python3.7/dist-packages/torch_xla/core/xla_model.py", line 916, in mesh_reduce
return reduce_fn(xldata) if xldata else cpu_data
RuntimeError: zero-dimensional tensor (at position 0) cannot be concatenated
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 329, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
File "/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 323, in _start_fn
fn(gindex, *args)
File "/content/transformers/examples/pytorch/text-classification/run_glue.py", line 564, in _mp_fn
main()
File "/content/transformers/examples/pytorch/text-classification/run_glue.py", line 486, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/content/transformers/src/transformers/trainer.py", line 1383, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/content/transformers/src/transformers/trainer.py", line 1467, in _maybe_log_save_evaluate
tr_loss_scalar = self._nested_gather(tr_loss).mean().item()
File "/content/transformers/src/transformers/trainer.py", line 2373, in _nested_gather
tensors = nested_xla_mesh_reduce(tensors, name)
File "/content/transformers/src/transformers/trainer_pt_utils.py", line 155, in nested_xla_mesh_reduce
return xm.mesh_reduce(name, tensors, torch.cat)
File "/usr/local/lib/python3.7/dist-packages/torch_xla/core/xla_model.py", line 916, in mesh_reduce
return reduce_fn(xldata) if xldata else cpu_data
RuntimeError: zero-dimensional tensor (at position 0) cannot be concatenated
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 329, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
File "/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 323, in _start_fn
fn(gindex, *args)
File "/content/transformers/examples/pytorch/text-classification/run_glue.py", line 564, in _mp_fn
main()
File "/content/transformers/examples/pytorch/text-classification/run_glue.py", line 486, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/content/transformers/src/transformers/trainer.py", line 1383, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/content/transformers/src/transformers/trainer.py", line 1467, in _maybe_log_save_evaluate
tr_loss_scalar = self._nested_gather(tr_loss).mean().item()
Traceback (most recent call last):
File "/content/transformers/src/transformers/trainer.py", line 2373, in _nested_gather
tensors = nested_xla_mesh_reduce(tensors, name)
File "/content/transformers/src/transformers/trainer_pt_utils.py", line 155, in nested_xla_mesh_reduce
return xm.mesh_reduce(name, tensors, torch.cat)
File "/usr/local/lib/python3.7/dist-packages/torch_xla/core/xla_model.py", line 916, in mesh_reduce
return reduce_fn(xldata) if xldata else cpu_data
RuntimeError: zero-dimensional tensor (at position 0) cannot be concatenated
File "/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 329, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
File "/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 323, in _start_fn
fn(gindex, *args)
File "/content/transformers/examples/pytorch/text-classification/run_glue.py", line 564, in _mp_fn
main()
File "/content/transformers/examples/pytorch/text-classification/run_glue.py", line 486, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/content/transformers/src/transformers/trainer.py", line 1383, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/content/transformers/src/transformers/trainer.py", line 1467, in _maybe_log_save_evaluate
tr_loss_scalar = self._nested_gather(tr_loss).mean().item()
File "/content/transformers/src/transformers/trainer.py", line 2373, in _nested_gather
tensors = nested_xla_mesh_reduce(tensors, name)
File "/content/transformers/src/transformers/trainer_pt_utils.py", line 155, in nested_xla_mesh_reduce
return xm.mesh_reduce(name, tensors, torch.cat)
File "/usr/local/lib/python3.7/dist-packages/torch_xla/core/xla_model.py", line 916, in mesh_reduce
return reduce_fn(xldata) if xldata else cpu_data
RuntimeError: zero-dimensional tensor (at position 0) cannot be concatenated
File "/content/transformers/examples/pytorch/text-classification/run_glue.py", line 486, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/content/transformers/examples/pytorch/text-classification/run_glue.py", line 486, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/content/transformers/src/transformers/trainer.py", line 1383, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/content/transformers/src/transformers/trainer.py", line 1383, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/content/transformers/src/transformers/trainer.py", line 1467, in _maybe_log_save_evaluate
tr_loss_scalar = self._nested_gather(tr_loss).mean().item()
File "/content/transformers/src/transformers/trainer.py", line 1467, in _maybe_log_save_evaluate
tr_loss_scalar = self._nested_gather(tr_loss).mean().item()
File "/content/transformers/src/transformers/trainer.py", line 2373, in _nested_gather
tensors = nested_xla_mesh_reduce(tensors, name)
File "/content/transformers/src/transformers/trainer.py", line 2373, in _nested_gather
tensors = nested_xla_mesh_reduce(tensors, name)
File "/content/transformers/src/transformers/trainer_pt_utils.py", line 155, in nested_xla_mesh_reduce
return xm.mesh_reduce(name, tensors, torch.cat)
File "/content/transformers/src/transformers/trainer_pt_utils.py", line 155, in nested_xla_mesh_reduce
return xm.mesh_reduce(name, tensors, torch.cat)
File "/usr/local/lib/python3.7/dist-packages/torch_xla/core/xla_model.py", line 916, in mesh_reduce
return reduce_fn(xldata) if xldata else cpu_data
File "/usr/local/lib/python3.7/dist-packages/torch_xla/core/xla_model.py", line 916, in mesh_reduce
return reduce_fn(xldata) if xldata else cpu_data
RuntimeError: zero-dimensional tensor (at position 0) cannot be concatenated
RuntimeError: zero-dimensional tensor (at position 0) cannot be concatenated
50%|█████████████▌ | 20/40 [08:22<08:22, 25.15s/it]
Traceback (most recent call last):
File "transformers/examples/pytorch/xla_spawn.py", line 85, in <module>
main()
File "transformers/examples/pytorch/xla_spawn.py", line 81, in main
xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores)
File "/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 394, in spawn
start_method=start_method)
File "/usr/local/lib/python3.7/dist-packages/torch/multiprocessing/spawn.py", line 188, in start_processes
while not context.join():
File "/usr/local/lib/python3.7/dist-packages/torch/multiprocessing/spawn.py", line 144, in join
exit_code=exitcode
torch.multiprocessing.spawn.ProcessExitedException: process 0 terminated with exit code 17
```
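For reference, the failing `nested_xla_mesh_reduce` call hands the per-device `tr_loss` scalars to `torch.cat`, and `torch.cat` rejects zero-dimensional tensors — a minimal illustration of the underlying error (not the fix):

```python
import torch

scalars = [torch.tensor(0.5), torch.tensor(0.7)]  # shaped like the per-device tr_loss values
try:
    torch.cat(scalars)
except RuntimeError as err:
    print(err)  # zero-dimensional tensor (at position 0) cannot be concatenated

# the concatenation works once each value carries an explicit dimension
print(torch.cat([t.reshape(1) for t in scalars]))
```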
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
No error. | 09-29-2021 19:54:18 | 09-29-2021 19:54:18 | I see where the problem comes from. Will push a fix tonight or tomorrow morning, then we will do a patch release.
In the meantime you should have no error by staying on v4.10<|||||>I run out of memory using transformers v4.X where X > 10 training `led-large-16384-arxiv` with four gradient accumulation steps and a batch size of two like in [this notebook](https://colab.research.google.com/drive/12LjJazBl7Gam0XBPy_y0CTOJZeZ34c2v?usp=sharing) on an A6000 with 48 GB of RAM. I had to bump gradient accumulation steps and batch size down to 1 each to fit the model + batch on the GPU. Wild. Don't really feel like opening an issue, but yeah just thought I'd chirp in here and say that with v4.10.1 I can fit up to 8 samples per batch with four gradient accumulation steps on the A6000.
If you upgrade to `4.11.1` in the colab notebook I shared [it fails](https://gist.github.com/odellus/e1c637860acbd66280429fe1f99d4071), but for `4.10.1` it works just fine. |
transformers | 13,797 | closed | (Distributed)LengthGroupedSampler: allow only providing lengths but not a dataset | # 🚀 Feature request
Currently `LengthGroupedSampler` & `DistributedLengthGroupedSampler` require a `dataset` and optionally `lengths`. However, the sole purpose of `dataset` here is to get the lengths, so these classes should allow providing `lengths` only but not a `dataset`. This is easily doable for `LengthGroupedSampler` by giving `None` as a `dataset` (though not very elegant). However, `DistributedLengthGroupedSampler` actually calls `len(dataset)` even when `lengths` is provided (this can be easily circumvented by calling `len(lengths)` instead if provided). I think it'd be great if both classes make `dataset` an optional parameter but check that at least one of `dataset` and `lengths` is provided.
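A rough sketch of the requested behaviour, reusing the existing `get_length_grouped_indices` helper (the class and argument names here are just illustrative, not a proposed final API):

```python
# Sketch only: a length-grouped sampler that accepts `lengths` without a dataset.
from torch.utils.data import Sampler
from transformers.trainer_pt_utils import get_length_grouped_indices


class LengthsOnlyGroupedSampler(Sampler):
    def __init__(self, batch_size, dataset=None, lengths=None, generator=None):
        if dataset is None and lengths is None:
            raise ValueError("One of `dataset` and `lengths` must be provided.")
        if lengths is None:
            lengths = [len(feature["input_ids"]) for feature in dataset]
        self.batch_size = batch_size
        self.lengths = lengths
        self.generator = generator

    def __len__(self):
        return len(self.lengths)

    def __iter__(self):
        indices = get_length_grouped_indices(self.lengths, self.batch_size, generator=self.generator)
        return iter(indices)
```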
## Motivation
Sometimes people do not use a HuggingFace Dataset instance, but find the functionality provided by `LengthGroupedSampler` convenient to use. So they can manually provide `lengths`.
## Your contribution
I can submit a PR, if needed.
| 09-29-2021 18:06:48 | 09-29-2021 18:06:48 | cc @sgugger <|||||>This would be a nice addition indeed. If you want to work on this, don't hesitate to submit a PR!<|||||>Submitted: #13820 |
transformers | 13,796 | closed | [DPR] Correct init | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR corrects the fast initialization scheme for DPR. DPR overwrote the `init_weights` function instead of implementing a `_init_weights`. This PR corrects this to align DPR more with the other models. | 09-29-2021 17:51:54 | 09-29-2021 17:51:54 | |
transformers | 13,795 | closed | [docs/gpt-j] addd instructions for how minimize CPU RAM usage | # What does this PR do?
This PR updates the GPT-J docs to add tips for low CPU RAM usage and also some estimates about how much GPU RAM
it would take to fine-tune the model. | 09-29-2021 17:51:39 | 09-29-2021 17:51:39 | |
transformers | 13,794 | closed | LengthGroupedSampler: why longest examples go first? | Is there any reason for the longest, but not the shortest, examples to go first? I think in some curriculum learning literature people start with the easiest example. What's design decision behind starting with the longest ones? | 09-29-2021 17:49:13 | 09-29-2021 17:49:13 | Correct me if I'm wrong @sgugger, but I believe this is because this way, if a memory error happens, it's at the absolute beginning - and not mid-training, which is much more annoying.<|||||>Exactly, this is to get the OOM as soon as possible if it should ever happen.<|||||>That makes a lot of sense -- thanks! |
transformers | 13,793 | closed | Add TF notebooks | null | 09-29-2021 15:43:41 | 09-29-2021 15:43:41 | |
transformers | 13,792 | closed | Fix length of IterableDatasetShard and add test | # What does this PR do?
This PR fixes the length computation of an `IterableDatasetShard` added in #13780 and ensures it is correct with a test.
It also adds a missing mapping in the test fetcher util, to make sure modifications in `trainer_pt_utils.py` are properly tested. | 09-29-2021 15:20:11 | 09-29-2021 15:20:11 | |
transformers | 13,791 | closed | Make from_pretrained support parameters defined in the forward pass | # 🚀 Feature request
Be able to make the `from_pretrained` method support parameters defined in the forward pass rather than the init of a certain `nn.Module` of a model.
## Motivation
So I was implementing PerceiverIO in PyTorch, and it works fine for the text, image and optical flow modalities. However, for the [multimodal autoencoding](https://github.com/deepmind/deepmind-research/blob/master/perceiver/colabs/video_autoencoding.ipynb) one, I do have some limitations to make it fully HuggingFace-API friendly. The problem is that for this model, modality-specific parameters are defined whose shape depends on the combination of a large number of preprocessing parameters.
One could come up with (rather complex) formulas to define everything from beforehand based on the several combinations of preprocessing configurations, in order to be able to define the parameters at initialization. This is what I tried at first. However, these formulas become very complex, and hence very error prone. To give an example:
Suppose that one uses the same preprocessing parameters as defined in the [official video autoencoding Colab notebook](https://github.com/deepmind/deepmind-research/blob/master/perceiver/colabs/video_autoencoding.ipynb), then the 3 modalities of a video (images, audio and class label) have the following shape after preprocessing (preprocessing is part of the model!):
* image modality will have shape (1, 50176, 243)
* audio modality will have shape (1, 1920, 401)
* class label modality will have shape (1, 1, 700).
The Perceiver authors then use trainable modality-specific parameters that pad the respective modality to the highest number of channels + a certain minimum padding size (in this case, the class label modality has the highest number of channels, namely 700, and the minimum padding size is 4, hence all modalities should be padded to have 704 channels). Hence, the image padding parameter will be an `nn.Parameter` of shape `(1, 704-243)` = `(1, 461)`, the audio padding parameter will be an `nn.Parameter` of shape `(1, 704-401)` = `(1, 303)` and the label padding parameter will be an `nn.Parameter` of shape `(1, 704-700)` = `(1, 4)`.
However, this is only for _one specific combination of preprocessing_ (for the image modality e.g., Fourier embeddings with a certain `num_bands`, concatenation with original positions, no additional projection, then concatenation with patches, etc.). These settings are all configurable in the `ImagePreprocessor`, which makes the number of preprocessing combinations large, and hence rather difficult to come up with a general formula to calculate the `num_channels` of the image modality and audio modalities that take into account all possible settings. For this given case, the `num_channels` of the image modality is equal to `temporal_downsample * spatial_downsample * spatial_downsample * in_channels + (2 * d * num_bands + d)`, but that's only one very specific setting of preprocessing, there are a lot more.
In frameworks like Haiku and Flax, one can easily define parameters in the forward pass, as one always first needs to forward a dummy input through the model, in order to do shape inference for all parameters. In PyTorch, this is also possible, by defining the parameters using `register_parameter` in the init method, and then instantiating them in the forward pass, based on the size of the inputs:
```
import torch
import torch.nn as nn
class Test(nn.Module):
def __init__(self):
# you need to register the parameter names earlier
super().__init__()
self.register_parameter('weight', None)
def forward(self, input):
if self.weight is None:
self.weight = nn.Parameter(torch.randn(input.size()))
return self.weight @ input
```
And it's exactly this idea that would be very handy to use when implementing `PerceiverForMultimodalAutoencoding` (I've currently implemented it this way). However, this doesn't play well with the `from_pretrained()` method, as this method **assumes** that all parameters of a model can be defined at initialization. It doesn't support first forwarding a dummy input through the model in order to instantiate all parameters, and then load the weights from a state dict. Hence, loading this model with the `.from_pretrained()` method will not instantiate these parameters.
For this model, the dummy inputs would look like this:
```
images = torch.randn((1, 16, 3, 224, 224))
audio = torch.randn((1, 30720, 1))
inputs = dict(image=images, audio=audio, label=torch.zeros((images.shape[0], 700)))
```
If we can pass in a `dummy_inputs` parameter to the `from_pretrained` method, then it could use it to first instantiate all parameters of the model, and then load it with the weights from the hub.
Curious to hear your opinions, it might be a bad idea or it could be that there are other (better) options. If this is not possible, then I guess there's no option but to come up with the formulas.
cc @patrickvonplaten @sgugger @LysandreJik @patil-suraj | 09-29-2021 13:40:57 | 09-29-2021 13:40:57 | I think the suggested approach is a bit too heavy. There is an easy way to load those dynamic weights directly inside the `from_pretrained` method by adapting a bit the code. At a first glance, changing the load function [here](https://github.com/huggingface/transformers/blob/269c3d1400267d966a5b2a962c69c56ac8aca5c3/src/transformers/modeling_utils.py#L1540) by adding the following code at the end seems to work pretty well and doesn't require adding any new argument:
```diff
def load(module: nn.Module, prefix=""):
local_metadata = {} if metadata is None else metadata.get(prefix[:-1], {})
args = (state_dict, prefix, local_metadata, True, [], [], error_msgs)
if is_deepspeed_zero3_enabled():
import deepspeed
# because zero3 puts placeholders in model params, this context
# manager gathers (unpartitions) the params of the current layer, then loads from
# the state dict and then re-partitions them again
with deepspeed.zero.GatheredParameters(list(module.parameters(recurse=False)), modifier_rank=0):
if torch.distributed.get_rank() == 0:
module._load_from_state_dict(*args)
else:
module._load_from_state_dict(*args)
for name, child in module._modules.items():
if child is not None:
load(child, prefix + name + ".")
+ for name, param in module._parameters.items():
+ if param is None and prefix + name in state_dict:
+ setattr(module, name, torch.nn.Parameter(state_dict[prefix+name]))
```
(the added code is in the last three lines). It would need to be battle-tested a bit more but just creates a proper parameters from the weight inside the state dict if a parameter is None somewhere in the model and has a corresponding weight. This is similar to how you initialize them in the forward pass, just from the model state dict.<|||||>I'm not really in favor of adapting PyTorch's way of initializing models to Flax's of Tensorflow's "tracing-the-weights" initialization scheme.
PyTorch's way of initialization scheme has a couple of advantages:
- `.from_pretrained(...)` does not run the forward pass. This is nice for debugging purposes. This might seem like a small thing, but it's a huge change in user-debugging-experience if we would start tracing PyTorch's models IMO.
- PyTorch forces the code to always define input and output dimensions of the Parameters -> this is very nice for readability. One always knows the weight structure from looking into the code (this is a big advantage to TF and Flax IMO)
> as this method assumes that all parameters of a model can be defined at initialization
-> I think it always holds true though that all parameters can be defined at initialization no? Even if it might require a bit of math to compute a dimension, one can always define all the parameter dimensions at initialization no?
I would prefer to not make any change to the `.from_pretrained(...)` method here actually as IMO it would introduce a bit too much "under-the-hood magic"<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,790 | closed | DetrFeatureExtractor fails if do_normalize set to false | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.11.0
- Platform: Ubuntu 20.04
- Python version: Python 3.8.10
- PyTorch version (GPU?): 1.8.1
- Tensorflow version (GPU?): -
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes (but same error on local machine CPU normal script)
### Who can help
@LysandreJik
@NielsRogge
## Information
Model I am using:
Detr with COCOText-v2 dataset for text detection
The problem arises when using:
* [x ] my own modified scripts:
```
def __getitem__(self, index):
image_id = self.image_ids[index]
image_file = self.image_info[index]
annotations = self.ann_info[index]
image_path = os.path.join(self.image_folder_path, image_file[0]['file_name'])
image = Image.open(image_path).convert("RGB")
target = {'image_id': image_id[0], 'annotations': annotations}
encoding = self.feature_extractor(images=image, annotations=target, return_tensors="pt")
pixel_values = encoding["pixel_values"].squeeze() # remove batch dimension
target = encoding["labels"][0] # remove batch dimension
return pixel_values, target
```
## To reproduce
Steps to reproduce the behavior:
1.` feature_extractor = DetrFeatureExtractor(format="coco_detection", do_resize=False, do_normalize=False, image_mean=[0.485, 0.456, 0.406], image_std=[0.229, 0.224, 0.225])`
2. `encoding = self.feature_extractor(images=image, annotations=target, return_tensors="pt")`
3.
transformers lib: DetrFeatureExtractor class line 584
```
if pad_and_return_pixel_mask:
# pad images up to largest image in batch and create pixel_mask
max_size = self._max_by_axis([list(image.shape) for image in images])
```
```
File "/home/felix/Desktop/memoresa-work-ml/Task_CV/Lightning_OCR/1_text_scene_detector/detr/dataloader.py", line 102, in __getitem__
encoding = self.feature_extractor(images=image, annotations=target, return_tensors="pt")
File "/home/felix/anaconda3/envs/work/lib/python3.8/site-packages/transformers/models/detr/feature_extraction_detr.py", line 584, in __call__
max_size = self._max_by_axis([list(image.shape) for image in images])
File "/home/felix/anaconda3/envs/work/lib/python3.8/site-packages/transformers/models/detr/feature_extraction_detr.py", line 584, in <listcomp>
max_size = self._max_by_axis([list(image.shape) for image in images])
File "/home/felix/anaconda3/envs/work/lib/python3.8/site-packages/PIL/Image.py", line 546, in __getattr__
raise AttributeError(name)
AttributeError: shape
```
## Expected behavior
| 09-29-2021 13:08:51 | 09-29-2021 13:08:51 | Thanks for reporting.
If you don't set `normalize` to `True`, then the images remain PIL images, i.e. they are not converted to Numpy arrays. If you then specify `pad_and_return_pixel_mask`, it cannot compute a shape.
I can add a check that converts the images to numpy arrays in any case. What's your use case for not normalizing the images? Do you already do this yourself?<|||||>@NielsRogge
this would be great !
I'm currently experimenting with DETR for text detection and want to check what I get back from my dataloader; in the normalize step the annotations (targets) are also normalized, which makes the reconstruction harder.
So, for example, I'd like to visualize/reconstruct some examples with their bounding boxes for test purposes before passing them into the model.
I have also opened a thread about this use case; if you have a bit of time, it would be nice to get your opinion :)
[Detr for text detection](https://discuss.huggingface.co/t/detection-transformer-detr-for-text-detection-in-documents/10396)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,789 | closed | is the prediction_logits.size() is correct? | hi,
I'm using `BertForPreTraining.from_pretrained()` to load a local ckpt file, and I want to get a new sentence embedding. The expected size of the output is [1,9,768], but the output is [1,9,30522], and 30522 is the size of vocab.txt. I also used "bert-base-uncased" and the result is the same. Here is my code; how can I get the expected output?
```python
import torch
from transformers import BertTokenizer, BertForPreTraining, BertConfig

model_path = 'D:/Project/bert/pre_trained/uncased_L-12_H-768_A-12'
tokenizer = BertTokenizer.from_pretrained("{}/vocab.txt".format(model_path))
config = BertConfig.from_pretrained("{}/bert_config.json".format(model_path))
model = BertForPreTraining.from_pretrained("{}/bert_model.ckpt.index".format(model_path), from_tf=True, config=config)
input_text = "Here is some text to encode"
input_ids = tokenizer.encode(input_text, add_special_tokens=True)
print(input_ids)  # [101, 2182, 2003, 2070, 3793, 2000, 4372, 16044, 102]
input_ids = torch.tensor([input_ids])
last_hidden_states = model(input_ids).prediction_logits
print(last_hidden_states.size())  # torch.Size([1, 9, 30522])
```
```python
import torch
from transformers import BertModel, BertTokenizer, BertForPreTraining, BertConfig

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForPreTraining.from_pretrained("bert-base-uncased")
input_text = "Here is some text to encode"
input_ids = tokenizer.encode(input_text, add_special_tokens=True)
print(input_ids)  # [101, 2182, 2003, 2070, 3793, 2000, 4372, 16044, 102]
input_ids = torch.tensor([input_ids])
last_hidden_states = model(input_ids).prediction_logits
print(last_hidden_states.size())  # torch.Size([1, 9, 30522])
```
| 09-29-2021 11:06:04 | 09-29-2021 11:06:04 | To get a sentence embedding, you don't need to use `BertForPreTraining`, as this model adds the heads for pre-training (which are next sentence prediction and masked language modeling) on top. Hence, the `prediction_logits` come from the language modeling head.
You can just use `BertModel`, which is the BERT model without any head on top. To get an embedding vector, you can for example take the final hidden state of the special [CLS] token that is added at the beginning of the sentence, as follows:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
input_text = "Here is some text to encode"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
# forward pass
outputs = model(input_ids)
# get sentence vector (by taking the last hidden state of the [CLS] token)
last_hidden_states = outputs.last_hidden_state # shape (batch_size, seq_len, hidden_size)
sentence_vector = last_hidden_states[:,0,:] # shape (batch_size, hidden_size)
```
<|||||>thanks for your answer.
At first, I used `BertModel`, but when I used the `ckpt` file, I got this error:
> AttributeError: 'BertModel' object has no attribute 'bias'
so I searched the related issues and changed to using `BertForPreTraining`.
Now I understand the difference between `BertModel` and `BertForPreTraining`. I will use `BertModel` to get a sentence embedding. |
transformers | 13,788 | closed | Add BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese | # What does this PR do?
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Please help have a look: @patrickvonplaten, @patil-suraj Thanks.
| 09-29-2021 10:56:01 | 09-29-2021 10:56:01 | Thanks, @patil-suraj and @sgugger
I fixed the pull request based on you guys' comments.
The same comment I get from both of you is regarding the vocab_file. Here is a summary:
I did not train a sentencepiece for Vietnamese.
`bartpho-syllable` employs the existing pre-trained "sentencepiece" model from XLMRoBERTaTokenizer, and this pre-trained "sentencepiece" model is referred to as a `vocab_file` of 250K types.
`reduced_vocab_file` is a vocab containing 40K Vietnamese-specificized types extracted from the XLMRoBERTaTokenizer vocab of 250K types.
Usecase of BartphoTokenizer: Other languages can thus simply reuse BartphoTokenizer with their own `reduced_vocab_file`. The goal here is to reduce model sizes of existing pre-trained XLM-RoBERTa/mBART models when applying to a smaller set of languages instead of the whole 50/100 languages.<|||||>@sgugger , @LysandreJik, @patil-suraj , @SaulLu and @patrickvonplaten
Please could you have a look and provide your feedback for my recent changes? Thanks.<|||||>Thanks @LysandreJik
My pull request suddenly failed the check `run_tests_torch_and_flax`:
`FAILED tests/test_modeling_flax_clip.py::FlaxCLIPModelTest::test_equivalence_flax_to_pt`
`FAILED tests/test_modeling_flax_clip.py::FlaxCLIPModelTest::test_equivalence_pt_to_flax`
They are out of my control, not relating to BartphoTokenizer.
<|||||>Yes this is a problem unrelated to this PR so you can ignore those failures. They should be fixed tomorrow :-)<|||||>@sgugger I made a revision following your last comment. Thanks.
FYI, two failed tests are not related to BartphoTokenizer.<|||||>Thanks again for your contribution! |
transformers | 13,787 | closed | problem when loading local model | I have a problem when I use
```python
from transformers import BertTokenizer

PRE_TRAINED_MODEL_NAME = './chinese-roberta-wwm-ext'
tokenizer = BertTokenizer.from_pretrained(PRE_TRAINED_MODEL_NAME)
```
and put the model in the "./chinese-roberta-wwm-ext" directory, it still downloads a new model into "/.cache" instead of using my local model. And every time I run it I have to wait for a long time.
info shows
> 09/29/2021 18:36:00 - INFO - filelock - Lock 140057189920976 acquired on /home/maoyuzhe/.cache/huggingface/transformers/92a56e79ec6564fd501527ed88ca336637eb4bfeb28d10580c3bbdfb7889a032.accd894ff58c6ff7bd4f3072890776c14f4ea34fcc08e79cd88c2d157756dceb.lock
Downloading: 100%|████████████████████████████| 107k/107k [00:00<00:00, 223kB/s]
09/29/2021 18:44:45 - INFO - filelock - Lock 140057189920976 released on /home/maoyuzhe/.cache/huggingface/transformers/92a56e79ec6564fd501527ed88ca336637eb4bfeb28d10580c3bbdfb7889a032.accd894ff58c6ff7bd4f3072890776c14f4ea34fcc08e79cd88c2d157756dceb.lock
09/29/2021 18:45:26 - INFO - filelock - Lock 140057049130192 acquired on /home/maoyuzhe/.cache/huggingface/transformers/87c7eedd995b4bae2c34df3baf2cbd5df5496bed675126427849c72e590f5574.5cc6e825eb228a7a5cfd27cb4d7151e97a79fb962b31aaf1813aa102e746584b.lock
sometimes it shows
> Traceback (most recent call last):
File "/tmp/pycharm_project_711/main.py", line 230, in <module>
tokenizer = BertTokenizer.from_pretrained(PRE_TRAINED_MODEL_NAME)
File "/home/maoyuzhe/.conda/envs/BDCI/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 1655, in from_pretrained
local_files_only=local_files_only,
File "/home/maoyuzhe/.conda/envs/BDCI/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 3460, in get_fast_tokenizer_file
path_or_repo, revision=revision, use_auth_token=use_auth_token, local_files_only=local_files_only
File "/home/maoyuzhe/.conda/envs/BDCI/lib/python3.7/site-packages/transformers/file_utils.py", line 1730, in get_list_of_files
path_or_repo, revision=revision, token=token
File "/home/maoyuzhe/.conda/envs/BDCI/lib/python3.7/site-packages/huggingface_hub/hf_api.py", line 503, in model_info
r.raise_for_status()
File "/home/maoyuzhe/.conda/envs/BDCI/lib/python3.7/site-packages/requests/models.py", line 940, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/api/models/chinese-roberta-wwm-ext
Process finished with exit code 1
emo :( | 09-29-2021 10:43:55 | 09-29-2021 10:43:55 | Hello! Could you show the full code that yields this error? Thanks!<|||||>> Hello! Could you show the full code that yields this error? Thanks!
https://github.com/YuzheMao/ChallengeHub-Baselines/blob/main/aiqiyi-baseline.ipynb
here is the code, error occurs in 4.1<|||||>I don't see the error in the notebook!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,786 | closed | GPT2 (117M) results mismatch on PTB, enwik8 and text8 metrics | Model used: GPT2 117M parameter model
Platform: Google colab with GPU
I am trying to reproduce the results of GPT2 (117M) on various datasets but I am getting different values for several metrics. I have used this script from the [Huggingface platform](https://huggingface.co/transformers/perplexity.html) (condensed below).
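For reference, the computation on that page boils down to the following sliding-window evaluation (condensed; the file name and stride value here are placeholders):

```python
# Condensed from the linked perplexity guide; "ptb.test.txt" and `stride` are placeholders.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

device = "cuda" if torch.cuda.is_available() else "cpu"
model = GPT2LMHeadModel.from_pretrained("gpt2").to(device)
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

text = open("ptb.test.txt").read()
encodings = tokenizer(text, return_tensors="pt")

max_length = model.config.n_positions  # 1024 for the 117M model
stride = 1024
nlls = []
for i in range(0, encodings.input_ids.size(1), stride):
    begin_loc = max(i + stride - max_length, 0)
    end_loc = min(i + stride, encodings.input_ids.size(1))
    trg_len = end_loc - i  # may be smaller than stride on the last window
    input_ids = encodings.input_ids[:, begin_loc:end_loc].to(device)
    target_ids = input_ids.clone()
    target_ids[:, :-trg_len] = -100  # only score the tokens new to this window
    with torch.no_grad():
        outputs = model(input_ids, labels=target_ids)
        nlls.append(outputs.loss * trg_len)

ppl = torch.exp(torch.stack(nlls).sum() / end_loc)  # token-level perplexity
```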
1. On PTB, I have used one from the source: https://github.com/wojzaremba/lstm/tree/master/data
Hopefully this is the original dataset used in GPT2 paper. **Reported value in GPT2 paper: 65.85 PPL**. I am getting the following results:
52.53 PPL (on test set using token count)
79.65 PPL (on test set using word count)
60.08 PPL (on validation set using token count)
**Can anyone clarify whether token or word count is used while calculating PPL on PTB dataset?**
2. On enwik8 dataset, **reported metric BPB value is 1.16** while I am getting a value of 2.23 using stride count of 1024. Changing stride to 512 is not having much of an effect.
3. On text8 dataset, **reported metric BPC value is 1.17** while I am getting a value of 2.29 using stride count of 1024. Again, changing stride to 512 is not improving the result.
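For reference, BPB/BPC are normalized by the number of bytes/characters of the raw text (not by BPE tokens), so the conversion from the summed negative log-likelihood (which is in nats) is roughly the following (all values are placeholders):

```python
import math

total_nll = 1.23e6   # placeholder: summed NLL in nats over the evaluated split
num_bytes = 5.0e6    # placeholder: size of the evaluated split in bytes (use characters for BPC)
bits_per_byte = total_nll / (num_bytes * math.log(2))
print(bits_per_byte)
```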
**Is there some pre-processing that needs to be done which is not mentioned in the paper for all of the above?** | 09-29-2021 10:16:06 | 09-29-2021 10:16:06 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,785 | closed | Enable readme link synchronization | # What does this PR do?
This PR enables README link synchronization between `README.md` and each localized README.
## Who can review?
@sgugger | 09-29-2021 07:34:41 | 09-29-2021 07:34:41 | |
transformers | 13,784 | open | Feature: Tail Free Sampling | # 🚀 Feature request
I would like Hugging Face to implement Tail Free Sampling, or TFS for short, in the official repo. The original paper can be found here: https://trentbrick.github.io/Tail-Free-Sampling/#tail-free-sampling-algorithm
Implementation of this code can be found here: https://github.com/finetuneanon/transformers/blob/gpt-neo-localattention3-rp-b/src/transformers/generation_logits_process.py#L243-L284
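For reference, a rough sketch of how this could look as a `LogitsWarper` in this repo, adapted from the implementation linked above (untested against the main branch, so treat it as a starting point rather than a finished PR):
```python
import torch
from transformers import LogitsWarper


class TailFreeSamplingLogitsWarper(LogitsWarper):
    """Drops the low-probability 'tail' based on the curvature (second derivative)
    of the sorted token probabilities, as described in the TFS write-up."""

    def __init__(self, tfs: float, filter_value: float = -float("inf"), min_tokens_to_keep: int = 1):
        self.tfs = tfs
        self.filter_value = filter_value
        self.min_tokens_to_keep = min_tokens_to_keep

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        sorted_logits, sorted_indices = torch.sort(scores, descending=True)
        probs = sorted_logits.softmax(dim=-1)

        # Absolute second finite difference of the sorted probabilities, normalized to sum to 1.
        d2 = probs.diff().diff().abs()
        normalized_d2 = d2 / d2.sum(dim=-1, keepdim=True)
        cumulative = normalized_d2.cumsum(dim=-1)

        # Mark everything after the point where the cumulative curvature mass exceeds `tfs`.
        sorted_indices_to_remove = cumulative > self.tfs
        # diff().diff() shortens the tensor by 2: always keep the first token and drop the very last one.
        sorted_indices_to_remove = torch.cat(
            [
                torch.zeros_like(sorted_indices_to_remove[..., :1]),
                sorted_indices_to_remove,
                torch.ones_like(sorted_indices_to_remove[..., :1]),
            ],
            dim=-1,
        )
        sorted_indices_to_remove[..., : self.min_tokens_to_keep] = False

        indices_to_remove = sorted_indices_to_remove.scatter(1, sorted_indices, sorted_indices_to_remove)
        return scores.masked_fill(indices_to_remove, self.filter_value)
```
It would then be wired into `generate()` the same way the existing `top_k`/`top_p` warpers are, e.g. via a `tfs` generation argument.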
## Motivation
At the moment KoboldAI uses Finetuneanon's fork of Transformers for creating stories. However, that fork has several changes that make it impossible for me to use (e.g. I use an aarch64 machine with an integrated Nvidia chip, which is only supported by Hugging Face's main branch). There are calls from the KoboldAI devs to migrate because of the integrated GPT-J support in Hugging Face, but some features of Finetuneanon's fork are not found in the main branch. Tail Free Sampling is one such feature.
## Your contribution
I would like to file a PR with the code, however, I do lack the knowledge on how to implement the required tests and checks. An example is available on Finetuneanon's branch but it will require cherry-picking the commit to the main branch. | 09-29-2021 07:26:20 | 09-29-2021 07:26:20 | cc @patrickvonplaten <|||||>@mrseeker - would you like to open a PR for it :-) ?<|||||>Bumping to prevent getting closed, PR is WIP. |
transformers | 13,783 | closed | [WIP] Make import simple | # What does this PR do?
Fix #13390.
May be related to https://github.com/python/mypy/issues/7045.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 09-29-2021 06:38:00 | 09-29-2021 06:38:00 | |
transformers | 13,782 | closed | NotebookProgressBar can't display in Pycharm jupyter | this is full code
```python
from transformers.utils.notebook import NotebookProgressBar
import time
pbar = NotebookProgressBar(100)
for val in range(100):
pbar.update(val)
time.sleep(0.07)
pbar.update(100)
```
When I use the PyCharm Jupyter console, the progress bar displays like this and does not move:

But when I use VS Code, it runs correctly:

Can you give me some advise? Thanks | 09-29-2021 02:51:14 | 09-29-2021 02:51:14 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Sorry I missed this! cc @sgugger if he has any idea of what might be going on<|||||>Not sure what the problem is with PyCharm notebooks. I haven't tried them before, so you should stick with the normal progress bars there.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,781 | closed | DDP BERT-Base on SQuaD2.0 | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.10.2
- Platform: Linux-5.4.0-1056-aws-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.9.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes. I used cuda
- Using distributed or parallel set-up in script?: Data parallel
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models: Distilbert-base
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library: DistributedSampler, DistilBertForQuestionAnswering, DistilBertTokenizerFast
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
```python
from argparse import ArgumentParser
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, Dataset
from torch.utils.data.distributed import DistributedSampler
from transformers import DistilBertForQuestionAnswering
from transformers import DistilBertTokenizerFast
import json
from pathlib import Path
SEED = 42
BATCH_SIZE = 8
NUM_EPOCHS = 3
def read_squad(path):
path = Path(path)
with open(path, 'rb') as f:
squad_dict = json.load(f)
contexts = []
questions = []
answers = []
for group in squad_dict['data']:
for passage in group['paragraphs']:
context = passage['context']
for qa in passage['qas']:
question = qa['question']
for answer in qa['answers']:
contexts.append(context)
questions.append(question)
answers.append(answer)
return contexts, questions, answers
train_contexts, train_questions, train_answers = read_squad('../squad/train-v2.0.json')
val_contexts, val_questions, val_answers = read_squad('../squad/dev-v2.0.json')
def add_end_idx(answers, contexts):
for answer, context in zip(answers, contexts):
gold_text = answer['text']
start_idx = answer['answer_start']
end_idx = start_idx + len(gold_text)
# sometimes squad answers are off by a character or two – fix this
if context[start_idx:end_idx] == gold_text:
answer['answer_end'] = end_idx
elif context[start_idx-1:end_idx-1] == gold_text:
answer['answer_start'] = start_idx - 1
answer['answer_end'] = end_idx - 1 # When the gold label is off by one character
elif context[start_idx-2:end_idx-2] == gold_text:
answer['answer_start'] = start_idx - 2
answer['answer_end'] = end_idx - 2 # When the gold label is off by two characters
add_end_idx(train_answers, train_contexts)
add_end_idx(val_answers, val_contexts)
tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased')
train_encodings = tokenizer(train_contexts, train_questions, truncation=True, padding=True)
val_encodings = tokenizer(val_contexts, val_questions, truncation=True, padding=True)
def add_token_positions(encodings, answers):
start_positions = []
end_positions = []
for i in range(len(answers)):
start_positions.append(encodings.char_to_token(i, answers[i]['answer_start']))
end_positions.append(encodings.char_to_token(i, answers[i]['answer_end'] - 1))
# if start position is None, the answer passage has been truncated
if start_positions[-1] is None:
start_positions[-1] = tokenizer.model_max_length
if end_positions[-1] is None:
end_positions[-1] = tokenizer.model_max_length
encodings.update({'start_positions': start_positions, 'end_positions': end_positions})
add_token_positions(train_encodings, train_answers)
add_token_positions(val_encodings, val_answers)
class SquadDataset(torch.utils.data.Dataset):
def __init__(self, encodings):
self.encodings = encodings
def __getitem__(self, idx):
return {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
def __len__(self):
return len(self.encodings.input_ids)
train_dataset = SquadDataset(train_encodings)
val_dataset = SquadDataset(val_encodings)
def main():
parser = ArgumentParser('DDP usage example')
parser.add_argument('--local_rank', type=int, default=-1, metavar='N', help='Local process rank.') # you need this argument in your scripts for DDP to work
args = parser.parse_args()
# keep track of whether the current process is the `master` process
args.is_master = args.local_rank == 0
args.device = torch.cuda.device(args.local_rank)
torch.cuda.set_device(args.local_rank)
# set the seed for all GPUs (also make sure to set the seed for random, numpy, etc.)
torch.cuda.manual_seed_all(SEED)
# initialize your model (BERT in this example)
model = DistilBertForQuestionAnswering.from_pretrained("distilbert-base-uncased")
# model = BertForMaskedLM.from_pretrained('bert-base-uncased')
model = torch.nn.DataParallel(model)
model = model.to(args.device)
model = DDP(
model,
device_ids=[args.local_rank],
output_device=args.local_rank
)
train_dataset = SquadDataset(train_encodings)
train_sampler = DistributedSampler(train_dataset)
train_dataloader = DataLoader(train_dataset, sampler=train_sampler, batch_size=batch_size)
val_dataset = SquadDataset(val_encodings)
val_sampler = DistributedSampler(val_dataset)
val_dataloader = DataLoader(val_dataset, sampler=train_sampler, batch_size=batch_size)
for epoch in range(NUM_EPOCHS):
model.train()
# let all processes sync up before starting with a new epoch of training
dist.barrier()
for step, batch in enumerate(train_dataloader):
batch = tuple(t.to(args.device) for t in batch)
outputs = model(*batch)
loss = outputs[0]
if __name__ == '__main__':
main()
```
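For comparison, the canonical DDP bootstrap that most tutorials build on looks roughly like this. This is a minimal sketch only, not a verified fix for the error above:
```python
# minimal DDP skeleton, launched with: python -m torch.distributed.launch --nproc_per_node=N train.py
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def setup_ddp(local_rank):
    dist.init_process_group(backend="nccl")   # must happen before constructing DDP
    torch.cuda.set_device(local_rank)
    return torch.device("cuda", local_rank)


def wrap_model(model, local_rank):
    device = torch.device("cuda", local_rank)
    model = model.to(device)                  # move weights first, then wrap once (DDP only, no extra DataParallel)
    return DDP(model, device_ids=[local_rank], output_device=local_rank)


# batches from a dict-style dataset are dicts, so they are usually moved with:
# batch = {k: v.to(device) for k, v in batch.items()}
```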
## Information
Model I am using (Bert, XLNet ...): DistilBert
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run the above code on cuda-available devices and get the bug below.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
<img width="896" alt="Screen Shot 2021-09-28 at 16 15 13" src="https://user-images.githubusercontent.com/37657480/135178052-074ce67c-98e6-4eb8-80c7-7eb494159ed3.png">
| 09-28-2021 23:15:29 | 09-28-2021 23:15:29 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,780 | closed | Implement len in IterableDatasetShard | # What does this PR do?
Currently, code using a sized iterable dataset in distributed mode will fail because:
- the Trainer will detect the dataset is sized and try to get the length of the DataLoader
- the DataLoader will call the length of its dataset which is an IterableDatasetShard
- the IterableDatasetShard has no length
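A minimal sketch of the kind of `__len__` involved (my own illustration; it assumes the shard exposes `dataset`, `batch_size`, `num_processes` and `drop_last` attributes):
```python
import math


def __len__(self):
    # Only meaningful when the wrapped dataset is itself sized.
    real_batch_size = self.batch_size * self.num_processes
    if self.drop_last:
        num_batches = len(self.dataset) // real_batch_size
    else:
        num_batches = math.ceil(len(self.dataset) / real_batch_size)
    # Each process ends up seeing `num_batches` batches of `batch_size` samples.
    return num_batches * self.batch_size
```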
This PR adds the `__len__` method to `IterableDatasetShard` to solve that issue. The one thing that is mildly annoying is that it will make all instances of `IterableDatasetShard` be recognized as `collections.abc.Sized` instances since they do implement the len method, even if that method will return an error. But that check is only used on the dataset passed along to the `Trainer`, not that wrapper, so I think we should be fine. | 09-28-2021 21:59:28 | 09-28-2021 21:59:28 | |
transformers | 13,779 | closed | ByT5: problem with tokenizer.decode() | ## Environment info
- transformers version: 4.11.0
- Platform: Google Colab
- Python version: 3.7.12
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: NO
### Who can help
ByT5: @patrickvonplaten
Documentation: @sgugger
## Information
Model I am using: `google/byt5-small` (the problem is the same with `google/byt5-base`).
## To reproduce
See this [notebook ](https://colab.research.google.com/drive/1ZS_zPF_ShLU0SKVLt5zYNHqoPOenBkEN?usp=sharing&authuser=1#scrollTo=PiKc6U3atGoh) that shows the problem when using `google/byt5-small` from the model hub of Hugging Face and the `tokenizer.decode()` method, when the `transformers `version is 4.11.0.
The problem does not appear with the `transformers `version 4.9.2 for example.
```
from transformers import T5ForConditionalGeneration, ByT5Tokenizer
model_checkpoint = 'google/byt5-small'
model = T5ForConditionalGeneration.from_pretrained(model_checkpoint)
tokenizer = ByT5Tokenizer.from_pretrained(model_checkpoint)
texts = ["Life is like a box of chocolates.", "Today is Monday."]
for text in texts:
inputs = tokenizer(text, padding="longest", return_tensors="pt")
output = model.generate(**inputs)
print(tokenizer.decode(output[0], skip_special_tokens=True, clean_up_tokenization_spaces=True))
```
Error:
```
---------------------------------------------------------------------------
UnicodeDecodeError Traceback (most recent call last)
<ipython-input-8-6f8451a23561> in <module>()
6 output[0],
7 skip_special_tokens=True,
----> 8 clean_up_tokenization_spaces=True
9 )
10 )
2 frames
/usr/local/lib/python3.7/dist-packages/transformers/models/byt5/tokenization_byt5.py in convert_tokens_to_string(self, tokens)
238 tok_string = bytes([ord(token)])
239 bstring += tok_string
--> 240 string = bstring.decode("utf-8")
241 return string
242
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
```
## Expected behavior
2 strings as outputs of the ByT5.
| 09-28-2021 18:31:31 | 09-28-2021 18:31:31 | Hey :)
For faster debugging, this can be broken down to:
```python
from transformers import T5ForConditionalGeneration, ByT5Tokenizer
model_checkpoint = 'google/byt5-small'
tokenizer = ByT5Tokenizer.from_pretrained(model_checkpoint)
print(tokenizer.decode([258], skip_special_tokens=True, clean_up_tokenization_spaces=True))
```
The "official" ByT5 tokenizer is used from `seqio` and their implementation would return:
```python
from seqio import ByteVocabulary
tokenizer = ByteVocabulary()
tokenizer._decode([258])
# Returns:
# ''
# Better test:
tokenizer._decode([258]) == ''
# Return True
```<|||||>But as seen in the `ByteVocabulary()` implementation:
https://github.com/google/seqio/blob/main/seqio/vocabularies.py#L399
they use `errors="ignore"` as attribute of the `.decode()` function. Maybe this kind of error handling should also be applied here:
https://github.com/huggingface/transformers/blob/83d3dc0f6f8ae03e01aa5acacf88e79b2c1ecd06/src/transformers/models/byt5/tokenization_byt5.py#L240
:thinking: <|||||>Update: this seems to be intended:
https://github.com/huggingface/transformers/commit/5c7789d4167064f7464b8801c7488a9a2878480a
Pinging @Narsil :)<|||||>Hello @stefan-it.
Thank you very much for taking the time to verify the problem.
Now I understand that `string = bstring.decode("utf-8", errors="ignore")` has been replaced by `string = bstring.decode("utf-8")` by @Narsil (see [5c7789d](https://github.com/huggingface/transformers/commit/5c7789d4167064f7464b8801c7488a9a2878480a)) but because of this, it is not possible anymore:
- to use for example `model.generate()` with a `ByT5` model (because it will fail)
- and it is not possible to finetune a `ByT5` model (because when evaluating metrics it will use `tokenizer.decode()` that will fail).
We must find a solution. Do you have a proposal?<|||||>@Narsil - could you take a look once you're back?<|||||>Hi @piegu @stefan-it @patrickvonplaten ,
Do you know what you would expect to see instead ?
IMHO, failing here is perfectly correct as there is no correct way to represent byte (255) on its own.
If ByT5 generates invalid bytes, then the part that is supposed to recover a string should fail just like regular Python IMO, that's a model fail (it failed to generate bytes that correspond to a real string). Ignoring the error will just hide the problem under the rug and not really solve the problem. If you really don't care about failed generated bytes, then having a way to opt-in a different way of decoding even with malformed bytes makes sense. For the library, I am not sure it's a desired behavior by default as really we're ignoring a real model error.
If I take a less "simple" error where the model would generate `b'\x80\x81\x82\x83\x84\x85\x86\x87\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f\x90\x91\x92\x93\x94\x95\x96\x97\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7\xa8\xa9\xaa\xab\xac\xad\xae\xaf\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7\xe8\xe9\xea\xeb\xec\xed\xee\xef\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff'` then if we had errors=ignore by default, this would be `''` which would be very surprising to say the least as the model actually generated 128 tokens here...
Proposed modifications to make things easier for users:
- Add a new method specific to bytes tokenizers that would return raw `bytes` instead of `str`. This would not fail, and it would really be a user responsibility to use it (`decode_bytes`?). Here it would return `b'\xff chocolates chocol'`, leaving the user in charge to do something meaningful with it (a rough sketch is given after this list).
- Add a way to override to `errors=ignore` somehow directly from `decode`. It would add non trivial complexity to tokenizer code as `decode` is a generic method though.
- Add a way to override the `errors=ignore` through some attribute of the tokenizer. A bit better as it does not spill into generic code, but probably less discoverable (would need to add this to the doc in a very clear way).
- Switching back to `errors=ignore` but I really think it's a mistake in that case (just like having `errors=ignore` within Python itself would be pretty bad).
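A rough sketch of what the first option could look like (names are illustrative, nothing like this exists in the library yet):
```python
def decode_bytes(self, token_ids) -> bytes:
    """Like `decode`, but returns raw bytes and never fails on malformed UTF-8."""
    bstring = b""
    for token_id in token_ids:
        token = self._convert_id_to_token(int(token_id))
        if token in self.special_tokens_encoder or token in self.added_tokens_encoder:
            continue  # or append the special token's string form, depending on what we want
        bstring += bytes([ord(token)])
    return bstring
```
Callers that really want a string could then do `tokenizer.decode_bytes(ids).decode("utf-8", errors="ignore")` themselves, which makes the lossy step explicit and opt-in.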
My personal take on this is that the first solution seems the better, but happy to hear counterarguments or if I overlooked something.<|||||>Hi @Narsil.
I would start with your question `Do you know what you would expect to see instead ?`
My answer: just a model (ByT5 here) with an output that can be always decoded in order to use the model in production and to finetune it with new data.... as this is the case for all models of the HF model hub.
In fact, I read and understood your arguments about not using `errors=ignore` in the `.decode()` method (`bstring.decode("utf-8")` instead of `bstring.decode("utf-8", errors="ignore")`) but the problem is in fact about ByT5 outputs, not about its `.decode()` method.
In my opinion, the true question is: **Why the model ByT5 in the Transformers library outputs tokens that can not be decoded?**
When I use the model BERT or T5 to generate outputs, I do not have the generation of such tokens (for example, I do not have the generation of a token id that is out of the tokenizer ids list).
And if you have a look at the HF model hub, there are the Google ByT5 models and the finetuned ones:
- **Google ByT5 models**: how did Google train their ByT5 models? Indeed, at the end of each epoch, it is necessary to use a method `.decode()` in order to obtain the generated texts and thus compare them to the targeted ones. Google used `errors=ignore`?
- **ByT5 finetuned models**: which version of Transformers the ByT5 models finetuned in the HF model hub used? Certainly the version 4.9.2 as this [notebook](https://colab.research.google.com/drive/1syXmhEQ5s7C59zU8RtHVru0wAvMXTSQ8) but not the actual one. What does it mean about the **quality** of these finetuned ByT5 models (which were finetuned with `errors=ignore` ) and how to finetune now a ByT5 model with the actual Transformers version?
What do you think? Should we focus on the method `.decode()` or on debugging ByT5?<|||||>> Hi @Narsil.
>
> I would start with your question `Do you know what you would expect to see instead ?` My answer: just a model (ByT5 here) with an output that can be always decoded in order to use the model in production and to finetune it with new data.... as this is the case for all models of the HF model hub.
ByT5, unlike ALL other models (afaik), uses raw bytes, so it has no guarantee whatsoever to output a `string`.
It will however always produce `bytes`. (hence `decode_bytes` proposal).
>
> In fact, I read and understood your arguments about not using `errors=ignore` in the `.decode()` method (`bstring.decode("utf-8")` instead of `bstring.decode("utf-8", errors="ignore")`) but the problem is in fact about ByT5 outputs, not about its `.decode()` method.
>
> In my opinion, the true question is: **Why the model ByT5 in the Transformers library outputs tokens that can not be decoded?**
I have no idea, but it's expected that if it can produce non string data, it will (at some point at least).
>
> When I use the model BERT or T5 to generate outputs, I do not have the generation of such tokens (for example, I do not have the generation of a token id that is out of the tokenizer ids list).
>
> And if you have a loot at the HF model hub, there are the Google ByT5 models and the finetuned ones:
>
> * **Google ByT5 models**: how did Google train their ByT5 models? Indeed, at the end of each epoch, it is necessary to use a method `.decode()` in order to obtain the generated texts and thus compare them to the targeted ones. Google used `errors=ignore`?
Probably differently. All `string` objects can be cast to `bytes` (but not the other way around). So checking two `bytes` objects was probably the way it was done, as this is always possible: take the generated output, convert to bytes, take the expected string, convert to bytes, and compare them.
>
> * **ByT5 finetuned models**: which version of Transformers the ByT5 models finetuned in the HF model hub used? Certainly the version 4.9.2 as this [notebook](https://colab.research.google.com/drive/1syXmhEQ5s7C59zU8RtHVru0wAvMXTSQ8) but not the actual one. What does it mean about the **quality** of these finetuned ByT5 models (which were finetuned with `errors=ignore` ) and how to finetune now a ByT5 model with the actual Transformers version?
>
>
> What do you think? Should we focus on the method `.decode()` or on debugging ByT5?
I think we should accept that `ByT5` is different from other models, propose `decode_bytes` method and let users try to do things with it.
We could also break standard API and `decode` would return `bytes` instead of `string` but that would break many things, the automated tests at the very least.<|||||>To weigh in on this discussion, I wanted to reiterate the points raised by @piegu:
> it is not possible anymore:
>
> * to use for example `model.generate()` with a `ByT5` model (because it will fail)
> * and it is not possible to finetune a `ByT5` model (because when evaluating metrics it will use `tokenizer.decode()` that will fail).
This means that it would always be required to overwrite the `evaluate` function when using `Seq2SeqTrainer` in combination with `predict_with_generate`, unless the `decode_bytes` option is directly addressed in the `Trainer`/`generate` implementation as well (creating additional overhead).
Since it is required to pass a `Tokenizer` in any case, I would prefer the option to choose directly through the tokenizer whether to ignore errors or not. I agree that it would have to be quite visible, but even for the T5 repository's implementation, this behavior is not very obvious ([reference issue](https://github.com/google-research/byt5/issues/11)), but implemented with ignoring the errors by default.
As to this point:
> So checking two bytes objects was probably the way it was done as this is always possible. Take the generated output, convert to bytes, take expected string, convert to bytes and compare the.
I don't see any indication of the evaluation on `bytes` objects instead of `string`, as there seem to be no modifications on top of the vanilla T5 modeling from their own repository.<|||||>Thanks a lot for the nice repro @stefan-it!
To be honest, I think we should just add `errors="ignore"` for the following reasons:
- One of the philosophies of `transformers` is to stay as close as possible to the original code
- If google added `errors="ignore"`, it was probably intended and is therefore not a bug IMO
- We broke backwards compatibility between 4.9 and 4.11
What do you think @Narsil ?
Also cc @LysandreJik here <|||||>If google did it, then let's do it.<|||||>I have fine-tuned a model for orthographic modernization in Spanish with a prior version of transformers. In many cases, the predicted token is an accented letter such as `"ó"`. When these tokens appear in the list `tokens` of the function `.convert_tokens_to_string()`, the addition of the `errors="ignore"` makes the conversion miss one token, producing ill-formed words that were correctly predicted by the model. The reason, I believe, is that sometimes predicted tokens, as they come as strings, can be already encoded into more than 1 byte, and forcing them to be one byte triggers a `UnicodeDecodeError` as expected, which seems to be fixed by the `errors="ignore"` addition. However, I think the problem comes from the [line above](https://github.com/patrickvonplaten/transformers/blob/cba5330dc6825b6168fe043cb4b7a6f9694f78b6/src/transformers/models/byt5/tokenization_byt5.py#L238):
```python
tok_string = bytes([ord(token)])
```
If token was `"ó"`, that line of code produces `b'\xf3'`, which cannot be converted into a string and gets caught by the ignoring of encoding errors. But if I change that line to the next (which is equivalent to [`seqio`'s implementation](https://github.com/google/seqio/blob/main/seqio/vocabularies.py#L389))
```python
tok_string = bytes(token, encoding="utf8")
```
Now the result is `b'\xc3\xb3'`, which can be safely decoded back into a string
```python
>>> b'\xc3\xb3'.decode("utf8")
'ó'
```
I have created a separate issue [here](https://github.com/huggingface/transformers/issues/14461), but have not sent a PR because this change makes tests fail. I also wonder if models trained with `errors="ignore"` are somehow invalid. It could also be that my model is wrong somehow? I don't know how it's possible that in the list of tokens there are strings that decode to more than 1 byte.<|||||>Hi @versae
Interesting.
I think we should stay away from `"ó"` since, as you mentioned, it has multiple encodings/representations.
The way I understand it, you are referring to a token (so a number between 0 and 255). That's what tokens should be (even if in the linked function they are strings, they should be a 1-1 mapping of numbers).
So the model generated, Token [243] in your case, which is `unicode codepoint` (`ó`). Let's keep calling it 243 for simplicity.
243 is not valid utf-8 on its own: you need more bytes to form something valid. Your proposed change would work, but only because you're using the unicode codepoint as real unicode, where in fact it's just a number (represented as a unicode codepoint, but that's because of previous internals of the base class it inherits from, not by design). So I don't think the fix you're proposing is valid.
If you want the model to output "ó" it should instead output 2 different tokens, [195(c3), 179(b3)] and that should yield the desired result. Ultimately this generation should depend on how this model was trained, which I can't say for sure. But ideally, when outputting 243, it should never output EOS afterwards and keep on generating. If you are doing an early stopping then, you are indeed having an invalid byte sequence, and we should either ignore or crash, but we shouldn't change the meaning (output is likely to be garbage if we did it).
Does that make sense ?
Note: I tried to keep things simple and keep everything in 0-255 range, but ByT5 has 3 tokens (0, 1, 2) which are special, so the 0-255 range is shifted to 3-258 actually.
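A quick way to see this in plain Python:
```python
list("ó".encode("utf-8"))             # [195, 179] -> "ó" really is two bytes / two tokens
[b + 3 for b in "ó".encode("utf-8")]  # [198, 182] once ByT5's 3 special tokens are accounted for
bytes([243]).decode("utf-8")          # raises UnicodeDecodeError: 0xf3 is not valid UTF-8 on its own
```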
<|||||>Hi @Narsil, thanks for the detailed explanation. I trained the model using Tensorflow on Oct 15th, which according to PyPI that should mean transformers was in its 4.11.3 version. I used `source,target` pairs such as (simplifying) `"precision","precisión"` so the model could learn modern Spanish orthography.
I understand the issue with the token 243. You are absolutely right. If I print the token ids generated you can see id 246 (246 - 3 = 243), which correspond to "ó". Why would the model learn to predict a token id like that?
```python
In [1]: from transformers import pipeline
In [2]: modernize = pipeline("text2text-generation", "versae/modernisa-pre")
In [3]: modernize("presuncion")
_convert_id_to_token: index, token 115 p
_convert_id_to_token: index, token 117 r
_convert_id_to_token: index, token 104 e
_convert_id_to_token: index, token 118 s
_convert_id_to_token: index, token 120 u
_convert_id_to_token: index, token 113 n
_convert_id_to_token: index, token 102 c
_convert_id_to_token: index, token 108 i
_convert_id_to_token: index, token 246 ó
_convert_id_to_token: index, token 113 n
convert_ids_to_tokens ['p', 'r', 'e', 's', 'u', 'n', 'c', 'i', 'ó', 'n']
convert_tokens_to_string: tokens ['p', 'r', 'e', 's', 'u', 'n', 'c', 'i', 'ó', 'n']
Out[3]: [{'generated_text': 'presuncin'}]
```
It happens with any accented letter. How could the model correctly predict 243 if it's never seen such a token in training? And how is it even possible that for accented letters the correct token is predicted although it is not possible to decode it in UTF-8? It seems I'll have to re-train I guess. Sorry for derailing the original issue topic, happy to take it somewhere else.
Cheers.<|||||>@versae
Is it possible that you fed the models with unicode codepoints during training and not utf-8 encoded bytes ?
This looks like it, but I can't be sure. Since I think most accented spanish letters are still below 255 you might not have encountered any issue and been able to train your model just fine.
Just to make sure I tested, that the `byt5` tokenizer would encode `presunción` with the correct encoding:
```python
tokenizer = AutoTokenizer.from_pretrained('google/byt5-small')
tokenizer.encode('presunción')
>>> [115, 117, 104, 118, 120, 113, 102, 108, 198, 182, 113, 1]
>>> # 'ó' -> UTF-8 bytes (195, 179), shifted by the 3 special tokens to (198, 182), so it works
```
If that's the case, then the good news is you don't necessarily need to retrain the model but maybe you need to override this function to change with your fix. Something along the way of :
```python
def convert_tokens_to_string(self, tokens):
"""Converts a sequence of tokens (string) in a single string."""
bstring = b""
for token in tokens:
if token in self.special_tokens_decoder:
tok_string = self.special_tokens_decoder[token].encode("utf-8")
elif token in self.added_tokens_decoder:
tok_string = self.special_tokens_decoder[token].encode("utf-8")
elif token in self.special_tokens_encoder:
tok_string = token.encode("utf-8")
elif token in self.added_tokens_encoder:
tok_string = token.encode("utf-8")
else:
tok_string = token.encode("utf-8")
bstring += tok_string
string = bstring.decode("utf-8", errors="ignore")
return string
tokenizer.convert_tokens_to_string = convert_tokens_to_string
```
Keep in mind:
1- This is a dirty hack
2- It might not be the core of the issue (it could be mistrained model, or some other error at training time). If it's not the core issue, this fix might just be hiding the true culprit and leading to more errors downstream.
3- You have now effectively broken your tokenizer since it won't encode the same things it decodes
But it should do the job for your specific purpose.
If you could also provide a link/script to how you trained it might provide more insights into what went wrong.<|||||>Thank you so much, @Narsil.
I trained models using the Trainer on [this dataset](https://huggingface.co/datasets/versae/modernisa) but their performance was really bad. Then I tested a small library called [SimpleT5](https://github.com/Shivanandroy/simpleT5/blob/main/simplet5/simplet5.py) that conveniently wraps Huggingface and PyTorch Lighting for T5 fine-tuning in like 3 lines of code and it worked like a charm. Or that's what I thought.
I'll use the hack for now and probably train new models to see if I'm able to get not terrible results using the Trainer.<|||||>@versae
For information, we wanted to make sure everything is the same as the original and recompared everything to make sure.
FYI, there was a period in august where transformers had such a bug on `transformers`: https://github.com/huggingface/transformers/pull/13119 Might be linked to the issue you're having (if someone trained with the bug, then it could explain behavior you're seeing).<|||||>Thanks, @Narsil.
It seems to be the case! I re-trained the models and they work perfectly fine now, and with good BLEU and CER scores :) |
transformers | 13,778 | closed | Add TFViTModel | # What does this PR do?
Add `TFViTModel`. This is also a prerequisite task before we can add `TFVisionEncoderDecoderModel` as a continuation of `FlaxVisionEncoderDecoderModel` (#13359)
## Who can review?
| 09-28-2021 17:02:26 | 09-28-2021 17:02:26 | Looking forward to it @ydshieh! Let us know if you need anything to push this to the finish line.<|||||>Hi, @LysandreJik, just an update:
Most of the necessary work is done, including the test.
I did have a problem with the input format between `torch.nn.Conv2d` and `tf.keras.layers.Conv2D` (channel first/last), which leads to some weight loading (PT->TF) issue.
But I found the usage of `tf.nn.conv2d` in `TFDebertaV2`, which can fix the issue I think. If everything goes well, we will have vision for HuggingFace TF's models soon :)<|||||>Hi, @LysandreJik , @patrickvonplaten , @Rocketknight1 , @NielsRogge
This PR is ready for review. I have run all the tests locally (including slow ones). The conversion `PT->TF` works well :)
(**Update**: there are still 2 slow tests in `test_modeling_tf_common.py` to be fixed)
One thing to keep in mind: This method
https://github.com/huggingface/transformers/blob/c9fbf7dc1969d96dcf7bf550b70b7873a8676207/src/transformers/models/vit/modeling_tf_vit.py#L88
uses `tf.image.resize`'s `bicubic` method, which has a hard-coded constant `alpha=-0.5` in TensorFlow, but `-0.75` in `torch.nn.functional.interpolate`. Therefore, for images with a size different from `config.image_size`, we won't get the same results between the PT/TF versions (a minimal repro is sketched after the references below).
[References]
https://pytorch.org/docs/stable/_modules/torch/nn/functional.html#interpolate
https://github.com/tensorflow/tensorflow/blob/a214ba286151b3d6347a67302b1513360f1f727e/tensorflow/core/kernels/image/resize_bicubic_op.cc#L54
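If anyone wants to see the difference directly, a minimal comparison (my own repro, not the model code) looks like:
```python
import numpy as np
import tensorflow as tf
import torch

x = np.random.rand(1, 1, 8, 8).astype(np.float32)  # NCHW for torch
pt_out = torch.nn.functional.interpolate(torch.from_numpy(x), size=(16, 16), mode="bicubic", align_corners=False)
tf_out = tf.image.resize(tf.constant(x.transpose(0, 2, 3, 1)), size=(16, 16), method="bicubic")  # NHWC for TF
print(np.abs(pt_out.numpy().transpose(0, 2, 3, 1) - tf_out.numpy()).max())  # small but non-zero gap
```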
<|||||>@ydshieh Hey! Overall I like the code, and this is a great PR. I did encounter one issue in testing, though - I ran a notebook to compare output values between the TF and PT models and I found quite a large divergence in some output values, although the mean divergence was small. However, even a relatively generous test like `np.allclose(tf_final_hidden_out, torch_final_hidden_out, rtol=1e-01, atol=1e-02)` failed for me. I made a Colab with the tests I ran [here](https://colab.research.google.com/drive/1D-LFj27WH7TzpEnFBhN_ywxfpnr4RpRp?usp=sharing). Is it possible there are small differences in one of the layer implementations that could cause this?
The good news, though, is that this is the only major problem I could find. Saving and loading the model worked perfectly, and the outputs were identical, which means we should have no problems making TF checkpoints once this is resolved.<|||||>@Rocketknight1 Hi, ok, I will check, thanks. Sorry about this, I thought I have run necessary tests (pt ->tf), but apparently there is something wrong.<|||||>@ydshieh Don't worry, my PRs usually have way worse problems than this, lol<|||||>Hi, @Rocketknight1 ,
(Forget my previous message, I made some stupid statements ...)
I run your test script locally on my laptop, but I always get the same results, like
```
True
0.0003039837
1.541685e-06
0.6905535
```
Currently I am not sure where the issue is - I will try to run on a Linux VM.
On a Linux VM, still no issue. Do you test with GPU? I don't have GPU on my laptop, so the tests are all on CPU.
I can try to run the tests on Colab GPU.
<|||||>@ydshieh I was testing with GPU, yes. I tried with CPU and the problem disappeared, so this seems to be GPU-specific.<|||||>Actually, let me investigate a little more - I just realized I was running the torch model on CPU. Maybe this is just a CPU - GPU difference!<|||||>>
>
> Actually, let me investigate a little more - I just realized I was running the torch model on CPU. Maybe this is just a CPU - GPU difference!
No problem. Thank you :)<|||||>Yes, I confirmed I'm seeing significant differences between the output on CPU and GPU for Torch. Most likely the problem is just caused by CPU/GPU differences instead of Torch/TF differences. I'll try to set up a GPU environment with both GPU-enabled Torch and TF tomorrow and confirm that the results are close as expected. Sorry for worrying you!<|||||>I remembered seeing something in `torch.nn.Conv2d` like
```
In some circumstances when given tensors on a CUDA device and using CuDNN,
this operator may select a nondeterministic algorithm to increase performance.
If this is undesirable, you can try to make the operation deterministic
(potentially at a performance cost) by setting `torch.backends.cudnn.deterministic = True.`
See Reproducibility for more information.
```
Maybe this is somehow related. Once we identify the cause, it would be a good idea to put this knowledge somewhere in `transformers` :)
See
https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html<|||||>I finished testing and there's still a similar difference between the torch-GPU outputs and the TF-GPU outputs, but, like you said, that could possibly be caused by using different algorithms. I tried playing around with `torch.cudnn.benchmark` but the problem persisted.
I did confirm that torch + CPU and TF + CPU give identical outputs - this means your code is very likely bug-free. I'll ask around and see if anyone on the team has any suggestions, but maybe we should just merge as-is.<|||||>@ydshieh If you want, you can try to investigate to find the single op that has differing outputs between the two libraries. However, I don't think it's totally necessary since the outputs are mostly very close, so if you're not interested we can probably merge as-is. Up to you!<|||||>@Rocketknight1 , thank you for the effort!
Let's wait a bit before merging - I would like to see if I am able to avoid the transpose in the call method first.
Is this OK for you?
About the GPU result difference, I think I won't look at it at this moment.
(One of the main reason is that it's kind difficult for me - I don't have any GPU machine. Working on Colab GPU for debugging this is not very handy.)<|||||>That's perfect, yes. Thanks for your contributions, there's no rush with finishing this PR!<|||||>@Rocketknight1 I have successfully made conv2D weight loading PT <-> TF using their own native shape, and therefore there is no need to transpose the conv. kernel in `TFPatchEmbeddings`.
If this new change is OK for Hugging Face, I will clean up this block
https://github.com/huggingface/transformers/blob/69900345c668defbb34c9071eee6c90d722c7e6e/src/transformers/models/vit/modeling_tf_vit.py#L202-L205
It might be a good idea for a second person to double check the change added to `modeling_tf_pytorch_utils.py`.
Thanks :)
<|||||>Well, let me try if I can use a `tf.keras.layers.Conv2D` with `use_bias=True` directly and still make the weight loading work :) <|||||>Use tf.keras.layers.Conv2D instead of tf.nn.conv2d, and it works well with PT/TF weight loading.<|||||>@ydshieh Taking a look now! That change to `modeling_tf_pytorch_utils.py` has generated some lively discussion in the Hugging Face Slack, lol.
In general, everyone agrees that we shouldn't need to do a `transpose()` in the `call()` method, so we need some way of transposing the weights at convert time, but `_get_tf_conv2d_kernel_names` might cause problems. Right now we have a heuristic like [this](https://github.com/huggingface/transformers/blob/2024faf1713272464336ad97f371787b24cb3c53/src/transformers/modeling_tf_pytorch_utils.py#L60) which we use for transposing certain weights already. Could we possibly just alter that heuristic slightly?
@sgugger suggests that function could detect convolution weights by their name/rank (i.e. that they are 4-dimensional tensors and contain 'kernel' in their name). Then we could return an enum from that function (NO / SIMPLE / CONV), and then use that enum in `load_pytorch_weights_in_tf2_model` to decide which weights to transpose. What do you think?<|||||>Hi @Rocketknight1
Sound like a good way, I will try it :)
(A bit overthinking previously)<|||||>@ydshieh I agree that the heuristic feels a bit dangerous, but we have plenty of tests, so we should notice if it causes any problems!<|||||>Hi, I made the suggested change. It is not completely the same as the suggestion, because
https://github.com/huggingface/transformers/blob/00c24707b54264fd2d788802d19681eda8e539cc/src/transformers/modeling_tf_pytorch_utils.py#L30
only get `tf_name` without the weight array, so I am not able to determine if a kernel is a Conv2D by checking the shape inside this method. The check is done inside:
in `load_tf2_weights_in_pytorch_model`
https://github.com/huggingface/transformers/blob/00c24707b54264fd2d788802d19681eda8e539cc/src/transformers/modeling_tf_pytorch_utils.py#L189 and
in `load_tf2_weights_in_pytorch_model`
https://github.com/huggingface/transformers/blob/00c24707b54264fd2d788802d19681eda8e539cc/src/transformers/modeling_tf_pytorch_utils.py#L365
(However, we lose some of the tf weight name processing that is done inside `convert_tf_weight_name_to_pt_weight_name`.)
The test `TFViTModelTest::test_pt_tf_model_equivalence` passed, so I think it works fine :)
<|||||>You raise a good point @ydshieh I think it might be cleaner if we add the array as an argument to `convert_tf_weight_name_to_pt_weight_name` to only do once the right transposition instead of combining two different ones. This would be more readable, thus more bulletproof.<|||||>@sgugger It looks like `convert_tf_weight_name_to_pt_weight_name` is public, I am afraid to change it.
If Hugging Face would accept a change to this method - making a new required argument, I can do it in this PR.
(making this argument optional is not very logical, I guess). <|||||>It's meant to be internal, but if afraid of breaking anything, you can make that new argument optional and only use it if it's not None.<|||||>Well, I just checked after my message, and it is only used in 2 methods about weight loading :). I will change it.<|||||>I moved the logic of conv2D layer detection into `convert_tf_weight_name_to_pt_weight_name`.
(I found the same check is done for flax <-> pt)<|||||>Hi, is the latest change about Conv2D/Transpose in `modeling_tf_pytorch_utils.py` OK?
Let me know if there is anything to address in this PR, thank you.<|||||>Failing test is unrelated. Like the new design a lot! Thanks a lot for all the work!<|||||>Let me fix some (minor, I think) issues after applying the suggestions.<|||||>Failed tests are unrelated. Is it necessary or preferable to rebase onto master?
Thank you all for the efforts reviewing this PR!
P.S. I am curious about why new commits could have more unrelated failed tests than previous commits - would be great if some of you can have a short explanation to me (for learning purpose), thanks.<|||||>Hi @ydshieh, I believe most of the errors here are unrelated to your PR: they're related to a recent change in `datasets`. For audio datasets, it now takes care of the audio decoding, but means that you'll need to have `librosa` installed in order to do so.
We updated our tests to reflect this change, but since your PR has not been rebased on `master` for a while, it doesn't use the updated code. Rebasing on `master` would probably clear all of these issues :)<|||||>@LysandreJik Thanks!
After rebasing on master, there is an issue I need to address (I think this is a recent test @patrickvonplaten added).
```
FAILED tests/test_modeling_vit.py::ViTModelTest::test_pt_tf_model_equivalence
```<|||||>@patrickvonplaten Would be great if you can check (when you have time) the modification done in `test_modeling_common.py` in https://github.com/huggingface/transformers/pull/13778/commits/0e546f6b1e2de5877ffd5c84e71d8d4de178d440
This is necessary to fix the issue appeared previously
(https://app.circleci.com/pipelines/github/huggingface/transformers/29656/workflows/d083d4db-6012-47af-b741-51f792b5a892/jobs/298830)
```
> ???
E tensorflow.python.framework.errors_impl.InvalidArgumentError: cannot compute Conv2D as input #1(zero-based) was expected to be a int32 tensor but is a float tensor [Op:Conv2D]
```<|||||>After rebase on master, it's green again :-)<|||||>Thanks again for all your work on this!<|||||>@sgugger There is no tf checkpoint for ViT. The test uses from_pt=True, but usage examples in model script don't specify this currently.
I can upload a checkpoint and make necessary changes in another PR. But maybe it's better to have TF checkpoint under HF rather than under my repo.
Sorry about this.<|||||>We will update TF checkpoints in the repos, then you can make another PR to adapt the tests :-) |
transformers | 13,777 | closed | [Wav2Vec2] Better error message | # What does this PR do?
Better error message when a model with an LM head is initialized from a pretrained model.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 09-28-2021 14:33:47 | 09-28-2021 14:33:47 | |
transformers | 13,776 | closed | LED for Seq2Seq output shape mismatch between tensorflow and pytorch | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.4.2
- Platform: Linux-5.4.0-48-generic-x86_64-with-Ubuntu-20.04-focal
- Python version: 3.6.13
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): 2.3.0 (True)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): 'allenai/led-base-16384' via AutoModelForSeq2SeqLM
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run the tensorflow version of a simple test script.
```python
from transformers import TFAutoModelForSeq2SeqLM
preloaded_name = 'allenai/led-base-16384'
led = TFAutoModelForSeq2SeqLM.from_pretrained(preloaded_name)
"""
In this example, we have the following shapes:
input_length --> 2234
output_length --> 70
"""
inputs = {...}
print('Inputs...')
for key, value in inputs.items():
print('Key: {0} - Value: {1}'.format(key, value.shape))
"""
Prints:
input_ids - Value: (1, 2234)
attention_mask - Value: (1, 2234)
global_attention_mask - Value: (1, 2234)
labels - Value: (1, 70)
"""
led_output = led(
input_ids=inputs['input_ids'],
attention_mask=inputs['attention_mask'],
labels=inputs['labels'],
global_attention_mask=inputs['global_attention_mask'] if 'global_attention_mask' in inputs else None,
training=False, use_cache=False, return_dict=True, output_hidden_states=True)
print('Outputs...')
for key, value in led_output.items():
if type(value) != tuple:
print('Key: {0} - Value: {1}'.format(key, value.shape))
else:
print('Key: {0} - Length: {1} - value shapes: {2}'.format(key, len(value), [item.shape for item in value]))
"""
Prints:
loss - Value: (17,)
logits - Value: (1, 70, 50265)
encoder_last_hidden_state - Value: (1, 70, 768)
encoder_hidden_states - Length 7 - value shapes : [TensorShape([1, 2234, 768]), ....]
decoder_hidden_states - Length: 7 - value shapes: [TensorShape([1, 70, 768]), ....]
"""
```
2. Run the torch version of the same script.
```python
from transformers import AutoModelForSeq2SeqLM
import torch
preloaded_name = 'allenai/led-base-16384'
led = AutoModelForSeq2SeqLM.from_pretrained(preloaded_name)
"""
#NOTE: same inputs as in the tensorflow example!
In this example, we have the following shapes:
input_length --> 2234
output_length --> 70
"""
inputs = {...}
print('Inputs...')
for key, value in inputs.items():
print('Key: {0} - Value: {1}'.format(key, value.shape))
"""
Prints:
input_ids - Value: (1, 2234)
attention_mask - Value: (1, 2234)
global_attention_mask - Value: (1, 2234)
labels - Value: (1, 70)
"""
led_output = led(
input_ids=inputs['input_ids'],
attention_mask=inputs['attention_mask'],
labels=inputs['labels'],
global_attention_mask=inputs['global_attention_mask'] if 'global_attention_mask' in inputs else None,
use_cache=False, return_dict=True, output_hidden_states=True)
print('Outputs...')
for key, value in led_output.items():
if type(value) != tuple:
print('Key: {0} - Value: {1}'.format(key, value.shape))
else:
print('Key: {0} - Length: {1} - value shapes: {2}'.format(key, len(value), [item.shape for item in value]))
"""
Prints:
loss - Value: torch.Size([])
logits - Value: torch.Size([1, 70, 50265])
encoder_last_hidden_state - Value: torch.Size([1, 2234, 768])
encoder_hidden_states - Length 7 - value shapes : [torch.Size([1, 3072, 768]), ....]
decoder_hidden_states - Length: 7 - value shapes: [torch.Size([1, 70, 768]), ....]
"""
```
## Expected behavior
If we compare both LED outputs (tensorflow and pytorch), there are some evident shape mismatches.
1. loss: loss seems to be already averaged (not a big problem)
2. encoder_last_hidden_state: it seems to be completely wrong in tensorflow version (it outputs a sequence of 70 instead of 2234 values)
3. encoder_hidden_states: all encoder hidden states from torch model have different sequence length compared to the ones given by the tensorflow model.
We would expect both models (tensorflow and pytorch) to output the same values.
| 09-28-2021 13:58:41 | 09-28-2021 13:58:41 | @Rocketknight1 - do you have some bandwidth for taking a look? Otherwise, I can take a look as well :-)<|||||>I'm investigating this today, will update!<|||||>So, firstly I've reproduced the issues above. They seem to stem from multiple issues within the model code.
1) States in PyTorch are expanded to 3072 because the model uses the `_pad_to_window_size` method to expand input sequences to the next largest multiple of `config.window_size`, which in this case is 1024 (3072 == 1024 * 3).
2) This is supposed to be reversed by the line `hidden_states = hidden_states[:, :-padding_len]`, but this doesn't seem to be working in PyTorch. In this case, I think the TF implementation is correct, and the output shape should be (1, 2234, 768)
3) The Tensorflow implementation of Layerdrop uses `random.uniform` and so will not survive compilation by TF (I don't know if it's the part of the problem here, but it is **a** problem)
4) `encoder_last_hidden_state` in the TF model seems to contain the decoder output states instead, I'm working on figuring that one out.
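To make points (1) and (2) concrete, here is a rough standalone sketch of the pad/un-pad bookkeeping with the shapes from this issue (assumed sizes only, not the actual LED code):
```python
import torch

# toy illustration of the pad-to-window-size / un-pad bookkeeping (assumed shapes, not LED internals)
seq_len, window, hidden = 2234, 1024, 768
padding_len = (window - seq_len % window) % window            # 838, so the padded length is 3072
hidden_states = torch.zeros(1, seq_len, hidden)
hidden_states = torch.nn.functional.pad(hidden_states, (0, 0, 0, padding_len))
print(hidden_states.shape)                                    # torch.Size([1, 3072, 768])
if padding_len > 0:                                           # the un-padding step that should restore (1, 2234, 768)
    hidden_states = hidden_states[:, :-padding_len]
print(hidden_states.shape)                                    # torch.Size([1, 2234, 768])
```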
Hopefully these are just isolated bugs and can be resolved, but I'm worried that there seem to be separate, sizeable bugs in both frameworks' implementations. Will keep investigating!<|||||>@federicoruggeri Can you try the PR and confirm it's working for you now? You can install from the branch with the fix by doing `pip install git+https://github.com/huggingface/transformers.git@fix_led`<|||||>@Rocketknight1 I've tested the above scripts after mentioned PR and shapes are now fine!
Many thanks for the quick fix!<|||||>@federicoruggeri No problem! I'm going to reopen the issue though - it's linked to the PR, so when that PR is merged that will auto-close the issue and keep everything in sync for us. |
transformers | 13,775 | closed | Pretrained Wav2Vec2 Model large robust does not load | @patrickvonplaten
The behavior I notice is shown below. Any help is much appreciated!
```
Python 3.7.10
>>> from transformers import Wav2Vec2ForCTC
>>> mdl = Wav2Vec2ForCTC.from_pretrained('facebook/wav2vec2-large-robust')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/hltcoe/mwiesner/.conda/envs/nnet_pytorch_env/lib/python3.7/site-packages/transformers/modeling_utils.py", line 1325, in from_pretrained
model = cls(config, *model_args, **model_kwargs)
File "/home/hltcoe/mwiesner/.conda/envs/nnet_pytorch_env/lib/python3.7/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1412, in __init__
self.lm_head = nn.Linear(config.hidden_size, config.vocab_size)
File "/home/hltcoe/mwiesner/.conda/envs/nnet_pytorch_env/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 78, in __init__
self.weight = Parameter(torch.Tensor(out_features, in_features))
TypeError: new() received an invalid combination of arguments - got (NoneType, int), but expected one of:
* (*, torch.device device)
didn't match because some of the arguments have invalid types: (NoneType, int)
* (torch.Storage storage)
* (Tensor other)
* (tuple of ints size, *, torch.device device)
 * (object data, *, torch.device device)
```
 | 09-28-2021 13:54:30 | 09-28-2021 13:54:30 | Hey @m-wiesner,
`'facebook/wav2vec2-large-robust'` is just a pretrained speech recognition model that does not have an LM head defined. So the model should be loaded as follows:
```python
from transformers import Wav2Vec2ForPreTraining
mdl = Wav2Vec2ForPreTraining.from_pretrained('facebook/wav2vec2-large-robust')
```
If you would like to use the model to fine-tune it on a downstream task you have to define the vocabulary size first, *e.g.*:
```python
from transformers import Wav2Vec2ForCTC
mdl = Wav2Vec2ForCTC.from_pretrained('facebook/wav2vec2-large-robust', vocab_size=32)
```
which will then initialize a random linear layer as the lm head that needs to be fine-tuned.
But in any case the error message should have been better here! Let me correct this :-)
<|||||>🎉<|||||>Awesome. Most of the other models have the LM head attached and I've just been discarding that. Would it make sense to update the documentation to reflect the discrepancy between this and how the other wav2vec2 models are loaded?
Thanks for the rapid response!<|||||>Pretrained wav2vec2 models ideally should never have a vocab_size defined in the config. We can't change this anymore though because of backward compatibility. |
transformers | 13,774 | closed | Empty prompts failing in dev sources | It looks like #13308, which otherwise is really inspiring around engaging organization of the code, also introduced a bug around completing empty prompts:
```
transformers.pipeline('text-generation')('')
```
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/user/.local/lib/python3.8/site-packages/transformers/pipelines/text_generation.py", line 150, in __call__
return super().__call__(text_inputs, **kwargs)
File "/home/user/.local/lib/python3.8/site-packages/transformers/pipelines/base.py", line 915, in __call__
return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
File "/home/user/.local/lib/python3.8/site-packages/transformers/pipelines/base.py", line 922, in run_single
model_outputs = self.forward(model_inputs, **forward_params)
File "/home/user/.local/lib/python3.8/site-packages/transformers/pipelines/base.py", line 871, in forward
model_outputs = self._forward(model_inputs, **forward_params)
File "/home/user/.local/lib/python3.8/site-packages/transformers/pipelines/text_generation.py", line 162, in _forward
generated_sequence = self.model.generate(input_ids=input_ids, **generate_kwargs) # BS x SL
File "/home/user/.local/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
return func(*args, **kwargs)
File "/home/user/.local/lib/python3.8/site-packages/transformers/generation_utils.py", line 1016, in generate
return self.sample(
File "/home/user/.local/lib/python3.8/site-packages/transformers/generation_utils.py", line 1529, in sample
outputs = self(
File "/home/user/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/user/.local/lib/python3.8/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 949, in forward
transformer_outputs = self.transformer(
File "/home/user/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/user/.local/lib/python3.8/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 673, in forward
input_ids = input_ids.view(-1, input_shape[-1])
RuntimeError: cannot reshape tensor of 0 elements into shape [-1, 0] because the unspecified dimension size -1 can be any value and is ambiguous
``` | 09-28-2021 13:02:46 | 09-28-2021 13:02:46 | cc @Narsil <|||||>Added the fix (I had forgotten to re-add some existing code, and took the opportunity to add a test). |
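For context, the root cause visible in the traceback is that an empty prompt tokenizes to zero tokens, so the model receives a `(1, 0)` tensor it cannot reshape; a quick sketch:
```python
from transformers import GPT2Tokenizer

# quick illustration of the failure mode: "" produces no tokens at all
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
print(tokenizer("").input_ids)  # [] -> the model ends up seeing an input of shape (1, 0)
```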
transformers | 13,773 | closed | Keras callback to push to hub each epoch, or after N steps | null | 09-28-2021 12:46:00 | 09-28-2021 12:46:00 | 🚀🎉 |
transformers | 13,772 | closed | HuggingFace Model Hub (summarisation) - models not working locally (404 not found) | ## Environment info
- `transformers` version: 4.10.0
- Platform: Linux-5.11.0-36-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.6.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
-->
## Information
I am using a text summarisation model from HuggingFace Model Hub. However, this issue occurs regardless of what model I use.
The problem arises when using any text summarisation model from HuggingFace Model Hub locally.
The task I am working on is dialogue summarisation.
## To reproduce
Steps to reproduce the behavior:
1. Run this code locally with the environment I specified in the beginning:
```
from transformers import pipeline
summarizer = pipeline("summarization", model="lidiya/bart-large-xsum-samsum")
conversation = '''Jeff: Can I train a 🤗 Transformers model on Amazon SageMaker?
Philipp: Sure you can use the new Hugging Face Deep Learning Container.
Jeff: ok.
Jeff: and how can I get started?
Jeff: where can I find documentation?
Philipp: ok, ok you can find everything here. https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face '''
print(summarizer(conversation))
```
2. Output is:
```
2021-09-28 14:20:06.034022: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2021-09-28 14:20:06.034044: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
404 Client Error: Not Found for url: https://huggingface.co/lidiya/bart-large-xsum-samsum/resolve/main/tf_model.h5
404 Client Error: Not Found for url: https://huggingface.co/lidiya/bart-large-xsum-samsum/resolve/main/tf_model.h5
Traceback (most recent call last):
File "test.py", line 2, in <module>
summarizer = pipeline("summarization", model="lidiya/bart-large-xsum-samsum")
File "/home/teodor/Desktop/test/env/lib/python3.8/site-packages/transformers/pipelines/__init__.py", line 429, in pipeline
framework, model = infer_framework_load_model(
File "/home/teodor/Desktop/test/env/lib/python3.8/site-packages/transformers/pipelines/base.py", line 145, in infer_framework_load_model
raise ValueError(f"Could not load model {model} with any of the following classes: {class_tuple}.")
ValueError: Could not load model lidiya/bart-large-xsum-samsum with any of the following classes: (<class 'transformers.models.auto.modeling_tf_auto.TFAutoModelForSeq2SeqLM'>, <class 'transformers.models.bart.modeling_tf_bart.TFBartForConditionalGeneration'>).
```
## Expected behavior
On Google Colab and on the Hugging Face website, a string is output containing the summary of the input text:
"Jeff wants to train a Transformers model on Amazon SageMaker. He can use the new Hugging Face Deep Learning Container. The documentation is available on HuggingFace.co and on the blog, Jeff can find it here. . . Jeff can train a model on Huging Face.co."
Why is it not working locally? Any help would be much appreciated. I've been trying to solve this problem for the past few days but I couldn't find a working solution so far. Thank you! | 09-28-2021 11:27:23 | 09-28-2021 11:27:23 | It seems that PyTorch is not installed; you should install PyTorch to be able to use this model.<|||||>@patil-suraj True, but I have TensorFlow 2.6 installed and this works as an equivalent. If I had neither TensorFlow nor PyTorch installed, in the terminal it said: "install either Tensorflow or PyTorch". So the problem is somewhere else. Did you replicate my code and it worked for you with PyTorch?<|||||>There is no TF checkpoint for `lidiya/bart-large-xsum-samsum`, so we should pass `from_pt=True` to `pipeline` so it'll convert the PT checkpoint to TensorFlow. But that conversion also requires PyTorch to be able to read the `state_dict`, so you should install torch.<|||||>@patil-suraj Makes sense, I will try that right now<|||||>@patil-suraj "from_pt" is an unexpected keyword argument when passed to "pipeline". Am I doing something wrong?<|||||>aah, sorry. It should be passed as
```python
pipeline("...", model_kwargs={"from_pt": True})
```
This is only required if you want to use TensorFlow; if you have PyTorch installed it should just work.<|||||>@patil-suraj you helped me find the solution. The only thing that worked for me is:
```
from transformers import pipeline, AutoTokenizer, AutoModelForSeq2SeqLM
conversation = '''Jeff: Can I train a 🤗 Transformers model on Amazon SageMaker?
Philipp: Sure you can use the new Hugging Face Deep Learning Container.
Jeff: ok.
Jeff: and how can I get started?
Jeff: where can I find documentation?
Philipp: ok, ok you can find everything here. https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face
'''
tokenizer = AutoTokenizer.from_pretrained("lidiya/bart-large-xsum-samsum")
model = AutoModelForSeq2SeqLM.from_pretrained("lidiya/bart-large-xsum-samsum")
summarizer = pipeline("summarization", model=model, tokenizer=tokenizer)
print(summarizer(conversation))
```
Thank you a lot for your patience. Hope many people see this. |
transformers | 13,771 | closed | Connection error, and we cannot find the requested files in the cached path | transformers ver: 4.8.2
torch: 1.9.1
The `pipeline` method does not offer a `cache_dir` option. Therefore, users can only cache the model in the default path.
```
from transformers import pipeline
nli_nlp = pipeline("zero-shot-classification", model="facebook/bart-large-mnli", device=0)
```
This happens randomly, with the same code on the same machine.
> ---------------------------------------------------------------------------
> ValueError Traceback (most recent call last)
> <ipython-input-1-e362258c0296> in <module>
> 1 from transformers import pipeline
> ----> 2 nli_nlp = pipeline("zero-shot-classification", model="facebook/bart-large-mnli", device=0)
>
> /usr/local/lib/python3.6/dist-packages/transformers/pipelines/__init__.py in pipeline(task, model, config, tokenizer, feature_extractor, framework, revision, use_fast, use_auth_token, model_kwargs, **kwargs)
> 441
> 442 tokenizer = AutoTokenizer.from_pretrained(
> --> 443 tokenizer_identifier, revision=revision, use_fast=use_fast, _from_pipeline=task, **tokenizer_kwargs
> 444 )
> 445
>
> /usr/local/lib/python3.6/dist-packages/transformers/models/auto/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
> 561 tokenizer_class_py, tokenizer_class_fast = TOKENIZER_MAPPING[type(config)]
> 562 if tokenizer_class_fast and (use_fast or tokenizer_class_py is None):
> --> 563 return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
> 564 else:
> 565 if tokenizer_class_py is not None:
>
> /usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
> 1678 local_files_only=local_files_only,
> 1679 use_auth_token=use_auth_token,
> -> 1680 user_agent=user_agent,
> 1681 )
> 1682
>
> /usr/local/lib/python3.6/dist-packages/transformers/file_utils.py in cached_path(url_or_filename, cache_dir, force_download, proxies, resume_download, user_agent, extract_compressed_file, force_extract, use_auth_token, local_files_only)
> 1335 user_agent=user_agent,
> 1336 use_auth_token=use_auth_token,
> -> 1337 local_files_only=local_files_only,
> 1338 )
> 1339 elif os.path.exists(url_or_filename):
>
> /usr/local/lib/python3.6/dist-packages/transformers/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, use_auth_token, local_files_only)
> 1551 else:
> 1552 raise ValueError(
> -> 1553 "Connection error, and we cannot find the requested files in the cached path."
> 1554 " Please try again or make sure your Internet connection is on."
> 1555 )
>
> ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.
For reference, the code below never breaks down.
```
from transformers import GPT2Tokenizer, GPT2LMHeadModel
gpt2 = GPT2LMHeadModel.from_pretrained('gpt2', cache_dir="./cache", local_files_only=True)
```
It seems that the first snippet needs to download models or configs from the internet.
Any way to fix this?
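One possible workaround (a sketch, assuming the goal is just to control where files are cached) is to download the pieces explicitly with `cache_dir` and pass the loaded objects to `pipeline`:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

# workaround sketch: cache tokenizer/model explicitly, then build the pipeline from the loaded objects
tok = AutoTokenizer.from_pretrained("facebook/bart-large-mnli", cache_dir="./cache")
mdl = AutoModelForSequenceClassification.from_pretrained("facebook/bart-large-mnli", cache_dir="./cache")
nli_nlp = pipeline("zero-shot-classification", model=mdl, tokenizer=tok, device=0)
```
After the first successful download, `local_files_only=True` can be added to both `from_pretrained` calls so no network request is made at all.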
| 09-28-2021 06:46:26 | 09-28-2021 06:46:26 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,770 | closed | Link is broken in Huggingface model zoo | https://huggingface.co/google/bert_uncased_L-2_H-128_A-2

The link has been broken for a long time, which is very inconvenient for users. It would be very helpful if you could fix this bug. | 09-28-2021 06:06:05 | 09-28-2021 06:06:05 | for @elishowk :)<|||||>Hi, thanks for reporting, it's a duplicate of https://github.com/huggingface/transformers/issues/13558
Do you mind if I close it and tag you on the other issue? |
transformers | 13,769 | closed | Size mismatch error while changing the T5Config class default parameters | @thomwolf
I want to finetune and reduce my T5 model size. So I tried to edit the **configuration_t5.py** file. This is the code I want to run for model creation
```
import torch
from transformers_master.src.transformers import T5ForConditionalGeneration
model_name = "t5-small"
torch_device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = T5ForConditionalGeneration.from_pretrained(model_name).to(torch_device)
```
The default initialization parameters of the **T5Config** class are as follows
```
self,
vocab_size=32128,
d_model=512,
d_kv=64,
d_ff=2048,
num_layers=6,
num_decoder_layers=None,
num_heads=8,
relative_attention_num_buckets=32,
dropout_rate=0.1,
layer_norm_epsilon=1e-6,
initializer_factor=1.0,
feed_forward_proj="relu",
is_encoder_decoder=True,
use_cache=True,
pad_token_id=0,
eos_token_id=1,
**kwargs
```
I changed the **d_model**, **d_kv**, **d_ff**, and **num_heads** parameters in the configuration_t5.py file as follows.
```
d_model=256,
d_kv=32,
d_ff=1024,
num_heads=6,
```
But after changing the above parameters, it shows the error given below:
```
RuntimeError: Error(s) in loading state_dict for T5ForConditionalGeneration:
size mismatch for shared.weight: copying a param with shape torch.Size([32128, 512]) from checkpoint, the shape in current model is torch.Size([32128, 256]).
size mismatch for encoder.block.0.layer.0.SelfAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for encoder.block.0.layer.0.SelfAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for encoder.block.0.layer.0.SelfAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for encoder.block.0.layer.0.SelfAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).
size mismatch for encoder.block.0.layer.0.SelfAttention.relative_attention_bias.weight: copying a param with shape torch.Size([32, 8]) from checkpoint, the shape in current model is torch.Size([32, 6]).
size mismatch for encoder.block.0.layer.0.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for encoder.block.0.layer.1.DenseReluDense.wi.weight: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for encoder.block.0.layer.1.DenseReluDense.wo.weight: copying a param with shape torch.Size([512, 2048]) from checkpoint, the shape in current model is torch.Size([256, 1024]).
size mismatch for encoder.block.0.layer.1.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for encoder.block.1.layer.0.SelfAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for encoder.block.1.layer.0.SelfAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for encoder.block.1.layer.0.SelfAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for encoder.block.1.layer.0.SelfAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).
size mismatch for encoder.block.1.layer.0.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for encoder.block.1.layer.1.DenseReluDense.wi.weight: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for encoder.block.1.layer.1.DenseReluDense.wo.weight: copying a param with shape torch.Size([512, 2048]) from checkpoint, the shape in current model is torch.Size([256, 1024]).
size mismatch for encoder.block.1.layer.1.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for encoder.block.2.layer.0.SelfAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for encoder.block.2.layer.0.SelfAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for encoder.block.2.layer.0.SelfAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for encoder.block.2.layer.0.SelfAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).
size mismatch for encoder.block.2.layer.0.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for encoder.block.2.layer.1.DenseReluDense.wi.weight: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for encoder.block.2.layer.1.DenseReluDense.wo.weight: copying a param with shape torch.Size([512, 2048]) from checkpoint, the shape in current model is torch.Size([256, 1024]).
size mismatch for encoder.block.2.layer.1.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for encoder.block.3.layer.0.SelfAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for encoder.block.3.layer.0.SelfAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for encoder.block.3.layer.0.SelfAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for encoder.block.3.layer.0.SelfAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).
size mismatch for encoder.block.3.layer.0.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for encoder.block.3.layer.1.DenseReluDense.wi.weight: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for encoder.block.3.layer.1.DenseReluDense.wo.weight: copying a param with shape torch.Size([512, 2048]) from checkpoint, the shape in current model is torch.Size([256, 1024]).
size mismatch for encoder.block.3.layer.1.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for encoder.block.4.layer.0.SelfAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for encoder.block.4.layer.0.SelfAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for encoder.block.4.layer.0.SelfAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for encoder.block.4.layer.0.SelfAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).
size mismatch for encoder.block.4.layer.0.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for encoder.block.4.layer.1.DenseReluDense.wi.weight: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for encoder.block.4.layer.1.DenseReluDense.wo.weight: copying a param with shape torch.Size([512, 2048]) from checkpoint, the shape in current model is torch.Size([256, 1024]).
size mismatch for encoder.block.4.layer.1.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for encoder.block.5.layer.0.SelfAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for encoder.block.5.layer.0.SelfAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for encoder.block.5.layer.0.SelfAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for encoder.block.5.layer.0.SelfAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).
size mismatch for encoder.block.5.layer.0.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for encoder.block.5.layer.1.DenseReluDense.wi.weight: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for encoder.block.5.layer.1.DenseReluDense.wo.weight: copying a param with shape torch.Size([512, 2048]) from checkpoint, the shape in current model is torch.Size([256, 1024]).
size mismatch for encoder.block.5.layer.1.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for encoder.final_layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decoder.block.0.layer.0.SelfAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.0.layer.0.SelfAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.0.layer.0.SelfAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.0.layer.0.SelfAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).
size mismatch for decoder.block.0.layer.0.SelfAttention.relative_attention_bias.weight: copying a param with shape torch.Size([32, 8]) from checkpoint, the shape in current model is torch.Size([32, 6]).
size mismatch for decoder.block.0.layer.0.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decoder.block.0.layer.1.EncDecAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.0.layer.1.EncDecAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.0.layer.1.EncDecAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.0.layer.1.EncDecAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).
size mismatch for decoder.block.0.layer.1.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decoder.block.0.layer.2.DenseReluDense.wi.weight: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for decoder.block.0.layer.2.DenseReluDense.wo.weight: copying a param with shape torch.Size([512, 2048]) from checkpoint, the shape in current model is torch.Size([256, 1024]).
size mismatch for decoder.block.0.layer.2.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decoder.block.1.layer.0.SelfAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.1.layer.0.SelfAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.1.layer.0.SelfAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.1.layer.0.SelfAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).
size mismatch for decoder.block.1.layer.0.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decoder.block.1.layer.1.EncDecAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.1.layer.1.EncDecAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.1.layer.1.EncDecAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.1.layer.1.EncDecAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).
size mismatch for decoder.block.1.layer.1.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decoder.block.1.layer.2.DenseReluDense.wi.weight: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for decoder.block.1.layer.2.DenseReluDense.wo.weight: copying a param with shape torch.Size([512, 2048]) from checkpoint, the shape in current model is torch.Size([256, 1024]).
size mismatch for decoder.block.1.layer.2.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decoder.block.2.layer.0.SelfAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.2.layer.0.SelfAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.2.layer.0.SelfAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.2.layer.0.SelfAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).
size mismatch for decoder.block.2.layer.0.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decoder.block.2.layer.1.EncDecAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.2.layer.1.EncDecAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.2.layer.1.EncDecAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.2.layer.1.EncDecAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).
size mismatch for decoder.block.2.layer.1.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decoder.block.2.layer.2.DenseReluDense.wi.weight: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for decoder.block.2.layer.2.DenseReluDense.wo.weight: copying a param with shape torch.Size([512, 2048]) from checkpoint, the shape in current model is torch.Size([256, 1024]).
size mismatch for decoder.block.2.layer.2.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decoder.block.3.layer.0.SelfAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.3.layer.0.SelfAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.3.layer.0.SelfAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.3.layer.0.SelfAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).
size mismatch for decoder.block.3.layer.0.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decoder.block.3.layer.1.EncDecAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.3.layer.1.EncDecAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.3.layer.1.EncDecAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.3.layer.1.EncDecAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).
size mismatch for decoder.block.3.layer.1.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decoder.block.3.layer.2.DenseReluDense.wi.weight: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for decoder.block.3.layer.2.DenseReluDense.wo.weight: copying a param with shape torch.Size([512, 2048]) from checkpoint, the shape in current model is torch.Size([256, 1024]).
size mismatch for decoder.block.3.layer.2.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decoder.block.4.layer.0.SelfAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.4.layer.0.SelfAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.4.layer.0.SelfAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.4.layer.0.SelfAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).
size mismatch for decoder.block.4.layer.0.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decoder.block.4.layer.1.EncDecAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.4.layer.1.EncDecAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.4.layer.1.EncDecAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.4.layer.1.EncDecAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).
size mismatch for decoder.block.4.layer.1.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decoder.block.4.layer.2.DenseReluDense.wi.weight: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for decoder.block.4.layer.2.DenseReluDense.wo.weight: copying a param with shape torch.Size([512, 2048]) from checkpoint, the shape in current model is torch.Size([256, 1024]).
size mismatch for decoder.block.4.layer.2.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decoder.block.5.layer.0.SelfAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.5.layer.0.SelfAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.5.layer.0.SelfAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.5.layer.0.SelfAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).
size mismatch for decoder.block.5.layer.0.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decoder.block.5.layer.1.EncDecAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.5.layer.1.EncDecAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.5.layer.1.EncDecAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.5.layer.1.EncDecAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).
size mismatch for decoder.block.5.layer.1.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decoder.block.5.layer.2.DenseReluDense.wi.weight: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for decoder.block.5.layer.2.DenseReluDense.wo.weight: copying a param with shape torch.Size([512, 2048]) from checkpoint, the shape in current model is torch.Size([256, 1024]).
size mismatch for decoder.block.5.layer.2.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decoder.final_layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
```
So where did I go wrong? How can I change the model configuration parameters **d_model**, **d_kv**, **d_ff**, and **num_heads**?
| 09-28-2021 05:16:14 | 09-28-2021 05:16:14 | It's advised to fine-tune T5-small on your dataset, keeping all parameters to have the same shape as during pre-training.
You _can_ change the sizes of the parameters of a pre-trained model, using the new `ignore_mismatched_sizes` argument:
```
from transformers import T5ForConditionalGeneration
model = T5ForConditionalGeneration.from_pretrained("t5-small", d_model=256, ignore_mismatched_sizes=True)
```
However, this will randomly initialize the parameters of all layers that rely on `d_model` when defining the shape. In other words, you would need to train these layers from scratch on your own data, which is not really feasible.
Hence, it's better to fine-tune T5-small on your dataset, then use techniques like ONNX, quantization, pruning to reduce the size/increase the inference speed. You can also take a look at the [FastT5](https://github.com/Ki6an/fastT5) project if you want fast inference.
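For reference, if a genuinely smaller architecture is needed, a minimal sketch would be to instantiate it from a custom config (randomly initialized, so it still has to be trained from scratch on a large corpus):
```python
from transformers import T5Config, T5ForConditionalGeneration

# sketch only: the reduced sizes are the ones proposed above; this model starts from random weights
config = T5Config(d_model=256, d_kv=32, d_ff=1024, num_heads=6)
small_t5 = T5ForConditionalGeneration(config)
```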
<|||||>@NielsRogge Thank you very much. ```ignore_mismatched_sizes=True``` worked for me. I will try the FastT5 project also.
I am planning to change the parameters in the T5 model like this
```
model =T5ForConditionalGeneration.from_pretrained('t5-small',dropout_rate=0.5,d_ff=1024,num_layers=5,d_model=256,d_kv=32,num_heads=8,ignore_mismatched_sizes=True).to(torch_device)
```
**1 - Are the above combinations allowed? Is there any rule or relationship between the above parameter values? Or can I use arbitrary values for these parameters?**
**2 - I would like to change the T5 model like above and fine-tune the changed model in my dataset. Shall I proceed in this way?** <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,768 | closed | [examples `run_glue.py`] missing requirements `scipy`, `sklearn` | `run_glue.py` requires `sklearn` (which installs `scipy`)
When running:
```
RUN_SLOW=1 pytest tests/deepspeed/test_model_zoo.py::TestDeepSpeedModelZoo::test_zero_to_fp32_zero2_clas_xlnet -sv
```
which internally calls `examples/pytorch/text-classification/run_glue.py`
After
```
pip install .[testing]
pip install deepspeed
pip install -r examples/pytorch/text-classification/requirements.txt
```
it fails:
```
metric = load_metric("glue", data_args.task_name)
[...]
ImportError:File "/tmp/actions-runner/gh-runner/lib/python3.6/site-packages/datasets/load.py", line 819, in load_metric
781
E metric = load_metric("glue", data_args.task_name)
[...]
To be able to use this metric, you need to install the following dependencies['scipy', 'sklearn'] using 'pip install scipy sklearn' for instance'
```
It looks like it's enough to require `sklearn`; `scipy` gets installed along with it.
```
pip install sklearn
Collecting sklearn
Downloading sklearn-0.0.tar.gz (1.1 kB)
Collecting scikit-learn
Downloading scikit_learn-1.0-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (25.8 MB)
|████████████████████████████████| 25.8 MB 814 kB/s
Requirement already satisfied: joblib>=0.11 in /mnt/nvme1/anaconda3/envs/ds-test/lib/python3.8/site-packages (from scikit-learn->sklearn) (1.0.1)
Collecting scipy>=1.1.0
Using cached scipy-1.7.1-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl (28.4 MB)
Collecting threadpoolctl>=2.0.0
Using cached threadpoolctl-2.2.0-py3-none-any.whl (12 kB)
Requirement already satisfied: numpy>=1.14.6 in /mnt/nvme1/anaconda3/envs/ds-test/lib/python3.8/site-packages (from scikit-learn->sklearn) (1.21.2)
Building wheels for collected packages: sklearn
Building wheel for sklearn (setup.py) ... done
Created wheel for sklearn: filename=sklearn-0.0-py2.py3-none-any.whl size=1309 sha256=e55ca522573bec0880cf084944cb1e3235d38bf66ea96f54b94fc1088a6e9dde
Stored in directory: /home/stas/.cache/pip/wheels/22/0b/40/fd3f795caaa1fb4c6cb738bc1f56100be1e57da95849bfc897
Successfully built sklearn
Installing collected packages: threadpoolctl, scipy, scikit-learn, sklearn
Successfully installed scikit-learn-1.0 scipy-1.7.1 sklearn-0.0 threadpoolctl-2.2.0
```
but maybe it's best to list both explicitly.
Alternatively we should move `sklearn` from `[dev]` to `[testing]` in `setup.py`
Context: DeepSpeed is integrating our deepspeed tests into their test suite:
https://github.com/microsoft/DeepSpeed/blob/e2de528632e122d7f359d9f5dedf4c0150cae072/.github/workflows/main.yml#L70-L98
I think it'd be too much to ask them to install `.[dev]` as it's too much unnecessary overhead.
The reason I'm considering changing this in `setup.py`'s `testing` is that any other example wanting to use the `glue` metric will need those deps.
@sgugger, @LysandreJik
| 09-28-2021 01:18:10 | 09-28-2021 01:18:10 | Oh, interesting! No wonder we have missing deps - it's hard to remember to maintain 3 different places I guess (`setup.py` is the 3rd place)
Note that the collected dependencies don't match the file you linked to, which means other local `requirements.txt` are out of sync:
```
$ find examples/pytorch -name requirements.txt -exec cat {} \; | sort | uniq
accelerate
datasets >= 1.12.0
datasets >= 1.8.0
nltk
protobuf
py7zr
rouge-score
sacrebleu >= 1.4.12
sentencepiece != 0.1.92
seqeval
torch >= 1.3
torch >= 1.3.0
torch >= 1.3accelerate
torch >= 1.5
torch>=1.9.0
torchaudio
torchvision>=0.10.0accelerate
```
They had to use:
```
find examples/pytorch -regextype posix-egrep -regex \
'.*(language-modeling|question-answering|summarization|text-classification|translation).*/requirements.txt' \
-exec pip install -r {} \;
```
Because:
```
examples/pytorch/image-classification/requirements.txt:torch>=1.9.0
```
as it forces a torch version that breaks their CI env, which is pinned to `torch==1.8.2`
So unfortunately I have to add a skip rule to the deepspeed test zoo for image-classification models, since they use torch 1.8.2. But it'll be tested on our CI just fine.
<|||||>> Note that the collected dependencies don't match the file you linked to, which means other local requirements.txt are out of sync:
I don't understand what you mean by this? For instance `rouge-score` is in the requirements file of summarization and in the test requirements file.<|||||>Let's try diff:
```
$ sort examples/pytorch/_tests_requirements.txt | uniq > together.txt
$ find examples/pytorch -name requirements.txt -exec cat {} \; | sort | uniq > separate.txt
$ diff -u separate.txt together.txt
--- separate.txt 2021-09-28 08:56:15.281309730 -0700
+++ together.txt 2021-09-28 08:56:03.829267889 -0700
@@ -1,17 +1,22 @@
-accelerate
-datasets >= 1.12.0
-datasets >= 1.8.0
+conllu
+datasets >= 1.1.3
+elasticsearch
+faiss-cpu
+fire
+git-python==1.0.3
+jiwer
+matplotlib
nltk
+pandas
protobuf
-py7zr
+psutil
+pytest
rouge-score
sacrebleu >= 1.4.12
+scikit-learn
sentencepiece != 0.1.92
seqeval
-torch >= 1.3
-torch >= 1.3.0
-torch >= 1.3accelerate
-torch >= 1.5
-torch>=1.9.0
-torchaudio
-torchvision>=0.10.0accelerate
+streamlit
+tensorboard
+tensorflow_datasets
+torchvision
```
Can you see how the 2 sets of dependencies are very different from each other?<|||||>OK, so discussing this with Sylvain on slack I have now a better understanding of why the different sets of dependencies don't quite match:
1. `.[testing]` is for dependencies needed to run the test suite itself - pytest, plugins, etc.
2. `examples/pytorch/_tests_requirements.txt` is what HF Transformers CI uses to install when running the CI, which means it covers all dependencies of all example scripts which are actually tested (which means just some features)
3. Individual `examples/pytorch/*/requirements.txt` in theory should contain the exhaustive list of dependencies for all of its features. |
transformers | 13,767 | closed | Fix warning for gradient_checkpointing | # What does this PR do?
The default value in #13734 was in the wrong direction :man_facepalming: This PR fixes that. | 09-27-2021 19:27:53 | 09-27-2021 19:27:53 | |
transformers | 13,766 | closed | Fix filtering in test fetcher utils | # What does this PR do?
The examples tests are never run because there is a bug in the way filtering is applied to the tests to fetch. This PR addresses that. | 09-27-2021 18:58:12 | 09-27-2021 18:58:12 | |
transformers | 13,765 | closed | Add an example of exporting BartModel + BeamSearch to ONNX module. | This PR aims to deliver an example showing how to export BartModel + beam search to an ONNX model and run inference on it with ONNX Runtime.
These new example files were added without updates to existing Hugging Face model code. The PR mainly focuses on:
1. Providing an entry file run_onnx_exporter.py to run this example.
2. In the generation_onnx.py file, adding a BARTGenerator model to wrap the native BartModel code. To support beam search, we ultimately need to convert the whole model to a TorchScript module. This new BARTGenerator model traces the encoder and decoder of BartModel to make them compatible with the torch.jit.script() method.
3. In the generation_onnx.py file, copying the beam search code from generation_utils.py and updating it so that the beam_search function can be converted to TorchScript by calling torch.jit.script (see the toy sketch below).
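As a rough illustration of why point 3 requires `torch.jit.script` rather than tracing, here is a toy loop (not the actual BART beam search): the decoding loop contains data-dependent control flow such as early stopping, which tracing cannot capture.
```python
import torch

# toy sketch only -- not the real beam search; it just shows the script-vs-trace constraint
@torch.jit.script
def toy_decode_loop(scores: torch.Tensor, max_len: int) -> torch.Tensor:
    out = torch.zeros([0], dtype=torch.long)
    for _ in range(max_len):
        next_id = torch.argmax(scores)
        out = torch.cat([out, next_id.unsqueeze(0)])
        if bool(next_id == 0):  # pretend token id 0 is EOS
            break
    return out

print(toy_decode_loop(torch.randn(5), 3))
```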
| 09-27-2021 17:21:57 | 09-27-2021 17:21:57 | Think we just need to run `make style` once and then we can merge - lgtm!<|||||>Thanks for adding this! Could you perhaps add a README to this example, to explain what the example is about and how to use it?<|||||>Hey @fatcat-z,
kindly pinging you for taking into account the comments on this PR (perhaps in a follow-up PR), if possible! <|||||>> Thanks for adding this! Could you perhaps add a README to this example, to explain what the example is about and how to use it?
I'm working on a new PR to address your and Gary's comments. |
transformers | 13,764 | closed | Add attention-mask support for ViTModel | # What does this PR do?
Transformers currently only support head-masks, however recent works
in vision language modeling have used the ViTModel with attention-masks.
Adding attention-masks to ViTSelfAttention and ViTModel allow
transformers modeling_vit to be used in implementing these models.
This commit adds attention-mask implementations from BertSelfAttention
as the forward param attention_mask. This change preserves the existing
behavior by default.
A specific example of how this change will be useful is being able to
implement ViLT https://arxiv.org/abs/2102.03334, using ViTModel as the main trunk.
Unit-tests seemed to pass, though there wasn't a unit-test for head_masks,
so I just added a check for attention_mask in the signature.
Thanks!
src/transformers/models/vit/modeling_vit.py @LysandreJik, @sgugger
| 09-27-2021 16:28:01 | 09-27-2021 16:28:01 | @NielsRogge reordered params!<|||||>Thanks. I think this should also be added for the Flax implementation of ViT, for consistency.
It could perhaps also be added for BEiT.<|||||>@NielsRogge added to flax_vit, beit, and flax_beit.<|||||>Thanks. Can you also edit the corresponding tests? In `test_modeling_vit.py` (and similarly, in test_modeling_deit.py, test_modeling_beit.py and the flax models), we can add the `attention_mask` to the `prepare_config_and_inputs` method, such that the `attention_mask` is also used in all tests.<|||||>@Ryan-Qiyu-Jiang the shape of the `attention_mask` provided to these models is equal to (batch_size, seq_len), with the sequence length equal to the number of patches + 1 for the [CLS] token (and + 2 for DeiT, as this model also adds a special distillation token to the sequence).
This should also be added to the docstrings of the respective models. <|||||>Hi @Ryan-Qiyu-Jiang, after working on another PR (#13874), I now wonder what the use is of an `attention_mask` being defined for vision models. Is this for the case where images in a batch are padded, and one doesn't want to take into account the `attention_scores` for padding pixels?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,763 | closed | Hidden states not available in S2T (and small typo in S2T documentation) | ## Environment info
- `transformers` version: 4.10.0
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.11
- PyTorch version (GPU?): 1.9.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten @LysandreJik
## Information
I am using S2T-SMALL-LIBRISPEECH-ASR (or any of the S2T models, for that matter) and I am interested in outputting the hidden states of the model (input: speech file, output: text); however, these are not returned in the forward (generate) pass.
## To reproduce
Steps to reproduce the behavior:
```
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
import soundfile as sf
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr", output_hidden_states=True, return_dict=True)
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
ds = load_dataset(
"patrickvonplaten/librispeech_asr_dummy",
"clean",
split="validation"
)
ds = ds.map(map_to_array)
input_features = processor(
ds["speech"][0],
sampling_rate=16_000,
return_tensors="pt"
).input_features # Batch size 1
generated_ids = model.generate(input_ids=input_features)
transcription = processor.batch_decode(generated_ids)
```
## Expected behavior
I am expecting the model to return output_hidden_states (decoder_hidden_states, encoder_hidden_states), but these are not available. Is there some way of extracting the hidden states?
Small note: processor = ```Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")``` is misspelled in the documentation (https://huggingface.co/facebook/s2t-small-librispeech-asr) (small p in processor).
Thanks so much!!
Greta
| 09-27-2021 15:26:50 | 09-27-2021 15:26:50 | Hi,
The `.generate()` method is meant to generate text, it will not return hidden states. If you want those, you need to call `forward()` with `output_hidden_states=True`:
`outputs = model(input_ids=inputs["input_values"], attention_mask=inputs["attention_mask"], output_hidden_states=True)`
<|||||>Hi Niels, thank you for your quick reply.
Continuing from the initial example, I can then call:
```
inputs = processor(
ds["speech"][0],
sampling_rate=16_000,
return_tensors="pt"
)
outputs = model.forward(input_features=inputs["input_features"], attention_mask=inputs["attention_mask"], output_hidden_states=True)
```
Which gives a ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds
Which are listed as optional? <|||||>Update: I just checked, the `.generate()` method also provides an argument `output_hidden_states` which can be set to `True`.<|||||>Great, thank you for this Niels! For reference, I also had to use the ```return_dict_in_generate``` flag to output the states.
```
generated_ids = model.generate(input_ids=input_features, output_hidden_states=True, return_dict_in_generate=True)
```
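The returned object then exposes the states directly; roughly (attribute names as in the generation output classes):
```python
outputs = model.generate(input_ids=input_features, output_hidden_states=True, return_dict_in_generate=True)
transcription = processor.batch_decode(outputs.sequences)
encoder_states = outputs.encoder_hidden_states  # tuple with one tensor per encoder layer (plus the embeddings)
decoder_states = outputs.decoder_hidden_states  # one tuple per generated token, each with per-layer tensors
```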
Again, thanks for all your awesome work! |
transformers | 13,762 | closed | ByT5 tokenizer gives indices of chars instead of bytes | ## Environment info
- `transformers` version: 4.10.2
- Platform: Linux-5.4.0-77-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.9.0.dev20210415+cu101 (True)
- Tensorflow version (GPU?): 2.6.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: NO
### Who can help
ByT5: @patrickvonplaten
Documentation: @sgugger
## Information
The ByT5 tokenizer performs incorrect tokenization.
In the ByT5 documentation, there are two examples of encoding text. One is `torch.tensor([list("Life is like a box of chocolates.".encode("utf-8"))]) + 3` and the other uses the dedicated tokenizer. However, the tokenizer approach gives the wrong indices.
Going deep into the ByT5 tokenizer implementation, indices are given by `ord(token)`. However, this returns [an integer representing the Unicode code point of that character](https://docs.python.org/3/library/functions.html#ord). But we need an integer representing a byte!
From the official python [documentation](https://docs.python.org/3/howto/unicode.html):
>A Unicode string is a sequence of **code points**, which are numbers from 0 through 0x10FFFF (1,114,111 decimal). This sequence of code points needs to be represented in memory as a set of code units, and code units are then mapped to 8-bit bytes.
>UTF-8 uses the following rules:
>
>* If the code point is < 128, it’s represented by the corresponding byte value.
>
>* If the code point is >= 128, it’s turned into a sequence of two, three, or four bytes, where each byte of the sequence is between 128 and 255.
So, in essence, everything works for code points up to 128, as the integers for code points and bytes are the same. But going further, for example with the accented letter *č*, the tokenizer misbehaves. Characters with even higher indices cannot be mapped into embeddings of size `Embedding(384, 1472)`.
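For example (standard Python behaviour):
```python
>>> ord("č")                   # Unicode code point, not a byte
269
>>> list("č".encode("utf-8"))  # the actual UTF-8 bytes
[196, 141]
```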
By the way, why is the embedding not of size 259 (256 byte values + 3 special tokens)? `T5ForConditionalGeneration.from_pretrained('google/byt5-small').get_input_embeddings()`
## To reproduce
```
from transformers import ByT5Tokenizer
tok = ByT5Tokenizer.from_pretrained("google/byt5-small")
accent_letter = "č"
list([i+3 for i in accent_letter.encode("utf-8")]), tok.encode(accent_letter, add_special_tokens=False)
```
output:
```
([199, 144], [272])
```
## Expected behavior
output:
```
([199, 144], [199, 144])
```
| 09-27-2021 12:06:52 | 09-27-2021 12:06:52 | @LukasStankevicius could you please check that you're using latest `master` or `4.11.0.dev0` :thinking:
I'm using the official tokenization implementation from ByT5 - which comes from `seqio`:
```python
!pip3 install seqio
from seqio import ByteVocabulary
tokenizer = ByteVocabulary()
tokenizer._encode("č")
# Outputs:
# [199, 144]
```
Hugging Face ByT5 implementation also returns:
```bash
from transformers import ByT5Tokenizer
hf_tokenizer = ByT5Tokenizer.from_pretrained("google/byt5-base")
hf_tokenizer.encode("č", add_special_tokens=False)
# Outputs
# [199, 144]
```<|||||>> @LukasStankevicius could you please check that you're using latest `master` or `4.11.0.dev0` 🤔
>
> I'm using the official tokenization implementation from ByT5 - which comes from `seqio`:
>
> ```python
> !pip3 install seqio
>
> from seqio import ByteVocabulary
> tokenizer = ByteVocabulary()
>
> tokenizer._encode("č")
>
> # Outputs:
> # [199, 144]
> ```
>
> Hugging Face ByT5 implementation also returns:
>
> ```shell
> from transformers import ByT5Tokenizer
>
> hf_tokenizer = ByT5Tokenizer.from_pretrained("google/byt5-base")
> hf_tokenizer.encode("č", add_special_tokens=False)
>
> # Outputs
> # [199, 144]
> ```
Yes, that works! I was at 4.10.2 down from 4.11.0.dev0 |
transformers | 13,761 | closed | Unable to match GPT-2 reported perplexity results on CBT, Wikitext-103, and 1BW datasets and doubt on LAMBADA accuracy | I am having the following discrepancies on trying to reproduce the results of GPT-2 small (117M parameters).
Using this huggingface [script for perplexity](https://huggingface.co/transformers/perplexity.html), I am able to match the reported perplexity on Wikitext-2 (reported 29.41, reproduced 29.94). But it also gives the same perplexity on Wikitext-103. Likewise, [EleutherAI's language model evaluation harness](https://github.com/EleutherAI/lm-evaluation-harness) gives a perplexity that matches Wikitext-103's reported perplexity (reported 37.50, reproduced 37.37) and gives the same result for Wikitext-2. Both methods give the same perplexity for Wikitext-2 and Wikitext-103 because the test sets of the two datasets are the same. The huggingface method uses the actual token count and the EleutherAI one uses the word count to calculate perplexity. Is the assumption correct that the different perplexities are simply due to the different ways they are computed?
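For concreteness, a minimal sketch of the two normalisations being compared; the numbers below are made up purely for illustration:
```python
import math

def perplexities(total_nll_nats, n_tokens, n_words):
    # Same summed negative log-likelihood, two different normalisations.
    return math.exp(total_nll_nats / n_tokens), math.exp(total_nll_nats / n_words)

# BPE usually yields more tokens than whitespace words, so the per-token
# perplexity comes out lower than the per-word perplexity.
print(perplexities(3_400_000.0, n_tokens=1_000_000, n_words=800_000))  # ~(29.96, 70.11)
```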
I am also unable to match Children's Book Test (CBT) dataset tasks CBT-CN and CBT-NE accuracies of 87.65 and 83.4. I am able to get up to 84.35 and 73.65 only.
I am getting an accuracy of 46.67 on the LAMBADA dataset when predicting the last token instead of the last word. Is this approach correct?
Getting perplexity of 45.88 using token count and 89.35 using word count on one billion words (1BW) dataset instead of the reported 75.20. | 09-27-2021 11:20:43 | 09-27-2021 11:20:43 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>hi @samzabdiel, did you ever find what was going on? <|||||>Not completely, but I was able to closely match accuracies reported on CBT datasets using the appropriate preprocessing. The results seem to be tightly coupled with the preprocessing done. |
transformers | 13,760 | closed | Fix loss computation in Trainer | # What does this PR do?
This PR fixes the loss computation in Trainer, as indicated in #13758 | 09-27-2021 10:47:59 | 09-27-2021 10:47:59 | |
transformers | 13,759 | closed | A version of Trainer that calculates the eval metrics (e.g. accuracy) on the training set as well | # 🚀 Feature request
The `Trainer` class only calculates accuracy (assuming that is one of the eval metrics) on the evaluation dataset. However, I would like these metrics to be computed on the training set as well.
Note that `TensorFlow` automatically calculates accuracy on the training set.
## Motivation
Training set accuracy is *crucial* for some applications/investigations in ML.
## Your contribution
Before subclassing `Trainer` myself, I want to make sure I am not reinventing the wheel.
| 09-27-2021 08:59:02 | 09-27-2021 08:59:02 | cc @sgugger <|||||>This feature has been asked for a lot, so we will implement it when we have some spare time. Note that in the meantime, you can run `trainer.evaluate(train_dataset)` to get those metrics whenever you like.<|||||>True, but normally the training set gets randomly shuffled. So for precision work one has to ensure that the batches are identical for training and evaluation, at least if the metric isn't additive. <|||||>The metric is computed on the whole predictions/labels at once in the `Trainer`. So the shuffling is irrelevant for this.<|||||>OK, I think I see: First of all the evaluation dataloader does not shuffle. Secondly, even though evaluation feeds the data to the model in batches, the results are first collated together before the metrics are computed on the whole predictions/labels at once.
Still, if instead the metrics were computed on each batch during training (this is the requested feature if I understood correctly), and a metric was not additive, then it might have a different result than when evaluated on the whole training set at once. <|||||>I am not suggesting to implement computing a metric on each batch during training as this is not something that works for NLP where most metrics (F1 score, BLEU, ROUGE etc.) are not additive. What we can add is a flag that makes evaluation on the training set on top of the validation set.<|||||>Ah, I see. But then you will not have the TensorFlow behaviour where you can monitor the training set accuracy in "real time" as the epoch progresses :)<|||||>Since that behavior is actually harmful for non-additive metrics, I would rather not have it :-) <|||||>The most obvious way to get the training set metrics for each epoch is to save the model at each epoch, and then do an evaluation afterwards.
On the other hand, it seemed more convenient to instead make a hook that calculates the training metrics at the end of each epoch:
```
from copy import deepcopy

from transformers import TrainerCallback


class EvaluateTrainingDatasetCallback(TrainerCallback):
    """
    A :class:`~transformers.TrainerCallback` that evaluates metrics on the training data at the end of each epoch.
    """
def __init__(self, trainer) -> None:
super().__init__()
# Abuse of callback mechanism, but not enough is passed to the hooks to avoid this
self._trainer = trainer
def on_epoch_end(self, args, state, control, **kwargs):
if control.should_evaluate:
# Needed to deal with side effects of accessing the trainer
control_copy = deepcopy(control)
self._trainer.evaluate(eval_dataset=self._trainer.train_dataset, metric_key_prefix="train")
return control_copy
```
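For completeness, it is attached roughly like this (the model/args/dataset names are just placeholders):
```python
trainer = Trainer(model=model, args=training_args, train_dataset=train_ds, eval_dataset=eval_ds)
trainer.add_callback(EvaluateTrainingDatasetCallback(trainer))
```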
However, I found that the callback was affecting the results after the first epoch (so different train and eval metrics than without the callback).
I thought that calling `Trainer.evaluate` did not affect the random generator. Any insight into why the callback is affecting the state?<|||||>I'm not sure, since you use `evaluate` there shouldn't be any difference.<|||||>Perhaps I wasn't clear: Without the callback, first training is done on all 3 epochs, and then *afterwards* the training dataset metrics are calculated by loading the respective checkpoints and calling `evaluate`.
However, the callback above calls `evaluate` immediately after training each epoch. On the first epoch both methods agree, but on *subsequent* epochs I am getting different results, on both the training and validation sets, as a result of using the callback. So it seems that the callback is having "side-effects", which I don't believe it should.
Here is an example without the callback

And with the callback, causing side-effects:

<|||||>What I mean is, I have no idea where this comes from since there is no randomness called during the evaluation.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,758 | closed | incorrect loss calculation | ## Environment info
- `transformers` version: 4.11.0.dev0
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyTorch version (GPU?): 1.9.0+cu102 (False)
- Tensorflow version (GPU?): 2.6.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: Yes
### Who can help
@sgugger
## Information
This bug report is based on reading the source code.
The training loss appears to be calculated incorrectly when the batch loss is either nan or infinity:
In trainer.py lines 1314--1318:
```
if args.logging_nan_inf_filter and (torch.isnan(tr_loss_step) or torch.isinf(tr_loss_step)):
# if loss is nan or inf simply add the average of previous logged losses
tr_loss += tr_loss / 1 + (self.state.global_step - self._globalstep_last_logged)
else:
tr_loss += tr_loss_step
```
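(Numerically, with e.g. `tr_loss = 10` and 4 steps since the last log, this adds `10 / 1 + 4 = 14` instead of the intended average `10 / (1 + 4) = 2`.)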
I believe a pair of parentheses was forgotten, and it should be:
```
if args.logging_nan_inf_filter and (torch.isnan(tr_loss_step) or torch.isinf(tr_loss_step)):
# if loss is nan or inf simply add the average of previous logged losses
tr_loss += tr_loss / (1 + (self.state.global_step - self._globalstep_last_logged))
else:
tr_loss += tr_loss_step
``` | 09-27-2021 08:54:24 | 09-27-2021 08:54:24 | Indeed, thanks a lot for reporting. I would normally suggest to let you write a PR to fix this but we have a release planned today and the fix will need to be inside, so I'll fix it and make you co-author of the commit.<|||||>Fixed by #13760 |
transformers | 13,757 | closed | Update Tatoeba conversion | # What does this PR do?
The Helsinki-NLP / Tatoeba NMT models have gone through various
architectural changes, and the old conversion code fails on them. This
commit is something of a rewrite to remedy this, in particular parsing
supplied yaml files rather than README.md files.
This was previously reviewed by @patrickvonplaten [here](https://github.com/huggingface/transformers/pull/12192), but we didn't get around to merging it because of an unclean commit history, so I made a fresh branch now to be extra sure.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten | 09-27-2021 08:40:14 | 09-27-2021 08:40:14 | Great, thanks, and thanks @patil-suraj, I'll try to come back to those at some point. |
transformers | 13,756 | closed | TypeError: 'LayerNorm' object does not support indexing | ## Environment info
- `transformers` version: 2.11.0
- Platform: Linux-4.15.0-142-generic-x86_64-with-Ubuntu-16.04-xenial
- Python version: 3.6.13
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. I made a pre-trained BERT checkpoint file using the official google-research code. (https://github.com/google-research/bert)
2. To use the pre-trained BERT ckpt file in PyTorch, I ran `transformers-cli convert`, which is introduced here (https://github.com/huggingface/transformers/blob/master/docs/source/converting_tensorflow_models.rst#id5)
3. But it threw the error "AttributeError: 'BertEmbeddings' object has no attribute 'bias'". Referring to this answer (https://stackoverflow.com/questions/63689270/bertembeddings-object-has-no-attribute-bias-while-converting-tf-checkpoint), I changed the variable names in the ckpt file (layer_normalization -> LayerNorm).
4. Finally, I ran `transformers-cli convert` again, but it threw the error "TypeError: 'LayerNorm' object does not support indexing".
## Expected behavior
I expected this command to work:
```
transformers-cli convert --model_type bert\
--tf_checkpoint=./bert_model.ckpt \
--config=./bert_config.json \
--pytorch_dump_output=./ckpt/pytorch_model.bin
```
But it threw this error:
```
Traceback (most recent call last):
File "/home/user/venv3.6/bin/transformers-cli", line 8, in <module>
sys.exit(main())
File "/home/user/venv3.6/lib/python3.6/site-packages/transformers/commands/transformers_cli.py", line 33, in main
service.run()
File "/home/user/venv3.6/lib/python3.6/site-packages/transformers/commands/convert.py", line 92, in run
convert_tf_checkpoint_to_pytorch(self._tf_checkpoint, self._config, self._pytorch_dump_output)
File "/home/user/venv3.6/lib/python3.6/site-packages/transformers/convert_bert_original_tf_checkpoint_to_pytorch.py", line 36, in convert_tf_checkpoint_to_pytorch
load_tf_weights_in_bert(model, config, tf_checkpoint_path)
File "/home/user/venv3.6/lib/python3.6/site-packages/transformers/modeling_bert.py", line 119, in load_tf_weights_in_bert
pointer = pointer[num]
TypeError: 'LayerNorm' object does not support indexing
```
Please help me.
Thank you! | 09-27-2021 08:25:37 | 09-27-2021 08:25:37 | The version of `transformers` you used seems very old, could you try updating it to the latest and doing conversion again?<|||||>@qqaatw Thank you for the response. I tried with transformers 4.11.0, but it threw the same error.<|||||>Thanks for reporting. I have tested the official [checkpoint](https://storage.googleapis.com/bert_models/2020_02_20/uncased_L-12_H-768_A-12.zip), which could be converted successfully. Could you provide your checkpoint file that can reproduce this error?<|||||>@qqaatw Thank you very much! I made a pre-trained model using Korean data.
But I cannot upload the model here; I think it is because of the large size of the zip file.
How can I provide you with my checkpoint file?
I can only upload bert_config.json, vocab.txt and the training data file (wiki_00.txt).
(Because I ran pre-training just as a test, I'm using very little data.)
[jsonAndVocab.zip](https://github.com/huggingface/transformers/files/7250104/jsonAndVocab.zip)
[wiki_00.txt](https://github.com/huggingface/transformers/files/7250116/wiki_00.txt)
<|||||>We can't inspect variable names without a checkpoint file; maybe use Google Drive to share it?
Before doing so, could you elaborate on how you changed the variable names in the ckpt file? I find that the latest official checkpoints are no longer using the name `layer_normalization`; they use `LayerNorm` by default. So could you also check whether your training code is as up to date as Google's?<|||||>@qqaatw
I uploaded my checkpoint file in this link: https://drive.google.com/drive/folders/1AGQ4ApxvIe8N3_eEGu60TobNlQTJ6LlF
Additionally, I uploaded the Python file which can change `layer_normalization` to `LayerNorm` in the checkpoint file.
I downloaded that file and ran the following:
```
python tensorflow_rename_variables_backup.py \
--checkpoint_dir=./model.ckpt \
--replace_from=layer_normalization \
--replace_to=LayerNorm
```
And I used the latest Google code for pre-training (I only changed the paths of some files), as shown below.
```
python create_pretraining_data.py \
--input_file=./sample_text.txt \
--output_file=/tmp/tf_examples.tfrecord \
--vocab_file=$BERT_BASE_DIR/vocab.txt \
--do_lower_case=True \
--max_seq_length=128 \
--max_predictions_per_seq=20 \
--masked_lm_prob=0.15 \
--random_seed=12345 \
--dupe_factor=5
```
```
python run_pretraining.py \
--input_file=/tmp/tf_examples.tfrecord \
--output_dir=/tmp/pretraining_output \
--do_train=True \
--do_eval=True \
--bert_config_file=$BERT_BASE_DIR/bert_config.json \
--init_checkpoint=$BERT_BASE_DIR/bert_model.ckpt \
--train_batch_size=32 \
--max_seq_length=128 \
--max_predictions_per_seq=20 \
--num_train_steps=20 \
--num_warmup_steps=10 \
--learning_rate=2e-5
```
These commands are from the README.md of https://github.com/google-research/bert
Thank you very much!<|||||>Hey @kyle-bong,
Sorry for the late reply.
Could you try this [script](https://gist.github.com/qqaatw/82b47c2b3da602fa1df604167bfcb9b0) with the following command to rename variables and then try doing conversion again?
```bash
python tensorflow_rename_variables_backup.py \
--checkpoint_dir=path/to/ckpt \
--replace_from="layer_normalization(_[0-9]+)*" \
--replace_to=LayerNorm
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,755 | closed | [Tests] Cast Hubert model tests to fp16 | **Problem**
Some Hubert integration tests are flaky, throwing GPU OOM on model loads (GPUs are 16GB). CI logs: https://github.com/huggingface/transformers/actions/runs/1274012618
```
FAILED tests/test_modeling_hubert.py::HubertModelIntegrationTest::test_inference_ctc_batched
FAILED tests/test_modeling_hubert.py::HubertModelIntegrationTest::test_inference_emotion_recognition
FAILED tests/test_modeling_hubert.py::HubertModelIntegrationTest::test_inference_intent_classification
FAILED tests/test_modeling_hubert.py::HubertModelIntegrationTest::test_inference_keyword_spotting
FAILED tests/test_modeling_hubert.py::HubertModelIntegrationTest::test_inference_speaker_identification
```
**Solution**
Cast models and reference tensors to fp16.
cc @patrickvonplaten | 09-26-2021 19:38:15 | 09-26-2021 19:38:15 | |
transformers | 13,754 | closed | This problem happened when I train the model | `The expanded size of the tensor (824) must match the existing size (512) at non-singleton dimension 1. Target sizes: [32, 824]. Tensor sizes: [1, 512]`
why this happend and what should I do? | 09-26-2021 19:24:13 | 09-26-2021 19:24:13 | We cannot identify the problem without additional context. Please provide the code that can reproduce this error.<|||||>Hi,
For training-related questions, please refer to the [forum](https://discuss.huggingface.co/). We like to keep Github issues for bugs/feature requests.
Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,753 | closed | [WIP] Add Flax FNet | # What does this PR do?
This PR adds FNet model in Flax. | 09-26-2021 19:02:07 | 09-26-2021 19:02:07 | Thanks for that 🤗 I could test it on TPU whenever the PR is ready, I think only the Auto Configuration stuff is missing 😅<|||||>@stefan-it It will take me some time to finish this. I will let you know when it is working!<|||||>I will be continuing this PR after #13752 is merged, just in case any changes are needed afterwards.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Unstale<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,752 | open | Update FNet Fourier Transform | # What does this PR do?
This PR changes the Fourier Transform according to the [latest updates](https://github.com/google-research/google-research/commit/4f7815fe3aaf71b811a0fb82ac8768a4e8ab0428#diff-4a2be83f4d2b702b206d3f64b2aeb209ac7a2d65fd28382916225463b984bc2c) to the FNet repository.
I will update the configurations of the checkpoints if the changes look fine.
@patrickvonplaten @LysandreJik @sgugger @patil-suraj | 09-26-2021 18:54:27 | 09-26-2021 18:54:27 | Thanks for the improved version of the FNet Fourier transform! In order to keep backwards compatibility, we should try to leave all existing arguments and simply add new ones - this would definitely be possible :-)<|||||>@sgugger @patrickvonplaten
Does this mean I cannot remove the custom `fftn` function?
Or just the configuration arguments?
If I keep the old config arguments, won't I have to keep the relevant code as well?<|||||>No, the old function can't be removed for backward compatibility, sadly. We can only add new arguments that trigger some new behavior.<|||||>@sgugger I will fix the test according to the new change.
Also, I wanted to ask if there is a possibility of ever deprecating the old version? Just out of curiosity.<|||||>For modeling code, it's tougher to go through the usual deprecation cycle as we might have models on the Hub using the old code. They would then stop working if we ever removed the old code, which is something we try very hard to avoid.<|||||>@sgugger Also, what do you think about [this comment](https://github.com/huggingface/transformers/issues/13684#issuecomment-924472556)?
In his notebook, he makes the `dft_mat_hidden` and `dft_mat_seq` into parameters so they get pushed to the GPU. This might help speed things up?
Also there is one minor change which allows `model.half()`.
Should I also add the changes here?
CC @patrickvonplaten <|||||>@gchhablani the `model.half` support would be cool to have, I was missing it in my last FNet GPU training 😅<|||||>Can you put the fix for `model.half()` in a new PR @gchhablani ? For the other part of making the attributes parameters, I'm not opposed to do it as long as it's backward compatible.<|||||>@sgugger I checked by doing `model.cuda()` on the current model. Refer to [this colab notebook](https://colab.research.google.com/drive/1fB5GOGuH3xClyTXJ_zagR_G-3Qd7QzMQ?usp=sharing) if needed. The buffers are also pushed to the `cuda:0` device. I don't think any change is needed regarding this. Wdyt?<|||||>@stefan-it Can you please explain the issue you face when you do `model.half()`? In the notebook linked in the previous comment, I have tried `model.half()` as well. Except the buffers, everything becomes `torch.float16`. Should the buffers also move to `torch.complex32` or `torch.complex16`?<|||||>If the buffers are properly pushed, then there is no need to change anything I believe.<|||||>Looks good to me as well! @gchhablani - do you think the new version yields a significant speed-up? In this case it might be worth adding a deprecation warning to the current config.<|||||>@patrickvonplaten With the current changes, there is almost negligible modification with `use_latest`. With default conditions (for GPU), we would be using the `torch.fft.fftn` method as before.
For TPU long sequences (>4096), there is a difference. Instead of using the custom fftn method, the torch fftn method is used. Not sure how much difference there is between the custom fftn (fft in a loop) and the torch method, speed-wise. Maybe I can `timeit` the different fft methods on a CPU for long sequences?
I just thought that we might need to update to the latest changes as on the original repository.<|||||>CC @ontocord<|||||>Ha! Great to see you all are adding some of this stuff... <|||||>Hi @gchhablani , some feedback:
```bash
Traceback (most recent call last):
File "run_mlm.py", line 552, in <module>
main()
File "run_mlm.py", line 501, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/mnt/turkish-fnet/transformers-fnet-refactoring/src/transformers/trainer.py", line 1312, in train
tr_loss_step = self.training_step(model, inputs)
File "/mnt/turkish-fnet/transformers-fnet-refactoring/src/transformers/trainer.py", line 1839, in training_step
loss = self.compute_loss(model, inputs)
File "/mnt/turkish-fnet/transformers-fnet-refactoring/src/transformers/trainer.py", line 1873, in compute_loss
outputs = model(**inputs)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1015, in _call_impl
return forward_call(*input, **kwargs)
File "/mnt/turkish-fnet/transformers-fnet-refactoring/src/transformers/models/fnet/modeling_fnet.py", line 792, in forward
outputs = self.fnet(
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1015, in _call_impl
return forward_call(*input, **kwargs)
File "/mnt/turkish-fnet/transformers-fnet-refactoring/src/transformers/models/fnet/modeling_fnet.py", line 636, in forward
encoder_outputs = self.encoder(
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1015, in _call_impl
return forward_call(*input, **kwargs)
File "/mnt/turkish-fnet/transformers-fnet-refactoring/src/transformers/models/fnet/modeling_fnet.py", line 334, in forward
layer_outputs = layer_module(hidden_states)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1015, in _call_impl
return forward_call(*input, **kwargs)
File "/mnt/turkish-fnet/transformers-fnet-refactoring/src/transformers/models/fnet/modeling_fnet.py", line 293, in forward
self_fourier_outputs = self.fourier(hidden_states)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1015, in _call_impl
return forward_call(*input, **kwargs)
File "/mnt/turkish-fnet/transformers-fnet-refactoring/src/transformers/models/fnet/modeling_fnet.py", line 246, in forward
self_outputs = self.self(hidden_states)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1015, in _call_impl
return forward_call(*input, **kwargs)
File "/mnt/turkish-fnet/transformers-fnet-refactoring/src/transformers/models/fnet/modeling_fnet.py", line 225, in forward
outputs = self.fourier_transform(hidden_states).real
RuntimeError: Unsupported dtype Half
```
Occurs when using the (PyTorch) language modeling example with `--fp16` option - even with the latest `update_fnet` branch in a `nvcr.io/nvidia/pytorch:21.04-py3` container, see [detailed library versions](https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/rel_21-04.html#rel_21-04).
<|||||>@ontocord Can you please explain why you think that the DFT matrices need to be made into parameters? If it is just to push the params onto the device, using buffers is doing that.
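For reference, a minimal toy sketch (not the actual FNet code) of what I mean by registering the DFT matrix as a buffer:
```python
import torch
from torch import nn

class FourierWithBuffer(nn.Module):
    def __init__(self, seq_len: int = 4):
        super().__init__()
        # n x n DFT matrix; a buffer is saved in the state_dict and moved by .to()/.cuda(),
        # but it is not a trainable nn.Parameter.
        self.register_buffer("dft_mat_seq", torch.fft.fft(torch.eye(seq_len)))

    def forward(self, hidden_states):
        # DFT along the sequence dimension via a matmul, keeping only the real part.
        return torch.matmul(self.dft_mat_seq, hidden_states.type(torch.complex64)).real

module = FourierWithBuffer()
if torch.cuda.is_available():
    module = module.cuda()
    print(module.dft_mat_seq.device)  # buffers follow the module to cuda:0
```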
@patrickvonplaten @sgugger Any idea on how can I fix the issue @stefan-it just mentioned?<|||||>They do not convert to cuda or half automatically unless they are in the same parameter list of a module. I made them no_grad Params.
> On Oct 1, 2021, at 2:26 PM, Gunjan Chhablani ***@***.***> wrote:
>
>
> @ontocord Can you please explain why you think that the DFT matrices need to made into parameters?
>
> @patrickvonplaten @sgugger Any idea on how can I fix the issue @stefan-it just mentioned?
>
> —
> You are receiving this because you were mentioned.
> Reply to this email directly, view it on GitHub, or unsubscribe.
> Triage notifications on the go with GitHub Mobile for iOS or Android.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Unstale |
transformers | 13,751 | closed | Scrolling through the docs has become very slow | The problem is the same as the title states.
While scrolling through the docs, say [Pipelines](https://huggingface.co/transformers/main_classes/pipelines.html), it takes a lot of time to scroll down, which I think was not the case before.
### Who can help
@sgugger
| 09-26-2021 16:38:08 | 09-26-2021 16:38:08 | Not sure what you mean, I can scroll through that page at any speed I want.<|||||>> Not sure what you mean, I can scroll through that page at any speed I want.
Okay.

A screengrab while scrolling through the website.
What might be causing this?<|||||>Maybe restarting chrome is necessary? I don't have it installed locally, but everything is fine on Edge.<|||||>I also have this problem on chrome. Restarted it and my computer but it didn't fix it.
<|||||>That's interesting - I use chrome and an array of chromium-based browsers but have never encountered this issue. Could this be linked to one of your extensions? Does it happen on older versions of the docs? Does it also happen on other docs, for example, our datasets docs, or the PyTorch docs which also use the same backend?<|||||>I've disabled my extensions and tried but still doesn't work. Sorry I am pretty new to programming so I don't know how to open up older versions of the documents. Do I go to a branch that is at an older release?
Also I did notice that when I used my trackpad on my laptop it works fine. But I have a mouse connected to my laptop and when I use the scroll on that, it causes the problem. Are you using a mouse or the trackpad?<|||||>I am using an external mouse in the video, not the trackpad.
So I'm not exactly sure why it works with a trackpad but not a mouse, but here is a video of what seems to be happening. When I scroll over on the left hand side with the table of contents info, it works fine. The left scrollbar is separate from the right scrollbar. But when I scroll where the actual information is, the left and right scrollbar scroll at the same time. But once the left scrollbar hits the end, the right scrollbar scrolls without it slowing down. I hope that makes sense from the video. I'm not sure how big of an issue this is but maybe having the 2 scroll bars work separately might help?
https://user-images.githubusercontent.com/70382249/135101581-158d869b-5cfe-42ff-b4fb-a5c48147c7a1.mp4
Edit: @LysandreJik Sorry if I am not supposed to ping, but was just wondering if this is a big enough issue to be fixed or it's only an issue for a very small amount of people <|||||>We're in the process of moving away from Sphinx in the next few weeks, so as it seems to affect only a small number of people *and* we'll have a new frontend in a couple of weeks, we'll focus our efforts on getting the new frontend out as early as possible. Hope that's fine!<|||||>I experienced it as well on Chrome 94. I realized if I turn off smooth scrolling the stuttering scroll issue is gone.

_Disabling Javascript also seems to fix the stuttering scroll issue_
Although another issue appears i.e scrolling the main content will scroll the sidebar as well.
And it seems another person experienced that as well https://discuss.huggingface.co/t/docs-sidebar-scrolls-while-scrolling-main-content/10387<|||||>Thank you for the heads up @redwizard100!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Is it still the case @ekdnam, @DeepP2667?<|||||>It works great now. Thank you!<|||||>AFAICT, we didn't change anything :smile: <|||||>Oh, I'm not sure but it works now<|||||>It might have been an issue with a chrome or Sphinx version, but glad to hear that it works now!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Closing this as stale but please give us feedback on the new doc frontend!
Announcement at https://twitter.com/huggingface/status/1466462283576533003 |
transformers | 13,750 | closed | wav2vec 2.0 loaded from a fairseq checkpoint isn't strictly equivalent | ## Environment info
- `transformers` version: 4.9.2
- Platform: Linux-4.18.0-147.51.2.el8_1.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyTorch version (GPU?): 1.8.1
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
## Who can help
@patrickvonplaten @anton-l
## Information
This is related to wav2vec 2.0 loaded with Wav2Vec2Model
The problem arises when using:
* [x] my own modified scripts: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. train a base model with fairseq (official base config) and convert the checkpoint with the official HF script.
2. Load the converted model with Wav2Vec2Model
3. Load the fairseq checkpoint with fairseq.checkpoint_utils.load_model_ensemble_and_task
4. Print a sum of all parameters and the output of identical input wav file.
## Expected behavior
In practice, we should see two identical sums, as the models should be exactly identical. This is not the case in my test. Then, if you pass the same wav file through the two models (HF and fairseq), you'll get different outputs as well.
What I did to compute the sum:
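Roughly, the check looks like this (the paths are placeholders, not my exact script):
```python
import fairseq
import torch
from transformers import Wav2Vec2Model

hf_model = Wav2Vec2Model.from_pretrained("path/to/converted_hf_checkpoint")
fs_models, _, _ = fairseq.checkpoint_utils.load_model_ensemble_and_task(["path/to/fairseq_checkpoint.pt"])
fs_model = fs_models[0]

def param_count_and_sum(model):
    params = list(model.parameters())
    return sum(p.numel() for p in params), sum(p.double().sum().item() for p in params)

print("HF:     ", param_count_and_sum(hf_model))
print("fairseq:", param_count_and_sum(fs_model))
```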

The output:

As you can see, we have the same number of parameters, but not the same values as they differ slightly.
Then, I let both models train on the exact same downstream task (everything runs within SpeechBrain). The downstream setups are **identical**. At the end, the fairseq model has a WER 2% lower, while we should see the exact same results.
| 09-26-2021 16:34:26 | 09-26-2021 16:34:26 | Hey @TParcollet,
Thanks for the issue. Could you try to make a fully reproducible code snippet? I imagine that one wouldn't have to train a whole new model, but could also just use one of the already pretrained wav2vec2 models from fairseq to see this difference, no?
Also it would be great if you could replace the screenshots by code snippets which would make it easier to reproduce the error for us. Thanks!<|||||>I'll do this as soon as I have 5 minutes of spare time :-P <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,749 | closed | Add a method for substring(tokens / ids) checking to tokenizers. | # 🚀 Feature request
Add a method to the tokenizer to check whether a sub-sequence of tokens/ids is included in evidence strings, or, furthermore, to get the starting and ending positions of that sub-sequence.
The following code is an example for it:
```
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
input_str = ["There is a cute dog.", "There is a cute cat."]
substr = "cute dog"
input_ids = tokenizer(input_str).input_ids
sub_ids = tokenizer.encode(substr, add_special_tokens=False)
print(input_ids, sub_ids) # [[101, 2045, 2003, 1037, 10140, 3899, 1012, 102], [101, 2045, 2003, 1037, 10140, 4937, 1012, 102]] [10140, 3899]
# Added method
is_sub_tokens = tokenizer.is_sub_tokens(input_ids, sub_ids)
print(is_sub_tokens) # [True, False]
```
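For reference, a minimal sketch of how such a check could be implemented outside the tokenizer (the helper name is hypothetical; it is a plain sublist search):
```python
from typing import List

def contains_sub_ids(input_ids: List[List[int]], sub_ids: List[int]) -> List[bool]:
    # True for each sequence that contains `sub_ids` as a contiguous sub-list.
    n = len(sub_ids)
    return [
        any(seq[i:i + n] == sub_ids for i in range(len(seq) - n + 1))
        for seq in input_ids
    ]

# contains_sub_ids(input_ids, sub_ids)  # -> [True, False] for the example above
```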
## Motivation
Some question answering models require a label indicating whether an answer is included in the evidence strings; by adding this method, we can conveniently acquire this label.
## Your contribution
If there are no other concerns and I have some time, I'm willing to open a PR for it.
Any thoughts are welcomed :) | 09-26-2021 12:35:24 | 09-26-2021 12:35:24 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,748 | closed | FNet model card | Hi guys,
I think that the current FNet model cards both for [fnet-base](https://huggingface.co/google/fnet-base) and [fnet-large](https://huggingface.co/google/fnet-large) are a bit misleading:
```text
This model is uncased: it does not make a difference between english and English.
```
After manually inspecting the original sentencepiece vocab from:
```
https://storage.googleapis.com/gresearch/f_net/vocab/c4_bpe_sentencepiece.model
```
and the version on model hub, it seems that the vocab is cased instead:

And the tokenizer itself also produces a cased output:

So I think the model card should be updated then :hugs:
| 09-26-2021 08:52:24 | 09-26-2021 08:52:24 | /cc @gchhablani<|||||>@stefan-it I made a mistake there. Sorry, I'll fix it.
EDIT: I have fixed it.<|||||>Many thanks for the fix 🤗<|||||>Thanks @stefan-it for pointing it out :) |
transformers | 13,747 | closed | I want to understand the source code of transformers. Where should I start? Is there a tutorial link? thank you very much! | I want to understand the source code of transformers. Where should I start? Is there a tutorial link? thank you very much! | 09-26-2021 08:27:24 | 09-26-2021 08:27:24 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,746 | closed | Fix type annotations for `distributed_concat()` | # What does this PR do?
Fix the type annotations for [`trainer_pt_utils.distributed_concat()`](https://github.com/huggingface/transformers/blob/91df45516c9c21283df21d564bc352067d8c5f62/src/transformers/trainer_pt_utils.py#L160).
The input of this function could be a `torch.Tensor` or `list` / `tuple`, so I guess the type annotations:
https://github.com/huggingface/transformers/blob/91df45516c9c21283df21d564bc352067d8c5f62/src/transformers/trainer_pt_utils.py#L160
should be:
```python
def distributed_concat(
tensor: Union[torch.Tensor, Any], num_total_examples: Optional[int] = None
) -> Union[torch.Tensor, Any]:
```
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
## Who can review?
@sgugger | 09-26-2021 08:00:24 | 09-26-2021 08:00:24 | Done. |
transformers | 13,745 | closed | Update requirements for speech example | # What does this PR do?
Seems like this is also necessary for the example tests to run. | 09-26-2021 02:36:37 | 09-26-2021 02:36:37 | Yes! Thank you |
transformers | 13,744 | closed | using translate notebook for my dataset I get this error : ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. | I'm trying to finetune the model in translate notebook for my data.. in trainer.train() part I receive this error:

I added 'padding=True' and 'truncation=True' to the tokenizer, but the error is still the same.

Any idea or suggestion to fix the problem would be appreciated.
## Environment info
- `transformers` version: translate notebook
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1.
2.
3.
## Expected behavior
| 09-26-2021 01:53:17 | 09-26-2021 01:53:17 | Hello, could you please provide a reproducible code example, library version, and everything required by the issue template? Thanks.<|||||>any update on this topic? |
transformers | 13,743 | closed | [Tests] Add decorator to FlaxBeit | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 09-25-2021 17:34:32 | 09-25-2021 17:34:32 | |
transformers | 13,742 | closed | Can't save model in saved_model format when fine-tuning BERT in TensorFlow 2 | ## Environment info
- `transformers` version: 4.10.3
- Platform: centos7
- Python version: 3.8
- PyTorch version (GPU?): -
- Tensorflow version (GPU?): 2.4.3
- Using GPU in script?: Error exists in CPU and GPU
- Using distributed or parallel set-up in script?: Error exists with or without Tensorflow MirroredStrategy
### Who can help
@LysandreJik @Rocketknight1
## Information
Model I am using (Bert, XLNet ...): roberta
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```python
import tensorflow as tf
from transformers import TFBertPreTrainedModel
from transformers.modeling_tf_utils import get_initializer
from transformers.models.bert.modeling_tf_bert import TFBertMainLayer

class TFBertForMultilabelClassification(TFBertPreTrainedModel):
def __init__(self, config, *inputs, **kwargs):
super(TFBertForMultilabelClassification, self).__init__(config, *inputs, **kwargs)
self.num_labels = config.num_labels
self.bert = TFBertMainLayer(config, name='bert')
self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob)
self.classifier = tf.keras.layers.Dense(config.num_labels,
kernel_initializer=get_initializer(config.initializer_range),
name='classifier',
                                                activation='sigmoid')  # sigmoid activation function
def call(self, inputs, **kwargs):
outputs = self.bert(inputs, **kwargs)
pooled_output = outputs[1]
pooled_output = self.dropout(pooled_output, training=kwargs.get('training', False))
logits = self.classifier(pooled_output)
outputs = (logits,) + outputs[2:] # add hidden states and attention if they are here
return outputs # logits, (hidden_states), (attentions)
model = TFBertForMultilabelClassification.from_pretrained("bert-base-uncased")
model.save("/tmp/model")
```
Error messages:
```
Some layers from the model checkpoint at bert-base-uncased were not used when initializing TFBertForMultilabelClassification: ['nsp___cls', 'mlm___cls']
- This IS expected if you are initializing TFBertForMultilabelClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing TFBertForMultilabelClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some layers of TFBertForMultilabelClassification were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['dropout_2326', 'classifier']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-222-0c8afc02744c> in <module>
20
21 model = TFBertForMultilabelClassification.from_pretrained("bert-base-uncased")
---> 22 model.save("test")
/usr/local/Caskroom/miniconda/base/envs/tms/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py in save(self, filepath, overwrite, include_optimizer, save_format, signatures, options, save_traces)
1994 """
1995 # pylint: enable=line-too-long
-> 1996 save.save_model(self, filepath, overwrite, include_optimizer, save_format,
1997 signatures, options, save_traces)
1998
/usr/local/Caskroom/miniconda/base/envs/tms/lib/python3.8/site-packages/tensorflow/python/keras/saving/save.py in save_model(model, filepath, overwrite, include_optimizer, save_format, signatures, options, save_traces)
154 model, filepath, overwrite, include_optimizer)
155 else:
--> 156 saved_model_save.save(model, filepath, overwrite, include_optimizer,
157 signatures, options, save_traces)
158
/usr/local/Caskroom/miniconda/base/envs/tms/lib/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/save.py in save(model, filepath, overwrite, include_optimizer, signatures, options, save_traces)
87 with distribution_strategy_context._get_default_replica_context(): # pylint: disable=protected-access
88 with utils.keras_option_scope(save_traces):
---> 89 save_lib.save(model, filepath, signatures, options)
90
91 if not include_optimizer:
/usr/local/Caskroom/miniconda/base/envs/tms/lib/python3.8/site-packages/tensorflow/python/saved_model/save.py in save(obj, export_dir, signatures, options)
1030 meta_graph_def = saved_model.meta_graphs.add()
1031
-> 1032 _, exported_graph, object_saver, asset_info = _build_meta_graph(
1033 obj, signatures, options, meta_graph_def)
1034 saved_model.saved_model_schema_version = constants.SAVED_MODEL_SCHEMA_VERSION
/usr/local/Caskroom/miniconda/base/envs/tms/lib/python3.8/site-packages/tensorflow/python/saved_model/save.py in _build_meta_graph(obj, signatures, options, meta_graph_def)
1196
1197 with save_context.save_context(options):
-> 1198 return _build_meta_graph_impl(obj, signatures, options, meta_graph_def)
/usr/local/Caskroom/miniconda/base/envs/tms/lib/python3.8/site-packages/tensorflow/python/saved_model/save.py in _build_meta_graph_impl(obj, signatures, options, meta_graph_def)
1145 # Note we run this twice since, while constructing the view the first time
1146 # there can be side effects of creating variables.
-> 1147 _ = _SaveableView(checkpoint_graph_view, options)
1148 saveable_view = _SaveableView(checkpoint_graph_view, options,
1149 wrapped_functions)
/usr/local/Caskroom/miniconda/base/envs/tms/lib/python3.8/site-packages/tensorflow/python/saved_model/save.py in __init__(self, checkpoint_view, options, wrapped_functions)
223 # variables on first run.
224 concrete_functions = (
--> 225 function._list_all_concrete_functions_for_serialization()) # pylint: disable=protected-access
226 else:
227 concrete_functions = [function]
/usr/local/Caskroom/miniconda/base/envs/tms/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in _list_all_concrete_functions_for_serialization(self)
1160 A list of instances of `ConcreteFunction`.
1161 """
-> 1162 concrete_functions = self._list_all_concrete_functions()
1163 seen_signatures = []
1164 for concrete_function in concrete_functions:
/usr/local/Caskroom/miniconda/base/envs/tms/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in _list_all_concrete_functions(self)
1142 """Returns all concrete functions."""
1143 if self.input_signature is not None:
-> 1144 self.get_concrete_function()
1145 concrete_functions = []
1146 # pylint: disable=protected-access
/usr/local/Caskroom/miniconda/base/envs/tms/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in get_concrete_function(self, *args, **kwargs)
1297 ValueError: if this object has not yet been called on concrete values.
1298 """
-> 1299 concrete = self._get_concrete_function_garbage_collected(*args, **kwargs)
1300 concrete._garbage_collector.release() # pylint: disable=protected-access
1301 return concrete
/usr/local/Caskroom/miniconda/base/envs/tms/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in _get_concrete_function_garbage_collected(self, *args, **kwargs)
1203 if self._stateful_fn is None:
1204 initializers = []
-> 1205 self._initialize(args, kwargs, add_initializers_to=initializers)
1206 self._initialize_uninitialized_variables(initializers)
1207
/usr/local/Caskroom/miniconda/base/envs/tms/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to)
723 self._graph_deleter = FunctionDeleter(self._lifted_initializer_graph)
724 self._concrete_stateful_fn = (
--> 725 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
726 *args, **kwds))
727
/usr/local/Caskroom/miniconda/base/envs/tms/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
2967 args, kwargs = None, None
2968 with self._lock:
-> 2969 graph_function, _ = self._maybe_define_function(args, kwargs)
2970 return graph_function
2971
/usr/local/Caskroom/miniconda/base/envs/tms/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)
3359
3360 self._function_cache.missed.add(call_context_key)
-> 3361 graph_function = self._create_graph_function(args, kwargs)
3362 self._function_cache.primary[cache_key] = graph_function
3363
/usr/local/Caskroom/miniconda/base/envs/tms/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
3194 arg_names = base_arg_names + missing_arg_names
3195 graph_function = ConcreteFunction(
-> 3196 func_graph_module.func_graph_from_py_func(
3197 self._name,
3198 self._python_function,
/usr/local/Caskroom/miniconda/base/envs/tms/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
988 _, original_func = tf_decorator.unwrap(python_func)
989
--> 990 func_outputs = python_func(*func_args, **func_kwargs)
991
992 # invariant: `func_outputs` contains only Tensors, CompositeTensors,
/usr/local/Caskroom/miniconda/base/envs/tms/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds)
632 xla_context.Exit()
633 else:
--> 634 out = weak_wrapped_fn().__wrapped__(*args, **kwds)
635 return out
636
/usr/local/Caskroom/miniconda/base/envs/tms/lib/python3.8/site-packages/tensorflow/python/eager/function.py in bound_method_wrapper(*args, **kwargs)
3885 # However, the replacer is still responsible for attaching self properly.
3886 # TODO(mdan): Is it possible to do it here instead?
-> 3887 return wrapped_fn(*args, **kwargs)
3888 weak_bound_method_wrapper = weakref.ref(bound_method_wrapper)
3889
/usr/local/Caskroom/miniconda/base/envs/tms/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
975 except Exception as e: # pylint:disable=broad-except
976 if hasattr(e, "ag_error_metadata"):
--> 977 raise e.ag_error_metadata.to_exception(e)
978 else:
979 raise
TypeError: in user code:
/usr/local/Caskroom/miniconda/base/envs/tms/lib/python3.8/site-packages/transformers/modeling_tf_utils.py:682 serving *
return self.serving_output(output)
TypeError: tf__serving_output() takes 1 positional argument but 2 were given
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
The model can be saved in SavedModel format without error.
<!-- A clear and concise description of what you would expect to happen. -->
| 09-25-2021 16:12:41 | 09-25-2021 16:12:41 | cc @Rocketknight1
The models from `transformers` should be saved using the `save_pretrained` method. For TensorFlow models, you can obtain the result as a `SavedModel` by using the `saved_model` keyword argument:
```
model.save_pretrained("/tmp/model", saved_model=True)
```<|||||>@LysandreJik
Thanks for your reply. I still have some questions, please take a look.
1. What is the difference between `model.save_pretrained(path, saved_model=True)` and `model.save(path)`? If I can save the model with the latter, should I switch to the former?
2. Besides, I notice that a warning occurs when saving the model. Can it be ignored safely, and what can I do to fix it?
```
WARNING:absl:Found untraced functions such as embeddings_layer_call_and_return_conditional_losses, embeddings_layer_call_fn, encoder_layer_call_and_return_conditional_losses, encoder_layer_call_fn, pooler_layer_call_and_return_conditional_losses while saving (showing 5 of 1055). These functions will not be directly callable after loading.
WARNING:absl:Found untraced functions such as embeddings_layer_call_and_return_conditional_losses, embeddings_layer_call_fn, encoder_layer_call_and_return_conditional_losses, encoder_layer_call_fn, pooler_layer_call_and_return_conditional_losses while saving (showing 5 of 1055). These functions will not be directly callable after loading.
```<|||||>Hi @SysuJayce yes, you can ignore that warning. That warning can also pop up when saving a large model using `model.save`, it's just telling you that the model has some methods that weren't saved/traced, which is normal. Don't worry about fixing it.
Also, in general we don't support `model.save` because our 'standard' way of saving/loading models is to use `save_pretrained` and then `from_pretrained` to load it again, like `model = TFBertForMultilabelClassification.from_pretrained("/tmp/model")`.
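For example, a minimal round trip might look like this (a sketch, assuming the custom `TFBertForMultilabelClassification` class from above is importable when reloading):
```python
# Save with the Transformers API, then reload with the same class.
model.save_pretrained("/tmp/model")
reloaded = TFBertForMultilabelClassification.from_pretrained("/tmp/model")
```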
The reasons we do this instead of using `SavedModel` are a bit long and confusing - the key issue is that `SavedModel` saves the model graph but not necessarily all the code, and so you won't necessarily have all the capabilities of the model when you reload the `SavedModel` file - this is basically what the warning is telling you. If you just want to call the model with new data, then `SavedModel` should work fine for you, but try passing the file path to the `from_pretrained` method if you want to load it perfectly and have it work just like it did originally.<|||||>> Hi @SysuJayce yes, you can ignore that warning. That warning can also pop up when saving a large model using `model.save`, it's just telling you that the model has some methods that weren't saved/traced, which is normal. Don't worry about fixing it.
>
> Also, in general we don't support `model.save` because our 'standard' way of saving/loading models is to use `save_pretrained` and then `from_pretrained` to load it again, like `model = TFBertForMultilabelClassification.from_pretrained("/tmp/model")`.
>
> The reasons we do this instead of using `SavedModel` are bit long and confusing - the key issue is that `SavedModel` saves the model graph but not necessarily all the code, and so you won't necessarily have all the capabilities of the model when you reload the `SavedModel` file - this is basically what the warning is telling you. If you just want to call the model with new data, then `SavedModel` should work fine for you, but try passing the file path to the `from_pretrained` method if you want to load it perfectly and have it work just like it did originally.
Hello @Rocketknight1 ,
In my situation, I'd like to train with Hugging Face Transformers and serve with TensorFlow Serving.
Therefore, I want to save and load the trained model with `model.save()`.
Now I know that the preferred way to save and load a trained model is `save_pretrained()` and `from_pretrained()`, but we could also try TensorFlow's native `model.save()` and `model.load()`.
Maybe I can **save the trained model with `model.save_pretrained(path, saved_model=True)` and serve it with TensorFlow Serving**? What's your advice?
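Something like this minimal sketch is what I have in mind (the `saved_model/1` sub-directory and the server flags are my assumptions, not verified):
```python
# Hypothetical export-for-serving step; the output path is a placeholder.
model.save_pretrained("export/my_model", saved_model=True)

# If the graph lands under export/my_model/saved_model/1, TF Serving should be
# able to load it with something like:
#   tensorflow_model_server --model_name=my_model \
#       --model_base_path=/abs/path/to/export/my_model/saved_model \
#       --rest_api_port=8501
```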
Thanks for your reply.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,741 | closed | Adding target language token in mBART model. | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.10.3
- Platform: Google colab
- Python version: 3.9
- PyTorch version (GPU?): 1.9.0+cu102
- Tensorflow version (GPU?):
- Using GPU in script?: No
- Using distributed or parallel set-up in script?:
### Who can help
@patrickvonplaten @patil-suraj @sgugger
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using : mBART
The problem arises when using:
* the official example scripts: (give details below)
```python
>>> from transformers import MBartTokenizer
>>> tokenizer = MBartTokenizer.from_pretrained('facebook/mbart-large-en-ro', src_lang="en_XX", tgt_lang="ro_RO")
>>> example_english_phrase = " UN Chief Says There Is No Military Solution in Syria"
>>> expected_translation_romanian = "Şeful ONU declară că nu există o soluţie militară în Siria"
>>> inputs = tokenizer(example_english_phrase, return_tensors="pt")
>>> with tokenizer.as_target_tokenizer():
...     labels = tokenizer(expected_translation_romanian, return_tensors="pt")
>>> inputs["labels"] = labels["input_ids"]
```
## To reproduce
Steps to reproduce the behavior:
1. The documentation says: "The tokenization method is `<tokens> <eos> <language code>` for source language documents, and `<language code> <tokens> <eos>` for target language documents." (https://huggingface.co/transformers/model_doc/mbart.html#transformers.MBartTokenizer.as_target_tokenizer)
2. Whereas the source code only adds a suffix when `as_target_tokenizer` is called to tokenize the target sentence, in the following format:
```python
@contextmanager
def as_target_tokenizer(self):
    """
    Temporarily sets the tokenizer for encoding the targets. Useful for tokenizer associated to
    sequence-to-sequence models that need a slightly different processing for the labels.
    """
    self.set_tgt_lang_special_tokens(self.tgt_lang)
    yield
    self.set_src_lang_special_tokens(self.src_lang)

def set_src_lang_special_tokens(self, src_lang) -> None:
    """Reset the special tokens to the source lang setting. No prefix and suffix=[eos, src_lang_code]."""
    self.cur_lang_code = self.lang_code_to_id[src_lang]
    self.prefix_tokens = []
    self.suffix_tokens = [self.eos_token_id, self.cur_lang_code]

def set_tgt_lang_special_tokens(self, lang: str) -> None:
    """Reset the special tokens to the target language setting. No prefix and suffix=[eos, tgt_lang_code]."""
    self.cur_lang_code = self.lang_code_to_id[lang]
    self.prefix_tokens = []
    self.suffix_tokens = [self.eos_token_id, self.cur_lang_code]
```
3. It should add the target language code as a prefix and the eos token as a suffix. (https://huggingface.co/transformers/_modules/transformers/models/mbart/tokenization_mbart.html#MBartTokenizer.as_target_tokenizer)
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
It should add the target language code as a prefix and the eos token as a suffix, as per the documentation; a quick way to inspect the current behavior is sketched below.
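A minimal sketch (the token orders in the comments restate the two behaviors described above and are not verified output):
```python
from transformers import MBartTokenizer

tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro", src_lang="en_XX", tgt_lang="ro_RO")
with tokenizer.as_target_tokenizer():
    labels = tokenizer("Şeful ONU declară că nu există o soluţie militară în Siria", return_tensors="pt")

# Where does the language code end up?
print(tokenizer.convert_ids_to_tokens(labels["input_ids"][0]))
# Current code:   [..., '</s>', 'ro_RO']   (language code appended as a suffix)
# Documentation:  ['ro_RO', ..., '</s>']   (language code expected as a prefix)
```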
<!-- A clear and concise description of what you would expect to happen. -->
| 09-25-2021 15:47:08 | 09-25-2021 15:47:08 | |
transformers | 13,740 | closed | Fix bug in DebertaForMaskedLM | # 🚀 Feature request
## Motivation
The authors of DeBERTa have released their pre-training code.
https://github.com/microsoft/DeBERTa/blob/771f5822798da4bef5147edfe2a4d0e82dd39bac/DeBERTa/deberta/bert.py#L269-L289
Could you please update the MLM head classifier in DebertaForMaskedLM?
## Your contribution
I refined the code according to the DeBERTa implementation, but the MLM loss of the example code only decreases from 11+ to 3.86, which is still large:
```
from transformers import RobertaTokenizer, RobertaForMaskedLM
import torch
tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
model = RobertaForMaskedLM.from_pretrained('roberta-base')
inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
outputs = model(**inputs, labels=labels)
loss = outputs.loss # about 3.86 after I refine the code
```
Here is my modification:
```
class DebertaLMPredictionHead(nn.Module):
    def __init__(self, config):
        super().__init__()
        # self.transform = DebertaPredictionHeadTransform(config)
        self.dense = nn.Linear(config.hidden_size, config.hidden_size)
        if isinstance(config.hidden_act, str):
            self.transform_act_fn = ACT2FN[config.hidden_act]
        else:
            self.transform_act_fn = config.hidden_act
        self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
        # The output weights are the same as the input embeddings, but there is
        # an output-only bias for each token.
        self.decoder = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
        self.bias = nn.Parameter(torch.zeros(config.vocab_size))
        # Need a link between the two variables so that the bias is correctly resized with `resize_token_embeddings`
        self.decoder.bias = self.bias

    def forward(self, hidden_states):
        hidden_states = self.dense(hidden_states)
        hidden_states = self.transform_act_fn(hidden_states)
        hidden_states = self.LayerNorm(hidden_states)
        hidden_states = self.decoder(hidden_states)
        return hidden_states

# ...

class DebertaForMaskedLM(DebertaPreTrainedModel):
    _keys_to_ignore_on_load_unexpected = [r"pooler"]
    _keys_to_ignore_on_load_missing = [r"position_ids", r"predictions.decoder.bias"]

    def __init__(self, config):
        super().__init__(config)
        self.deberta = DebertaModel(config)
        self.lm_predictions = DebertaOnlyMLMHead(config)
        self.init_weights()

    def get_output_embeddings(self):
        return self.lm_predictions.lm_head.decoder

    def set_output_embeddings(self, new_embeddings):
        self.lm_predictions.lm_head.decoder = new_embeddings
```
Best,
Deming | 09-25-2021 15:27:59 | 09-25-2021 15:27:59 | Pinging @BigBird01 for advice!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,739 | closed | Model inside docker gives different results | - `transformers` version: 4.9.2
- Platform: Windows-10-10.0.19042-SP0
- Python version: 3.8.11
- PyTorch version (GPU?): 1.9.0 (True)
- Tensorflow version (GPU?): 2.4.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
Models:
- Bert : @LysandreJik
I am using a BERT model I fine-tuned and everything runs fine; I do not get any errors. However, inside Docker it gives different scores (from the pipeline). For example, I fine-tuned a model, used the example scripts and a pipeline, and got a score of 0.94 on my local machine. However, when I run the same model inside Docker with exactly the same inputs, it gives a score of 0.75.
You can see a related issue here : https://stackoverflow.com/questions/66797173/issue-while-using-transformers-package-inside-the-docker-image
I am serving this model through Flask endpoints, and in the link they say it is something related to threading; however, I could not solve my problem. A minimal comparison harness is sketched below.
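For reference, a minimal comparison harness (a sketch; the example text is a placeholder, and `torch.set_num_threads(1)` is only there to rule out intra-op threading differences):
```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification, pipeline

torch.set_num_threads(1)  # rule out intra-op threading differences

tokenizer = BertTokenizer.from_pretrained("models/local_tokenizer")
model = BertForSequenceClassification.from_pretrained("models/model-v6").eval()
clf = pipeline("text-classification", model=model, tokenizer=tokenizer)

print(clf("example input text"))  # compare this output locally and inside Docker
```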
Scripts I use to load model :
```
tokenizer = BertTokenizer.from_pretrained("models/local_tokenizer")
model = BertForSequenceClassification.from_pretrained("models/model-v6")
logger.info("Model yüklendi")
print("Model Yüklendi")
productionPipeline = pipeline(
"text-classification", model=model.to("cpu"), tokenizer=tokenizer
)
```
Scripts used while training (slightly edited IMDB tutorial):
```
class SentimentDataset(torch.utils.data.Dataset):
    def __init__(self, dataframe, tokenizer, max_len):
        self.tokenizer = tokenizer
        self.data = dataframe
        self.text = dataframe.text
        self.targets = self.data.label
        self.max_len = max_len

    def __len__(self):
        return len(self.text)

    def __getitem__(self, index):
        text = str(self.text[index])
        text = " ".join(text.split())
        inputs = self.tokenizer.encode_plus(
            text,
            None,
            add_special_tokens=True,
            max_length=self.max_len,
            pad_to_max_length=True,
            return_token_type_ids=True,
            truncation=True
        )
        ids = inputs['input_ids']
        mask = inputs['attention_mask']
        token_type_ids = inputs["token_type_ids"]
        return {
            'input_ids': torch.tensor(ids, dtype=torch.long),
            'attention_mask': torch.tensor(mask, dtype=torch.long),
            'token_type_ids': torch.tensor(token_type_ids, dtype=torch.long),
            'labels': torch.tensor(self.targets[index], dtype=torch.float)
        }

train_dataset = SentimentDataset(train_df, tokenizer, 50)
test_dataset = SentimentDataset(test_df, tokenizer, 50)

from transformers import AdamW
from torch.utils.data import DataLoader
from tqdm import tqdm

model.train()
optim = AdamW(model.parameters(), lr=5e-6)

for epoch in range(0, 2):
    step = epoch + 1
    train_loss = 0
    print("Epoch starts")
    for batch in tqdm(train_loader):
        optim.zero_grad()
        input_ids = batch['input_ids'].to(device)
        attention_mask = batch['attention_mask'].to(device)
        targets = batch['labels'].to(device)
        outputs = model(input_ids, attention_mask=attention_mask, labels=targets)
        loss = outputs[0]
        loss.backward()
        optim.step()
        train_loss += loss.item()
```
| 09-25-2021 11:08:50 | 09-25-2021 11:08:50 | I solved my problem; for anyone who experiences this:
My local transformers version was 4.9.2, while inside Docker I was installing the latest version, which was 4.10.3.
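A quick way to confirm the mismatch is to print the versions in both environments (a minimal sketch; pin the image with `pip install transformers==4.9.2` if they differ):
```python
# Run this both locally and inside the container; the outputs should match.
import torch
import transformers

print("transformers", transformers.__version__)
print("torch", torch.__version__)
```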
Installing 4.9.2 inside my Docker image resolved the issue. However, I think a warning could be added that version mismatches can lead to different model results. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,738 | closed | Update test dependence for torch examples | # What does this PR do?
Updates the `.circle` config file to install the speech dependency for the example test run. | 09-25-2021 02:17:27 | 09-25-2021 02:17:27 | Thanks! |
transformers | 13,737 | closed | Add failing test for ProphetNet batching | Following a suggestion by @patrickvonplaten, I created a failing test for this issue: https://github.com/huggingface/transformers/issues/13612. It's a little over my head, but I think this is what he meant.
I dunno if putting it as a `@slow` test under integration tests was the right choice. But, it seems to work. I sanity checked that this same test passes with T5 and BART (but didn't add the test for them). | 09-24-2021 22:00:00 | 09-24-2021 22:00:00 | Hey @deklanw - thanks for the test! It currently fails no? <|||||>@patrickvonplaten yep, it fails
<|||||>Ok, I think we should try to fix it in this PR then as well :-) I'm a bit under-water at the moment, but I can try to fix it in like 2 weeks<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,736 | closed | Obfuscated text classification error when using CANINE Transformers | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.8.2
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.3
- PyTorch version (GPU?): 1.9.0 (False)
- Tensorflow version (GPU?): 2.2.0-rc3 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@NielsRogge @patrickvonplaten
Models:
- CANINE Transformer
## Information
Model I am using is CANINE
I have an obfuscated document collection of around 30,000 sentences, each with a label (11 labels in total) - a multi-class classification problem.
(The data has been obfuscated; however, the patterns in it are preserved.)
A single record looks like this:
`satwamuluhqgulamlrmvezuhqvkrpmletwulcitwskuhlemvtwamuluhiwiwenuhlrvimvqvkruhulenamuluhqgqvtwvimviwuhtwamuluhulqvkrenamcitwuhvipmpmqvuhskiwkrpmdfuhlrvimvskvikrpmqvuhskmvgzenleuhqvmvamuluhulenamuluhqvletwtwvipmpmgzleenamuhtwamuluhtwletwdfuhiwkrxeleentwxeuhpmqvuhtwiwmvamdfuhpkeztwamuluhvimvuhqvtwmkpmpmlelruhgztwtwskuhtwlrkrpmlruhpmuluhqvenuhtwyplepmxeuhenuhamypkrqvuhamulmvdfuhqvskentwamletwlrlrpmiwuhtwamul `
So I decided to try CANINE, since it works on the character-encoding principle. But I am facing some issues; I have attached the code and the exception.
```
with open('xtrain_obfuscated.txt') as f:
    x = f.read().splitlines()
with open('ytrain.txt') as f:
    y = f.read().splitlines()

import torch
from transformers import CanineConfig, CanineForSequenceClassification, CanineForMultipleChoice, CanineForTokenClassification
from sklearn.model_selection import train_test_split

x_train, x_val, y_train, y_val = train_test_split(x, y, test_size=0.2)

from transformers import CanineTokenizer, CanineModel
from transformers import Trainer, TrainingArguments, CanineForMultipleChoice

tokenizer = CanineTokenizer(model_max_length=512)
tokens_train = tokenizer(x_train, padding='longest', return_tensors='pt')
tokens_val = tokenizer(x_val, padding='longest', return_tensors='pt')

class NovelClassificationDataset(torch.utils.data.Dataset):
    def __init__(self, encodings, labels):
        self.encodings = encodings
        self.labels = labels

    def __getitem__(self, idx):
        item = {key: val[idx] for key, val in self.encodings.items()}
        item['labels'] = torch.tensor(int(self.labels[idx]))
        return item

    def __len__(self):
        # print(len(self.labels))
        return len(self.labels)

train_dataset = NovelClassificationDataset(tokens_train, y_train)
val_dataset = NovelClassificationDataset(tokens_val, y_val)

model = CanineForSequenceClassification.from_pretrained("google/canine-s", num_labels=12, problem_type="multi_label_classification")

training_args = TrainingArguments(
    output_dir='./results',          # output directory
    num_train_epochs=10,             # total number of training epochs
    per_device_train_batch_size=13,  # batch size per device during training
    per_device_eval_batch_size=32,   # batch size for evaluation
    warmup_steps=500,                # number of warmup steps for learning rate scheduler
    weight_decay=0.01,               # strength of weight decay
    logging_dir='./logs',            # directory for storing logs
    logging_steps=10,
)

trainer = Trainer(
    model=model,                  # the instantiated 🤗 Transformers model to be trained
    args=training_args,           # training arguments, defined above
    train_dataset=train_dataset,  # training dataset
    eval_dataset=val_dataset      # evaluation dataset
)

trainer.train()
```
**Exception is**
```
~/opt/anaconda3/envs/task/lib/python3.8/site-packages/torch/nn/functional.py in binary_cross_entropy_with_logits(input, target, weight, size_average, reduce, reduction, pos_weight)
2578
2579 if not (target.size() == input.size()):
-> 2580 raise ValueError("Target size ({}) must be the same as input size ({})".format(target.size(), input.size()))
2581
2582 return torch.binary_cross_entropy_with_logits(input, target, weight, pos_weight, reduction_enum)
ValueError: Target size (torch.Size([13])) must be the same as input size (torch.Size([13, 12]))
```
## Expected behavior
The expected output is a multi-class classification model using CANINE, from which I should be able to get predictions on the (obfuscated) test data set.
Please advise.
| 09-24-2021 21:29:03 | 09-24-2021 21:29:03 | You're initializing `CanineModel`, which doesn't accept a `labels` argument.
You probably want to use `CanineForSequenceClassification`, which is CANINE with a sequence classification head on top.
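For instance (a minimal sketch; `num_labels=12` just mirrors the snippet above):
```python
from transformers import CanineForSequenceClassification, CanineTokenizer

# CANINE with a sequence classification head on top.
tokenizer = CanineTokenizer.from_pretrained("google/canine-s")
model = CanineForSequenceClassification.from_pretrained("google/canine-s", num_labels=12)
```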
Also, please use the [forum](https://discuss.huggingface.co/) for training-related questions. |
transformers | 13,735 | closed | [megatron gpt checkpoint conversion] causal mask requires pos_embed dimension | this is a follow up to https://github.com/huggingface/transformers/pull/13508 - where I tried to fix the wrong side of the bug :(, this one hopefully is the correct one.
The causal mask uses the positional-embedding dimension (`seqlen`), not `n_emb` (hidden_size) as it was originally coded; the original code only happened to work because the original meg-gpt2 model had the same `n_emb` and `seqlen` size.
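Illustratively (a sketch of the intended mask shape; the 1024 sequence length and the float dtype are assumptions, not taken from the conversion script):
```python
import torch

n_positions = 1024  # positional-embedding size of the checkpoint, not the hidden size
causal_mask = torch.tril(torch.ones((n_positions, n_positions)))  # lower-triangular mask sized by seqlen
causal_mask = causal_mask.view(1, 1, n_positions, n_positions)
```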
I re-tested that the original `megatron_lm_345m/release/mp_rank_00/model_optim_rng.pt` still produces the same converted output.
@sgugger, @LysandreJik | 09-24-2021 21:28:29 | 09-24-2021 21:28:29 |