repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 8,002 | closed | Create README.md | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| 10-23-2020 10:11:11 | 10-23-2020 10:11:11 | |
transformers | 8,001 | closed | do_lower_case not saved/loaded correctly for Tokenizers | ## Environment info
- `transformers` version: 3.4.0
- Platform: Linux-5.4.0-52-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.6
- PyTorch version (GPU?): 1.5.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@mfuntowicz
## Information
The `do_lower_case` property of BertTokenizer is not correctly restored after saving / loading.
## To reproduce
```python
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
print(tokenizer.do_lower_case)
tokenizer.save_pretrained("debug_tokenizer")
tokenizer_loaded = BertTokenizer.from_pretrained("debug_tokenizer")
print(tokenizer_loaded.do_lower_case)
```
returns
```
False
True
```
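Until the fix lands, a possible workaround (a hedged sketch; the flag can be passed explicitly at load time so it is not silently reset):
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
tokenizer.save_pretrained("debug_tokenizer")

# Explicit kwargs override whatever ends up in the saved tokenizer config.
tokenizer_loaded = BertTokenizer.from_pretrained("debug_tokenizer", do_lower_case=False)
print(tokenizer_loaded.do_lower_case)  # False
```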
## Expected behavior
Same object attributes after saving / loading | 10-23-2020 10:03:15 | 10-23-2020 10:03:15 | Oh! I'll take a look, thanks for the report @tholor <|||||>Thanks for the fast fix @thomwolf ! Very much appreciated! |
transformers | 8,000 | closed | How to load tokenizer for models without vocab.txt? | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
TensorFlow enthusiasts can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
I want to use the xlm-roberta-large model, but https://huggingface.co/ only provides a file named "xlm-roberta-large-tokenizer.json" and no "vocab.txt", so how can I use the `XLMRobertaTokenizer` class to load the file "xlm-roberta-large-tokenizer.json"?
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on the forum/Stack Overflow**: | 10-23-2020 08:44:59 | 10-23-2020 08:44:59 | Hello! In order to use the xlm-roberta-large model, why don't you use the `from_pretrained` method?
```py
from transformers import XLMRobertaModel
model = XLMRobertaModel.from_pretrained("xlm-roberta-large")
```<|||||>> Hello! In order to use the xlm-roberta-large model, why don't you use the `from_pretrained` method?
>
> ```python
> from transformers import XLMRobertaModel
>
> model = XLMRobertaModel.from_pretrained("xlm-roberta-large")
> ```
Thanks for your answer!
I tried doing it like you said, but I couldn't reach the URL to download the model, so I tried to download the model, config, and tokenizer locally and load them.
So, the question I was asking is that I could not find a vocab.txt to generate the tokenizer.<|||||>You don't need the URL to download the model, you can just use the identifier as it's shown in my message. Or is there a reason why you want to have the URLs? <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
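For reference, a minimal sketch of loading this tokenizer from a local folder without any `vocab.txt` (XLM-R ships a SentencePiece model file instead; the local path is illustrative):
```python
from transformers import XLMRobertaTokenizer

tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-large")
tokenizer.save_pretrained("./xlm-roberta-large-local")  # writes the sentencepiece model and tokenizer configs

tokenizer = XLMRobertaTokenizer.from_pretrained("./xlm-roberta-large-local")
print(tokenizer.tokenize("Hello world"))
```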
|
transformers | 7,999 | closed | Add model cards for DynaBERT | # What does this PR do?
Add model cards for DynaBERT. | 10-23-2020 06:37:01 | 10-23-2020 06:37:01 | |
transformers | 7,998 | closed | update version for scipy | # What does this PR do?
Updating the version requirement for scipy in `examples/distillation/requirements.txt`.
Fixes https://github.com/huggingface/transformers/issues/7967
| 10-23-2020 05:06:40 | 10-23-2020 05:06:40 | Let's wait for Victor to answer on that issue before merging.<|||||>looks good! |
transformers | 7,997 | closed | Create README.md | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
Add model card about DynaBERT in HuggingFace.io.
## Before submitting
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| 10-23-2020 03:23:46 | 10-23-2020 03:23:46 | |
transformers | 7,996 | closed | Added model cards for Tagalog ELECTRA models | # What does this PR do?
Added model cards for eight ELECTRA Tagalog models (four generators and four discriminators). | 10-23-2020 02:22:08 | 10-23-2020 02:22:08 | @jcblaisecruz02 looks like your config.json files do not contain a `architectures` field nor a `model_type`, so your models might be incorrectly categorized – any way you could add those? Thank you!<|||||>> @jcblaisecruz02 looks like your config.json files do not contain a `architectures` field nor a `model_type`, so your models might be incorrectly categorized – any way you could add those? Thank you!
Ah gotcha! I'll add those. Thanks!<|||||>(easiest way should be to just call `.save_pretrained` again) |
transformers | 7,995 | closed | [CI] generate separate report files as artifacts | This PR solves https://github.com/huggingface/transformers/issues/7887 to produce easier to use reports on CIs.
* [x] adds an optional `--make_reports=id` to `pytest`- e.g. `--make_reports=examples`. It then uses that id to generate `report_{id}_{reports}.txt` - this was needed since some jobs like on scheduled jobs have multiple pytest runs, so a unique string is required. W/o this new flag everything remains as is - i.e. no reports get generated
* [x] the generated reports are all saved under `reports` to simplify the upload and are at the moment (assuming `id` was `tests`):
- report_tests_durations.txt
- report_tests_errors.txt
- report_tests_failures.txt
- report_tests_passes.txt
- report_tests_short_summary.txt
- report_tests_stats.txt
- report_tests_warnings.txt
We no longer need any `pytests` flags to generate these - e.g. no need for `-rA` or `-durations=` - they are all done internally.
The code itself is a bit of a hack, that borrows a lot of `pytest` internals - but that's a start - I will see if I can find a public API to accomplish the same later if this new functionality catches on. Actually, it's pretty safe since it calls the same report functions `pytest` uses, so it's unlikely to break.
* [x] added the reporting to:
- CirlcleCI `run_examples_torch` and `run_tests_torch` jobs
- GitHub workflow `run_all_tests_torch_and_tf_gpu` job. (this one generates 3 (!) groups of reports)
Once these are tested on `master` and the results are satisfactory, I will add this new functionality to the rest of the jobs.
**This is what you want to review**:
- the latest [report](https://app.circleci.com/pipelines/github/huggingface/transformers/14586/workflows/21a114bc-c65b-4b62-b747-a0056923479a/jobs/106843)
- the corresponding [artifacts](https://app.circleci.com/pipelines/github/huggingface/transformers/14586/workflows/21a114bc-c65b-4b62-b747-a0056923479a/jobs/106843/artifacts)
Fixes: #7887
@sshleifer, @sgugger, @LysandreJik | 10-23-2020 02:16:49 | 10-23-2020 02:16:49 | This looks good but I only see `test_output.txt` in the artifacts, for some reason?<|||||>you must be looking at the wrong job? As I said I only did it for one job at the moment - this one:
https://app.circleci.com/pipelines/github/huggingface/transformers/14359/workflows/b38c8e8c-2867-4366-a907-a202da9bc9ee/jobs/104410/steps
```
~/transformers/output.txt
~/transformers/tests_durations.txt
~/transformers/tests_failures.txt
~/transformers/tests_passes.txt
~/transformers/tests_short_summary.txt
~/transformers/tests_stats.txt
```<|||||>Ah my bad, I miscklicked!<|||||>@sshleifer or @sgugger - I configured github artifacts in `self-push.yaml` of this PR - would one of you be able to start that job for me as I have no perms to do so. Thank you very much!
I hope I did it right, I added:
```
- name: test suite reports artifacts
  uses: actions/upload-artifact@v2
  with:
    name: tests_results
    path: tests_*
```
I'm not sure whether this should be `path: ~/transformers/tests_*` like it was on circle_ci config - it should pick it up from the cwd.
I currently added it only to `run_tests_torch_and_tf_gpu` - so in theory it should upload the reports to the workflow results.
For reference, the information on this setup is at this 2 pages:
* https://docs.github.com/en/free-pro-team@latest/actions/guides/storing-workflow-data-as-artifacts
* https://github.com/actions/upload-artifact#usage
<|||||>I can't figure out how to run a github actions workflow against a branch. It looks good enough that we I'm happy to just acknowledge that this could break on merge, in which case we'd send a follow up PR.<|||||>Thank you for trying, @sshleifer
Ah, it's not finished yet, merge-wise - it's very rough on edges.
* I just want to figure out how to make the results available on github actions in parallel with
* waiting on you guys to hear what reports do you want and which not before finalizing this.
Can you suggest a different way of testing this? This was your recommendation in first place - to test it on a PR branch - except I can't test it since I don't have permissions to access the runners. Surely there must be a way of testing this?
Alternatively, we could go as simple as creating a new github workflow job that simply runs a job of `echo test > tests_1.txt; echo test2 > tests_2.txt` and then uploads `tests_*` as an artifact and checking that it is what you want. It should just work, since the docs suggest that as an example. Once we know it's working then the rest is easy.
Earlier you were talking about some possible problems with this - something about the job being always successful, I can't find that comment - but I am pretty sure there is no such issue with the approach I implemented - where `pytest` generates all the report files and we don't need to do anything about its log parsing.<|||||>> waiting on you guys to hear what reports do you want and which not before finalizing this.
Don't wait, just make a sensible choice that's easy to change. Lean towards fewer reports.
> Can you suggest a different way of testing this?
I don't know a good way of testing github actions. [act](https://github.com/nektos/act) looks promising, but I've never used it. The issue is not permissions it is that github workflows, afaict, cannot be run against arbitrary branches. There is a "rerun all jobs" button, but it will just rerun on master. Would be incredibly valuable if you figured out how to test github actions locally.
Here is everything I can see for self-push at https://github.com/huggingface/transformers/actions/runs/326336555/workflow
<|||||>I agree with Sam that we can merge to test and iterate if the reports look wrong (as soon as we're sure that the circleCI part is good to go, which we can test on this PR). From what I understand, the PR adds a new job, so it does not break the existing ones/reports.<|||||>I will work on completing this and we can put it in for one circle-ci and one github workflow and see how it goes - thank you for your feedback, @sshleifer and @sgugger <|||||>This is good to merge. |
transformers | 7,994 | closed | BertTokenizer's add_token won't add token | `add_token` actually won't add token. Please refer to the code below:
```
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)
tokenizer.vocab_size
>>30522
tokenizer.add_tokens(new_tokens=['[SUBJ]', '[OBJ]'], special_tokens=True)
>>2
tokenizer.vocab['[OBJ]']
>> KeyError: '[OBJ]'
tokenizer.vocab_size
>>30522 # not changed
tokenizer.tokenize('[OBJ]')
>>['[', 'ob', '##j', ']'] # expected: '[OBJ]'
``` | 10-23-2020 00:26:46 | 10-23-2020 00:26:46 | It seems it does not work until I save and reload the tokenizer. I will close the issue.
```
tokenizer.save_pretrained('/path/to/tokenizer')
tokenizer = BertTokenizer.from_pretrained('/path/to/tokenizer')
```<|||||>looks like tokenizer.vocab_size does not update after add tokens. but len(tokenizer) shows correct number <|||||>Yes, the `vocab_size` only contains the information relative to the initial vocabulary. You can find the added tokens either in `tokenizer.get_added_vocab()`, which returns the dictionary, or `tokenizer.added_tokens_encoder`, which returns the amount of added tokens. |
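A short sketch of the pattern implied by the thread above (hedged; `model` stands for whichever model is being fine-tuned with the added tokens):
```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

tokenizer.add_tokens(["[SUBJ]", "[OBJ]"], special_tokens=True)
print(len(tokenizer))               # initial vocab plus the added tokens
print(tokenizer.get_added_vocab())  # e.g. {'[SUBJ]': 30522, '[OBJ]': 30523}

model.resize_token_embeddings(len(tokenizer))  # make room for the new token ids
```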
transformers | 7,993 | closed | [docs] [testing] distributed training | We figured out how to support distributed training with `pytest`, this is a preliminary doc snippet to help those in need to find the current implementation. I'm sure it will evolve as we have more tests with varying needs, but for now that's all we have.
@sgugger | 10-22-2020 21:25:36 | 10-22-2020 21:25:36 | One followup is to update the `test_trainer_distributed` to work with pytest. Then ideally, if we could have one command to run all those tests, that would be awesome (maybe we can use a pytest marker to mark all distributed-specific tests so it's easy to select them all?)<|||||>I will port `test_trainer_distributed` - thank you for flagging that, @sgugger
Tracking it here: https://github.com/huggingface/transformers/issues/8058 |
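For reference, a rough sketch of the marker idea (hedged; the marker name and test body are illustrative):
```python
import pytest

@pytest.mark.distributed  # register the marker in setup.cfg or pytest.ini to avoid warnings
def test_trainer_distributed():
    ...  # spawn worker processes / call init_process_group here

# all distributed-specific tests can then be selected with: pytest -m distributed
```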
transformers | 7,992 | closed | update zero shot default widget example | # What does this PR do?
Just changing bart's zero shot widget example. | 10-22-2020 21:19:15 | 10-22-2020 21:19:15 | |
transformers | 7,991 | closed | [Reformer] remove reformer pad_token_id | # What does this PR do?
The `crime-and-punishment` tokenizer actually does not have a `pad_token_id` - check with this [notebook](https://colab.research.google.com/github/google/trax/blob/master/trax/models/reformer/text_generation.ipynb#scrollTo=iDgvKNa_DDIq). Since this is our only tokenizer for Reformer, we should remove the `pad_token` completely from the Reformer Tokenizer script (otherwise `tokenizer.pad_token_id` gets an id >= `tokenizer.max_len`).
Since `crime-and-punishment` runs on causal attention, any token can be set to the padding token during inference.
Thus before padding one should do `tokenizer.pad_token = tokenizer.eos_token`.
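For illustration, a minimal sketch of that usage (assuming the public `google/reformer-crime-and-punishment` checkpoint):
```python
from transformers import ReformerTokenizer

tokenizer = ReformerTokenizer.from_pretrained("google/reformer-crime-and-punishment")
tokenizer.pad_token = tokenizer.eos_token  # reuse EOS as padding for batched inference
batch = tokenizer(["He was tired.", "A longer sentence that needs padding."], padding=True, return_tensors="pt")
```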
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #7929
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| 10-22-2020 20:59:50 | 10-22-2020 20:59:50 | |
transformers | 7,990 | closed | Handling longformer model_type | Updating the run_squad training script to handle the "longformer" `model_type`. The longformer is trained in the same way as RoBERTa, so I've added the "longformer" `model_type` (that's the right Hugging Face name for the Longformer model, right?) everywhere there was a "roberta" `model_type` reference. The longformer (like RoBERTa) doesn't use `token_type_ids` (as I understand from looking at the [longformer notebook](https://github.com/patil-suraj/Notebooks/blob/master/longformer_qa_training.ipynb)), which is what gets updated after this change.
This fix might be related to [this issue](https://github.com/huggingface/transformers/issues/7249) with SQuAD training when using run_squad.py
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| 10-22-2020 20:18:15 | 10-22-2020 20:18:15 | cc @patil-suraj @patrickvonplaten @xixiaoyao |
transformers | 7,989 | closed | [gh ci] less output ( --durations=50) | Way too much output in [this](https://github.com/huggingface/transformers/pull/7989)
This will make it slightly better.
cc @stas00
| 10-22-2020 19:59:22 | 10-22-2020 19:59:22 | Thank you for the heads up - I will be working on all these related issues shortly - too much data indeed, but not just that.<|||||>I'd say remove them complete for now and also -rA - I need to experiment and see how to make this data available w/o making logs unusable. <|||||>More context on github actions:
if we can somehow catch the return value of
bash
```
x= python -m pytest -n 1 --dist=loadfile -s examples --durations=50 | tee test_output.txt
save test_output.txt # always succeeds
sys.exit(x)
```
or something like that, we can make huge progress on the github actions issue and start making artifacts files.
The reason artifacts files broke was that even in the below code, even if line 1 raises an error, line 2 succeeds so github actions thinks the job succeeded
```bash
python -m pytest -n 1 --dist=loadfile -s examples --durations=50 | tee test_output.txt
save test_output.txt # always succeeds
```<|||||>oh, unless I'm missing something, we don't need any of the workarounds.
I already have the first requested component (failures) working, see: https://github.com/huggingface/transformers/pull/7995
Check out the resulting artifacts:
https://app.circleci.com/pipelines/github/huggingface/transformers/14354/workflows/1ccd616e-218f-4ae1-b413-91d2faa0e942/jobs/104363/artifacts
this is what we want right?
`pytest` provides hooks for doing this kind of work, so it's just figuring out which hooks to call.
In your example instead of `x = cmd` what you need to save is `$?` which is the exit status of the command.
<|||||>That's great, but note that this is all much easier in circleci. My ask is to make it work in github actions.
The failures are already pretty easy to find in circleci.
2) you mean
```bash
x= python -m pytest -n 1 --dist=loadfile -s examples --durations=50 | tee test_output.txt
save test_output.txt # always succeeds
sys.exit($x)
```
?<|||||>`test_failures.txt` is really nice!<|||||>Ah, good point. Let me see what other handy reports I can squeeze into circle-ci and then I will move to github actions.
I'm not following your question yet, let me get to github actions and then it'll probably make sense, but yes I'm referring to that example when I said:
> In your example instead of `x = cmd` what you need to save is `$?` which is the exit status of the command.
i.e. `sys.exit($?)` but you must save it right away upon `pytest` completion, since the next command will overwrite it. |
transformers | 7,988 | closed | [Good first issue] Documentation links in older docs versions | # 🚀 Feature request
This is a documentation request in order to make it easier to find corresponding examples in the documentation.
Good first issue if you want to get acquainted with the docs and how to build docs using Sphinx!
## Current issue
Here's the issue: currently, if one goes to an older documentation version to check the "examples" page, for example, [v2.6.0](https://huggingface.co/transformers/v2.6.0/examples.html), all links point towards the `master` branch.
For example, the link towards `run_tf_glue.py` is the following: https://github.com/huggingface/transformers/blob/master/examples/run_tf_glue.py
As this points towards the `master` branch, it is prone to breaking as files can (and probably will) be moved around as versions come out. It is the case for this example, as the `run_tf_glue.py` script is not in `examples/` anymore, but in `examples/text-classification/`.
I think we need a way to ensure that all links point toward their appropriate version, and the easiest would be to point to a given tag. Since we're looking at the version `v2.6.0`, it makes sense to point towards the tag v2.6.0: https://github.com/huggingface/transformers/blob/v2.6.0/examples/run_tf_glue.py
This way links get frozen in time and redirect to actual files corresponding to their description and behaviour as stated in the docs.
## Resolution
I believe the easiest change would be to use sphinx variables in order to do this. Probably either [rst_epilog](https://www.sphinx-doc.org/en/master/usage/configuration.html#confval-rst_epilog) or [rst_prolog](https://www.sphinx-doc.org/en/master/usage/configuration.html#confval-rst_prolog) could be useful here.
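For illustration, a rough `conf.py` sketch of the `rst_prolog` idea (hedged; the link target name is made up):
```python
# conf.py
release = "2.6.0"  # normally derived from the installed package version

# Prepended to every .rst source file, so each page can reference the pinned
# target `transformers_examples_` instead of hard-coding a master-branch URL.
rst_prolog = f"""
.. _transformers_examples: https://github.com/huggingface/transformers/tree/v{release}/examples
"""
```
Because `rst_prolog` is prepended to every source file, the hyperlink target resolves on each page and always points at the tag matching the docs version being built.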
Some useful links: [rst_epilog](https://www.sphinx-doc.org/en/master/usage/configuration.html#confval-rst_epilog), [rst_prolog](https://www.sphinx-doc.org/en/master/usage/configuration.html#confval-rst_prolog) | 10-22-2020 19:36:34 | 10-22-2020 19:36:34 | Hi, has anyone picked this up? I can give it a go in the coming week or two when I have some let off from releases at work if no-one is doing this. <|||||>Nobody has picked this up yet, would love to see such a contribution!<|||||>Awesome, have our last release till year end freeze this week at work. I'll get in there afterwards. Would like to learn about some of the libraries involved in this project this seems like a good intro. |
transformers | 7,987 | closed | TFMarian, TFMbart, TFPegasus, TFBlenderbot | ### Notes:
- add `TFSinusoidalPositionalEmbeddings`.
- Code structure identical to the corresponding pytorch code -- same classes, implementations differ only slightly.
- Integration tests and common tests, rst updates, for all 4 children. All 4 children run the same common tests as TFBart and at least 1 integration test.
- For pegasus, generations are not identical to PT because Linear layers are slightly different in tf/pt. For Marian, generations are identical.
- Loading will generate 0 warnings.
| 10-22-2020 19:26:42 | 10-22-2020 19:26:42 | @patrickvonplaten
+ I deleted the `_force_token_id` function, replaced with faster `tf.where` one-liner. (+ added regression test).
+ Replaced unneeded `TFSharedEmbedding` with `tf.keras.layers.Embedding`
+ switched all `.shape` to `shape_list`
WDYT? |
transformers | 7,986 | closed | T5 Decoder Inputs | # ❓ Questions & Help
Just confirming that my data preprocessing is perfect for T5. I added a print statement in `T5ConditionalGeneration` for the `decoder_input_ids` and `decoder_attention_mask` just before they're passed to the decoder. Which of these is right?
```
# pad_token prepended, eos_token is not in the sequence
decoder_input_ids: tensor([[0, 2018, 55, 0, 0, 0, 0]])
decoder_attention_mask: tensor([[1, 1, 1, 0, 0, 0, 0]])
# pad_token prepended, eos_token unmasked in attention
decoder_input_ids: tensor([[0, 2018, 55, 1, 0, 0, 0]])
decoder_attention_mask: tensor([[1, 1, 1, 1, 0, 0, 0]])
# pad_token prepended, eos_token masked in attention
decoder_input_ids: tensor([[0, 2018, 55, 1, 0, 0, 0]])
decoder_attention_mask: tensor([[1, 1, 1, 0, 0, 0, 0]])
``` | 10-22-2020 19:17:40 | 10-22-2020 19:17:40 | your 2nd option is correct here:
```python
# pad_token prepended, eos_token unmasked in attention
decoder_input_ids: tensor([[0, 2018, 55, 1, 0, 0, 0]])
decoder_attention_mask: tensor([[1, 1, 1, 1, 0, 0, 0]])
```
1) You have to start with `decoder_start_token_id = pad_token_id` in T5 and
2) the last EOS token should be attended to because the model "should learn" when the sentence is finished.<|||||>@patrickvonplaten Thanks, Patrick! That makes perfect sense. You're awesome!<|||||>@patrickvonplaten I noticed that when you pass this to the model:
```
decoder_input_ids: tensor([[2018, 55, 1, 0]])
decoder_attention_mask: tensor([[1, 1, 1, 0])
```
`T5ConditionalGeneration` changes it to this before passing it to the decoder:
```
# Masks the eos_token
# Correctly prepends an extra pad_id to inputs BUT appends a pad_token to attention_mask
decoder_input_ids: tensor([[0, 2018, 55, 1, 0]])
decoder_attention_mask: tensor([[1, 1, 1, 0, 0])
```
So that you have to actually pass this initially:
```
# Pass this to model
decoder_input_ids: tensor([[2018, 55, 1, 0]])
decoder_attention_mask: tensor([[1, 1, 1, 1])
# Which turns into this
decoder_input_ids: tensor([[0, 2018, 55, 1, 0]])
decoder_attention_mask: tensor([[1, 1, 1, 1, 0])
```<|||||>Hey @alexorona - sorry I don't quite follow here...could you provide a code example that I can run to see what you mean? :-) |
transformers | 7,985 | closed | [setup] require torch>=1.4 | I run the non-slow test suite on lower torch versions:
* torch-1.2 and below is definitely a no go - a gazillion of errors in the test suite.
* torch-1.3+ is mostly OK, but:
```
FAILED tests/test_modeling_bart.py::BartHeadTests::test_generate_fp16 - RuntimeError: "argmax_cuda" not implemented for 'Half'
FAILED tests/test_modeling_funnel.py::FunnelModelIntegrationTest::test_inference_tiny_model - OSError: Unable to load weights from pytorch checkpoi...
FAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_model_outputs_equivalence - RuntimeError: Expected object of scalar type Float but got scal...
FAILED tests/test_modeling_lxmert.py::LxmertModelTest::test_lxmert_pretraining - RuntimeError: Expected object of scalar type Float but got scalar ...
FAILED tests/test_modeling_openai.py::OpenAIGPTModelTest::test_model_outputs_equivalence - RuntimeError: Expected object of scalar type Float but g...
FAILED tests/test_modeling_reformer.py::ReformerLocalAttnModelTest::test_reformer_model_fp16_generate - RuntimeError: "argmax_cuda" not implemented...
FAILED tests/test_modeling_reformer.py::ReformerLSHAttnModelTest::test_reformer_model_fp16_forward - RuntimeError: "argmax_cuda" not implemented fo...
FAILED tests/test_modeling_reformer.py::ReformerLSHAttnModelTest::test_reformer_model_fp16_generate - RuntimeError: "argmax_cuda" not implemented f...
```
which could be fixed in the core if desired, but it won't pass as is right now.
* torch-1.4 - has mostly serialization issues (files saved with new pytorch can't be read by the old one)
Hence changing to `torch>=1.4`
@sgugger, @LysandreJik | 10-22-2020 18:57:16 | 10-22-2020 18:57:16 | @patrickvonplaten @sshleifer @sgugger Could we solve the errors here to have `torch>=1.3` instead?<|||||>May be it's simpler to set it to 1.4+ and only if someone asks for it to bother with 1.3?<|||||>After discussion with the team, we'll take a look at supporting v1.3+ in the coming days, and if it requires too many efforts we'll stick with v1.4+. We'll take this as an opportunity to test the versions we say we support as well (1.3, 1.4, 1.5, 1.6, 1.7) so that the README isn't full of empty promises :slightly_smiling_face:.
Could you keep your PR as-is for the coming days, and let me come back to you when we've reached a consensus?<|||||>That's an excellent and clear proposition, @LysandreJik - thank you!<|||||>> We'll take this as an opportunity to test the versions we say we support as well
If I may propose a scheduled CI that runs all tests for each of the supported versions, say, once a week or so. Probably `tf` too.
I trust you will have the best plan. <|||||>ping<|||||>will try to work on it today - are we sticking to torch 1.3 @LysandreJik ?
Maybe we could discuss also whether we can do some more general optimizations in the lib then (I think we can safely change the attention masks to bools then)<|||||>Yes, we are! There's a branch in progress here: https://github.com/huggingface/transformers/tree/previous-torch
Feel free to push fixes onto it directly. I've been planning on doing so right after the TAPAS merge.<|||||>The branch tests out torch versions going back to v1.3. It's not setup for slow tests right now, and it tests on every commit. I haven't really thought about if this is the best way to do so, but it's certainly easier to debug the failing tests this way.<|||||>I don't suppose there is a point at resolving the conflict, right? <|||||>ping<|||||>I'd like to get to it as soon as I have some availability.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread.<|||||>ping<|||||>It's on the roadmap!<|||||>It seems wasteful trying to keep up with the conflicts for more than 6 months. Since it's on the roadmap I think it's safe to close this one now. |
transformers | 7,984 | closed | Reload checkpoint | # What does this PR do?
This PR fixes a few bugs linked to resuming training from a checkpoint, mainly:
- the progress was not properly displayed (beginning at 0 instead of the step from the checkpoint)
- reloading the optimizer state and scheduler state on TPU was causing an error
Tested on TPU, single-GPU and multi-GPU env.
Fixes #4963
Fixes #7976
| 10-22-2020 18:49:48 | 10-22-2020 18:49:48 | |
transformers | 7,983 | closed | add zero shot pipeline tags & examples | # What does this PR do?
Adds the zero shot pipeline tag as well as default examples for a selection of pre-trained MNLI models. cc @patil-suraj | 10-22-2020 17:59:28 | 10-22-2020 17:59:28 | |
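For context, the pipeline these tags refer to can be used roughly like this (a hedged sketch with one of the MNLI checkpoints):
```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
result = classifier("A new Mars rover launched today.", candidate_labels=["space", "politics", "sports"])
print(result["labels"][0])  # most likely label
```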
transformers | 7,982 | closed | [s2s test] examples/seq2seq/test_finetune_trainer.py::TestFinetuneTrainer::test_finetune_trainer_slow fails on GPU | This works (cpu / any pytorch):
```
CUDA_VISIBLE_DEVICES="" RUN_SLOW=1 pytest -sv examples/seq2seq/test_finetune_trainer.py::TestFinetuneTrainer::test_finetune_trainer_slow
```
This fails torch-1.5/gpu or 1.6, or nightly:
```
CUDA_VISIBLE_DEVICES="0" RUN_SLOW=1 pytest -sv examples/seq2seq/test_finetune_trainer.py::TestFinetuneTrainer::test_finetune_trainer_slow
```
Same with pytorch-nightly. same with py37 and py38.
Error:
```
{'eval_loss': 5223.2333984375, 'eval_bleu': 0.0, 'eval_gen_len': 1.0, 'epoch': 1.0}
{'eval_loss': 5064.154296875, 'eval_bleu': 0.0, 'eval_gen_len': 1.0, 'epoch': 2.0}
{'eval_loss': 4966.837890625, 'eval_bleu': 0.0, 'eval_gen_len': 3.8, 'epoch': 3.0}
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6/6 [00:03<00:00, 1.55it/s]
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 4.69it/s]
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:01<00:00, 3.04it/s]FAILED
====================================================================== FAILURES ======================================================================
___________________________________________________ TestFinetuneTrainer.test_finetune_trainer_slow ___________________________________________________
self = <seq2seq.test_finetune_trainer.TestFinetuneTrainer testMethod=test_finetune_trainer_slow>
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:01<00:00, 2.78it/s]
@slow
def test_finetune_trainer_slow(self):
# There is a missing call to __init__process_group somewhere
output_dir = self.run_trainer(eval_steps=2, max_len="128", model_name=MARIAN_MODEL, num_train_epochs=3)
# Check metrics
logs = TrainerState.load_from_json(os.path.join(output_dir, "trainer_state.json")).log_history
eval_metrics = [log for log in logs if "eval_loss" in log.keys()]
first_step_stats = eval_metrics[0]
last_step_stats = eval_metrics[-1]
> assert first_step_stats["eval_bleu"] < last_step_stats["eval_bleu"] # model learned nothing
E AssertionError: assert 0.0 < 0.0
examples/seq2seq/test_finetune_trainer.py:36: AssertionError
```
env:
```
- `transformers` version: 3.4.0
- Platform: Linux-4.15.0-118-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.9
- PyTorch version (GPU?): 1.5.0 (True)
- Tensorflow version (GPU?): 2.3.1 (True)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
@sshleifer | 10-22-2020 17:15:52 | 10-22-2020 17:15:52 | I would fix this by training on much more data (like 1000 obs) and getting the loss down much further.<|||||>I used more iterations - 6 was enough for 1 gpu, 10 for 2, so I went with 10.
This issue will be resolved by https://github.com/huggingface/transformers/pull/7965 |
transformers | 7,981 | closed | Only log total_flos at the end of training | # What does this PR do?
This PR removes the addition of `total_flos` at each (and every) log, since this kind of pollutes them, and only logs it once and for all at the end of training. Users can still define their own callbacks and do more with that value if they really want to, but from what I understood. @TevenLeScao, that value is mainly necessary at the end.
Also, now that it's not in the metrics anymore, I've reverted the default compute metrics to its previous behavior (sum of all metrics) since it's the documented behavior. (cc @madlag) If we want to really change it, we need to put more examples out there. | 10-22-2020 15:50:14 | 10-22-2020 15:50:14 | |
transformers | 7,980 | closed | 'DistributedDataParallel' object has no attribute 'save_pretrained' | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
TensorFlow enthusiasts can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
Hey, I want to use EncoderDecoderModel for parallel training. When I save my model, I get the following error. How can I fix this?
'DistributedDataParallel' object has no attribute 'save_pretrained'
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on the forum/Stack Overflow**: | 10-22-2020 15:21:41 | 10-22-2020 15:21:41 | Could you provide the information related to your environment, as well as the code that outputs this error, like it is asked in the issue template?<|||||>I am facing same issue as the given issu 'DistributedDataParallel' is custom class created by coder that is having base model available in Transformer repo
In the code below, that class is "SentimentClassifier":
class SentimentClassifier(nn.Module):
    def __init__(self, n_classes):
        super(SentimentClassifier, self).__init__()
        self.bert = BertModel.from_pretrained("bert-base-multilingual-cased")
        self.drop = nn.Dropout(p=0.3)
        self.out = nn.Linear(self.bert.config.hidden_size, n_classes)

    def forward(self, input_ids, attention_mask):
        _, pooled_output = self.bert(
            input_ids=input_ids,
            attention_mask=attention_mask
        )
        output = self.drop(pooled_output)
        return self.out(output)
that is why it is giving error -
SentimentClassifier object has no attribute 'save_pretrained'
which is correct, but I also want to know how I can save that model with my trained weights, just like the base model, so that I can import it in a few lines and use it.
The only thing I am able to obtain from this fine-tuning is a .bin file,
and I am not able to load the state dict either.
I am looking for way to save my finetuned model with "save_pretrained"<|||||>Instead of inheriting from `nn.Module` you could inherit from `PreTrainedModel`, which is the abstract class we use for all models, that contains `save_pretrained`. Can you try that?<|||||>fine-tuning codes I seen on hugging face repo itself shows the same way to do that...so I did that...
bdw I will try as you said and will update here
here is the link i refered that from
https://huggingface.co/transformers/notebooks.html
<|||||>Hey, My code just like this
```
from transformers import EncoderDecoderModel, BertTokenizer
import torch
import argparse
import os
import argparse
import torch.multiprocessing as mp
import torchvision
import torchvision.transforms as transforms
import torch.nn as nn
import torch.distributed as dist
def main():
parser = argparse.ArgumentParser()
args = parser.parse_args()
args.max_src_len = 512
args.max_dst_len = 128
args.gpus = 4
args.world_size = args.gpus
args.epoches = 30
mp.spawn(train, nprocs=args.gpus, args=(args,))
def train(gpu, args):
rank = gpu
dist.init_process_group(
backend='nccl',
init_method='tcp://127.0.0.1:23456',
world_size=args.world_size,
rank=rank
)
torch.manual_seed(0)
model = EncoderDecoderModel.from_pretrained("bert2bert")
torch.cuda.set_device(gpu)
model = model.to(gpu)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
model = nn.parallel.DistributedDataParallel(model, device_ids=[gpu])
dataset_path = 'dataset/example.json'
vocab_path = 'dataset/vocab.txt'
dataset = CNNDataset(dataset_path, vocab_path, args)
train_sampler = torch.utils.data.distributed.DistributedSampler(
dataset,
num_replicas=args.world_size,
rank=rank
)
dataloader = DataLoader(dataset, batch_size=32, shuffle=False,
num_workers=0,
pin_memory=True,
sampler=train_sampler)
cnt = 0
for epoch in range(args.epoches):
for src, dst in dataloader:
src = torch.stack(src).to(gpu)
dst = torch.stack(dst).to(gpu)
mask = (src!=0)
mask = mask.long()
outputs = model(input_ids=src, attention_mask=mask, decoder_input_ids=dst, labels=dst, return_dict=True)
loss, logits = outputs.loss, outputs.logits
optimizer.zero_grad()
loss.backward()
optimizer.step()
if cnt % 1000 == 0 and gpu == 0 :
model.save_pretrained("bert2bert")
cnt = cnt + 1
if __name__ == '__main__':
main()
```
@LysandreJik ,@ganeshkharad2<|||||>I can save this with state_dict. But how can I load it again with from_pretrained method ? <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>> I can save this with state_dict. But how can I load it again with from_pretrained method ?
Hi, i meet the same problem, have you solved this problem? or?<|||||>> I can save this with state_dict. But how can I load it again with from_pretrained method ?
Hi, Did you find any workaround for this? Thanks in advance.<|||||>Any solution for this? |
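For reference, the usual pattern is to unwrap the DDP container before saving and to load with `from_pretrained` as usual (a hedged sketch; `"bert2bert"` is the output directory from the training snippet above):
```python
# `model` is the nn.parallel.DistributedDataParallel wrapper built in the training script
model_to_save = model.module if hasattr(model, "module") else model
model_to_save.save_pretrained("bert2bert")

# later, e.g. for inference:
# from transformers import EncoderDecoderModel
# model = EncoderDecoderModel.from_pretrained("bert2bert")
```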
transformers | 7,979 | closed | How to make some structural changes to the EncoderDecoderModel ? | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
TensorFlow enthusiasts can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
Hey , I use EncoderDecoderModel for abstractive summarization. I load the bert2bert model like this
model=EncoderDecoderModel.from_encoder_decoder_pretrained('bert-base-uncased', 'bert-base-uncased')
And I want to make some structural changes to the output layer of decoder model.
For example, in one decoder step, the output hidden state of bert-decoder is a vector (s). I use another network and I get a vector (w) to make the summarization more accurate. I want to concatenate the two vectors in the output layer and use the final vector to generate a word in the vocabulary.
How can I do this ?
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on the forum/Stack Overflow**: | 10-22-2020 13:48:15 | 10-22-2020 13:48:15 | Hey @yhznb,
We try to mainly use the github issues for bugs in the library. For more customized questions it would be great if you could use https://discuss.huggingface.co/ instead.
Regarding your question I would just add a layer to `BertLMHeadModel` wherever you want to and then build your `EncoderDecoderModel` from `BertModel` (encoder) & your use-case speciifc `BertLMHeadModel` (decoder).<|||||>Hey, @patrickvonplaten, I have the same question. Can you provide a example of building the EncoderDecoderModel from BertModel (encoder) & use-case speciifc BertLMHeadModel ? I can't find this in the official document. Thank you very much .<|||||>I think the model(EncoderDecoderModel) outputs all the hidden states at once . And I want to control it step by step. For example , I want to change the LMhead of Decoder by concatenating another vector. The problem is that the DecoderModel outputs all the hidden states at once. I want to control it for step by step decoding. In other words. I want to use the concatenated vector as the hidden state for generation and use the generated word vector for next step's input. How can I change the model or call the interface properly ? Is it possible under the framework of huggingface ?
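For illustration only, a rough sketch of the suggestion above (building the `EncoderDecoderModel` from `BertModel` plus a customized `BertLMHeadModel`; the subclass and any extra layers are hypothetical):
```python
from transformers import BertModel, BertLMHeadModel, EncoderDecoderModel

class CustomBertDecoder(BertLMHeadModel):
    # hypothetical subclass: add or change layers (e.g. the LM head) here
    pass

encoder = BertModel.from_pretrained("bert-base-uncased")
decoder = CustomBertDecoder.from_pretrained("bert-base-uncased", is_decoder=True, add_cross_attention=True)
model = EncoderDecoderModel(encoder=encoder, decoder=decoder)
```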
Thank you very much ! @patrickvonplaten<|||||>I also raised this in the forum. Does this issue need to be closed ?
The link is here :
https://discuss.huggingface.co/t/control-encoderdecodermodel-to-generate-tokens-step-by-step/1756<|||||>thank you very much ! @patrickvonplaten <|||||>Have you solved your question ? @AI678 I think it is all about changing the LMhaed and the calculation of logits. But I don't know how to change it .<|||||>Yes , you are right. @yhznb<|||||>> Hey @yhznb,
>
> We try to mainly use the github issues for bugs in the library. For more customized questions it would be great if you could use https://discuss.huggingface.co/ instead.
>
> Regarding your question I would just add a layer to `BertLMHeadModel` wherever you want to and then build your `EncoderDecoderModel` from `BertModel` (encoder) & your use-case speciifc `BertLMHeadModel` (decoder).
Sorry, I misunderstood what you meant. This is a feature to be developed. So, how long can this feature be developed ? thank you for your response.<|||||>Hey , I have similar demands. Because I think using only vanilla bert2bert or roberta2roberta is not sufficient for abstractive summarization . For fluency and information richness, we should consider to change the top layer of decoder for further learning.<|||||>Hey, @patrickvonplaten, when do you want to release that ? <|||||>@nlpLover123 , you can control it step by step. But I think it is too slow for a large dataset like cnn-dailymail.
And I also want to ask when do you want to release that ? @patrickvonplaten
If that needs too much time, maybe I would write a encoder_decoder_model from scratch. Because I have little time to wait for that.
Thank you very much .
<|||||>that is too difficult @AI678 .Maybe it is slower that step by step generation.<|||||>so I just want to make a specific change at the LMhead layer @moonlightarc <|||||>@AI678 , I don't think we are planning on releasing such a feature into the library. It's a very specific request and I'd suggest that you try to fork the repo and make the changes according to your needs |
transformers | 7,978 | closed | Disable inference API for t5-11b | 10-22-2020 12:39:29 | 10-22-2020 12:39:29 | ||
transformers | 7,977 | closed | GPT2 - Remove else branch adding 0 to the hidden state if token_type_embeds is None. | Currently, when `token_type_embeds` is `None` we set its value to `0` and add it to the triplet `inputs_embeds + position_embeds + token_type_embeds`.
This can be simplified to:
- avoid summing 0 over many elements
- avoid using raw Python scalar value which cannot be traced by TorchScript / ONNX when exporting.
Leading to:
> [ONNXRuntimeError] : 1 : FAIL : TensorRT input: 200 has no shape specified. Please run shape inference on the onnx model first.
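For illustration, a hedged sketch of the kind of change described (not necessarily the exact diff):
```python
# before (sketch): token_type_embeds could be a raw Python 0
# token_type_embeds = self.wte(token_type_ids) if token_type_ids is not None else 0
# hidden_states = inputs_embeds + position_embeds + token_type_embeds

# after (sketch): only add the embedding when token_type_ids is given
hidden_states = inputs_embeds + position_embeds
if token_type_ids is not None:
    hidden_states = hidden_states + self.wte(token_type_ids)
```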
Signed-off-by: Morgan Funtowicz <[email protected]> | 10-22-2020 12:36:02 | 10-22-2020 12:36:02 | |
transformers | 7,976 | closed | [XLA] Cannot restore from checkpoint on TPU | ## Environment info
- `transformers` version: 3.4.0
- Platform: Linux-4.9.0-13-amd64-x86_64-with-debian-9.13
- Python version: 3.6.10
- PyTorch version (GPU?): 1.8.0a0+e5ed037 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No (but using TPU)
- Using distributed or parallel set-up in script?: using xla_spawn.py
### Who can help
@LysandreJik @sgugger @TevenLeScao
## Information
Model I am using (Bert, XLNet ...): GPT2
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: examples/language-modeling/run_language_modeling.py but with HF datasets
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: Text generation
## To reproduce
Steps to reproduce the behavior:
1. Modify examples/language-modeling/run_language_modeling.py to below
```
import logging
import math
import os
import glob
import datasets
from dataclasses import dataclass, field
from typing import Optional
from datasets import list_datasets, load_dataset
from transformers import (
CONFIG_MAPPING,
MODEL_WITH_LM_HEAD_MAPPING,
AutoConfig,
AutoModelWithLMHead,
AutoTokenizer,
DataCollatorForLanguageModeling,
DataCollatorForPermutationLanguageModeling,
HfArgumentParser,
LineByLineTextDataset,
PreTrainedTokenizer,
TextDataset,
Trainer,
TrainingArguments,
set_seed,
)
logger = logging.getLogger(__name__)
MODEL_CONFIG_CLASSES = list(MODEL_WITH_LM_HEAD_MAPPING.keys())
MODEL_TYPES = tuple(conf.model_type for conf in MODEL_CONFIG_CLASSES)
@dataclass
class ModelArguments:
"""
Arguments pertaining to which model/config/tokenizer we are going to fine-tune, or train from scratch.
"""
model_name_or_path: Optional[str] = field(
default=None,
metadata={
"help": "The model checkpoint for weights initialization. Leave None if you want to train a model from scratch."
},
)
model_type: Optional[str] = field(
default=None,
metadata={"help": "If training from scratch, pass a model type from the list: " + ", ".join(MODEL_TYPES)},
)
config_name: Optional[str] = field(
default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"}
)
tokenizer_name: Optional[str] = field(
default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"}
)
cache_dir: Optional[str] = field(
default=None, metadata={"help": "Where do you want to store the pretrained models downloaded from s3"}
)
@dataclass
class DataTrainingArguments:
"""
Arguments pertaining to what data we are going to input our model for training and eval.
"""
train_data_file: Optional[str] = field(
default=None, metadata={"help": "The input training data file (a text file)."}
)
eval_data_file: Optional[str] = field(
default=None,
metadata={"help": "An optional input evaluation data file to evaluate the perplexity on (a text file)."},
)
line_by_line: bool = field(
default=False,
metadata={"help": "Whether distinct lines of text in the dataset are to be handled as distinct sequences."},
)
mlm: bool = field(
default=False, metadata={"help": "Train with masked-language modeling loss instead of language modeling."}
)
mlm_probability: float = field(
default=0.15, metadata={"help": "Ratio of tokens to mask for masked language modeling loss"}
)
plm_probability: float = field(
default=1 / 6,
metadata={
"help": "Ratio of length of a span of masked tokens to surrounding context length for permutation language modeling."
},
)
max_span_length: int = field(
default=5, metadata={"help": "Maximum length of a span of masked tokens for permutation language modeling."}
)
block_size: int = field(
default=-1,
metadata={
"help": "Optional input sequence length after tokenization."
"The training dataset will be truncated in block of this size for training."
"Default to the model max input length for single sentence inputs (take into account special tokens)."
},
)
overwrite_cache: bool = field(
default=False, metadata={"help": "Overwrite the cached training and evaluation sets"}
)
arrow: bool = field(
default=True,
metadata={
"help": "Use Arrow-based HF NLP for optimization."
},
)
def get_dataset(
args: DataTrainingArguments,
tokenizer: PreTrainedTokenizer,
evaluate: bool = False,
cache_dir: Optional[str] = "./cache",
):
tokenizer.pad_token = "<|endoftext|>"
tokenizer._pad_token = "<|endoftext|>"
#tokenizer.pad_token_id = 50256
file_path = args.eval_data_file if evaluate else args.train_data_file
if True:
dataset = datasets.load_from_disk(file_path)
dataset.set_format(type='torch', columns=['input_ids'])
return dataset
if False:
dataset = load_dataset("text", data_files=[file_path], split='train')
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True,
truncation=True, max_length=args.block_size), batched=True)
dataset.set_format(type='torch', columns=['input_ids'])
dataset.save_to_disk(file_path+'.arrow')
return dataset
if args.line_by_line:
return LineByLineTextDataset(tokenizer=tokenizer, file_path=file_path, block_size=args.block_size)
else:
return TextDataset(
tokenizer=tokenizer,
file_path=file_path,
block_size=args.block_size,
overwrite_cache=args.overwrite_cache,
cache_dir=cache_dir,
)
"""
dataset = load_dataset("text", data_files=file_path, split="train")
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True,
truncation=True, max_length=args.block_size), batched=True)
dataset.set_format(type='torch', columns=['input_ids'])
return dataset
"""
def main():
# See all possible arguments in src/transformers/training_args.py
# or by passing the --help flag to this script.
# We now keep distinct sets of args, for a cleaner separation of concerns.
parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
if data_args.eval_data_file is None and training_args.do_eval:
raise ValueError(
"Cannot do evaluation without an evaluation data file. Either supply a file to --eval_data_file "
"or remove the --do_eval argument."
)
if (
os.path.exists(training_args.output_dir)
and os.listdir(training_args.output_dir)
and training_args.do_train
and not training_args.overwrite_output_dir
):
raise ValueError(
f"Output directory ({training_args.output_dir}) already exists and is not empty. Use --overwrite_output_dir to overcome."
)
# Setup logging
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
level=logging.INFO if training_args.local_rank in [-1, 0] else logging.WARN,
)
logger.warning(
"Process rank: %s, device: %s, n_gpu: %s, distributed training: %s, 16-bits training: %s",
training_args.local_rank,
training_args.device,
training_args.n_gpu,
bool(training_args.local_rank != -1),
training_args.fp16,
)
logger.info("Training/evaluation parameters %s", training_args)
# Set seed
set_seed(training_args.seed)
# Load pretrained model and tokenizer
#
# Distributed training:
# The .from_pretrained methods guarantee that only one local process can concurrently
# download model & vocab.
if model_args.config_name:
config = AutoConfig.from_pretrained(model_args.config_name, cache_dir=model_args.cache_dir)
elif model_args.model_name_or_path:
config = AutoConfig.from_pretrained(model_args.model_name_or_path, cache_dir=model_args.cache_dir)
else:
config = CONFIG_MAPPING[model_args.model_type]()
logger.warning("You are instantiating a new config instance from scratch.")
if model_args.tokenizer_name:
tokenizer = AutoTokenizer.from_pretrained(model_args.tokenizer_name, cache_dir=model_args.cache_dir)
elif model_args.model_name_or_path:
tokenizer = AutoTokenizer.from_pretrained(model_args.model_name_or_path, cache_dir=model_args.cache_dir)
else:
raise ValueError(
"You are instantiating a new tokenizer from scratch. This is not supported, but you can do it from another script, save it,"
"and load it from here, using --tokenizer_name"
)
tokenizer.pad_token = "<|endoftext|>"
tokenizer._pad_token = "<|endoftext|>"
if model_args.model_name_or_path:
model = AutoModelWithLMHead.from_pretrained(
model_args.model_name_or_path,
from_tf=bool(".ckpt" in model_args.model_name_or_path),
config=config,
cache_dir=model_args.cache_dir,
)
else:
logger.info("Training new model from scratch")
model = AutoModelWithLMHead.from_config(config)
model.resize_token_embeddings(len(tokenizer))
if config.model_type in ["bert", "roberta", "distilbert", "camembert"] and not data_args.mlm:
raise ValueError(
"BERT and RoBERTa-like models do not have LM heads but masked LM heads. They must be run using the"
"--mlm flag (masked language modeling)."
)
if data_args.block_size <= 0:
data_args.block_size = tokenizer.max_len
# Our input block size will be the max possible for the model
else:
data_args.block_size = min(data_args.block_size, tokenizer.max_len)
# Get datasets
train_dataset = (
get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_train else None
)
eval_dataset = (
get_dataset(data_args, tokenizer=tokenizer, evaluate=True, cache_dir=model_args.cache_dir)
if training_args.do_eval
else None
)
if config.model_type == "xlnet":
data_collator = DataCollatorForPermutationLanguageModeling(
tokenizer=tokenizer,
plm_probability=data_args.plm_probability,
max_span_length=data_args.max_span_length,
)
else:
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer, mlm=data_args.mlm, mlm_probability=data_args.mlm_probability
)
# Initialize our Trainer
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
prediction_loss_only=True,
)
# Training
if training_args.do_train:
model_path = (
model_args.model_name_or_path
if model_args.model_name_or_path is not None and os.path.isdir(model_args.model_name_or_path)
else None
)
trainer.train(model_path=model_path)
trainer.save_model()
# For convenience, we also re-save the tokenizer to the same directory,
# so that you can share your model easily on huggingface.co/models =)
if trainer.is_world_master():
tokenizer.save_pretrained(training_args.output_dir)
# Evaluation
results = {}
if training_args.do_eval:
logger.info("*** Evaluate ***")
eval_output = trainer.evaluate()
perplexity = math.exp(eval_output["eval_loss"])
result = {"perplexity": perplexity}
output_eval_file = os.path.join(training_args.output_dir, "eval_results_lm.txt")
if trainer.is_world_master():
with open(output_eval_file, "w") as writer:
logger.info("***** Eval results *****")
for key in sorted(result.keys()):
logger.info(" %s = %s", key, str(result[key]))
writer.write("%s = %s\n" % (key, str(result[key])))
results.update(result)
return results
def _mp_fn(index):
# For xla_spawn (TPUs)
main()
if __name__ == "__main__":
main()
```
2. set torch-xla-nightly Conda & set env
3. run script from checkpoint (replace dataset, since I cannot upload 48 GB worth of arrow files)
```
XLA_USE_BF16=1 python3 examples/xla_spawn.py --num_cores 8 examples/language-modeling/run_language_modeling.py --output_dir=kogpt1 --model_type=gpt2 --do_train --train_data_file=/home/ksjcom0705_gmail_com/NEWS_ARROW --overwrite_output_dir --per_device_train_batch_size=6 --save_steps 10000 --num_train_epochs=1 --block_size 2048 --eval_steps 10000 --logging_steps=10000 --tokenizer_name /home/ksjcom0705_gmail_com/kotok --model_name_or_path=kogpt1/checkpoint-1000
```
The error is this:
```
Exception in device=TPU:3: don't know how to restore data location of torch.FloatStorage (tagged with xla:0)
Exception in device=TPU:5: don't know how to restore data location of torch.FloatStorage (tagged with xla:0)
Exception in device=TPU:6: don't know how to restore data location of torch.FloatStorage (tagged with xla:0)
Exception in device=TPU:1: don't know how to restore data location of torch.FloatStorage (tagged with xla:0)
Exception in device=TPU:0: don't know how to restore data location of torch.FloatStorage (tagged with xla:1)
Exception in device=TPU:4: don't know how to restore data location of torch.FloatStorage (tagged with xla:0)
Exception in device=TPU:7: don't know how to restore data location of torch.FloatStorage (tagged with xla:0)
Exception in device=TPU:2: don't know how to restore data location of torch.FloatStorage (tagged with xla:0)
Traceback (most recent call last):
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 330, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 324, in _start_fn
fn(gindex, *args)
File "/home/ksjcom0705_gmail_com/transformers/examples/language-modeling/run_language_modeling.py", line 332, in _mp_fn
main()
File "/home/ksjcom0705_gmail_com/transformers/examples/language-modeling/run_language_modeling.py", line 300, in main
trainer.train(model_path=model_path)
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/transformers/trainer.py", line 629, in train
torch.load(os.path.join(model_path, "optimizer.pt"), map_location=self.args.device)
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch/serialization.py", line 592, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch/serialization.py", line 851, in _load
result = unpickler.load()
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch/serialization.py", line 843, in persistent_load
load_tensor(data_type, size, key, _maybe_decode_ascii(location))
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch/serialization.py", line 832, in load_tensor
loaded_storages[key] = restore_location(storage, location)
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch/serialization.py", line 812, in restore_location
return default_restore_location(storage, str(map_location))
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch/serialization.py", line 180, in default_restore_location
+ location + ")")
RuntimeError: don't know how to restore data location of torch.FloatStorage (tagged with xla:0)
Traceback (most recent call last):
Traceback (most recent call last):
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 330, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 324, in _start_fn
fn(gindex, *args)
File "/home/ksjcom0705_gmail_com/transformers/examples/language-modeling/run_language_modeling.py", line 332, in _mp_fn
main()
File "/home/ksjcom0705_gmail_com/transformers/examples/language-modeling/run_language_modeling.py", line 300, in main
trainer.train(model_path=model_path)
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/transformers/trainer.py", line 629, in train
torch.load(os.path.join(model_path, "optimizer.pt"), map_location=self.args.device)
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 330, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 324, in _start_fn
fn(gindex, *args)
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch/serialization.py", line 592, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch/serialization.py", line 851, in _load
result = unpickler.load()
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch/serialization.py", line 843, in persistent_load
load_tensor(data_type, size, key, _maybe_decode_ascii(location))
File "/home/ksjcom0705_gmail_com/transformers/examples/language-modeling/run_language_modeling.py", line 332, in _mp_fn
```
(More of same thing below)
## Expected behavior
Run normally
| 10-22-2020 12:16:14 | 10-22-2020 12:16:14 | Pinging @sgugger |
transformers | 7,975 | closed | Fixing the "translation", "translation_XX_to_YY" pipelines. | # What does this PR do?
Actually make the "translation", "translation_XX_to_YY" task behave correctly.
Background:
- Currently "translation_cn_to_ar" does not work. (only 3 pairs are
supported)
- Some models contain in their config the correct values for the (src, tgt) pair they can translate. It's usually just one pair, and we can infer it automatically from `model.config.task_specific_params`. If it's not defined, we can still probably load the TranslationPipeline nevertheless.
Proposed fix:
- A simplified version of what could become a more general `parametrized` task. "translation" + (src, tgt) is what we need in this instance and in the general case. The way we go about it for now is simply parsing "translation_XX_to_YY" (a small sketch of the parsing is given after this list). If more cases of parametrized tasks arise, we should preferably go with something closer to what `datasets` proposes, which is having a secondary argument `task_options` that is close to what that task requires.
- Should be backward compatible in all cases; for instance `pipeline(task="translation_en_to_de")` should work out of the box.
- Should provide a warning when a specific translation pair has been
selected on behalf of the user using
`model.config.task_specific_params`.
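To make the parsing concrete, here is a minimal sketch of the idea (the regex and names are illustrative, not the actual implementation):
```python
import re


def parse_translation_task(task: str):
    """Return (src, tgt) for task names of the form "translation_XX_to_YY", else None."""
    match = re.match(r"^translation_([a-z]{2})_to_([a-z]{2})$", task)
    return match.groups() if match else None


# parse_translation_task("translation_en_to_de") -> ("en", "de")
# parse_translation_task("translation")          -> None (fall back to task_specific_params)
```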
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@LysandreJik
@patrickvonplaten
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 10-22-2020 11:38:48 | 10-22-2020 11:38:48 | But If 1 models implies 1 task, how should we cope with models that are able to do multiple things (like `t5-base`) ?
I think `datasets` does it correctly in that it does not make any choice on your behalf, but instead raises an Exception with your available choices.<|||||>> But If 1 models implies 1 task, how should we cope with models that are able to do multiple things (like `t5-base`) ?
>
> I think `datasets` does it correctly in that it does not make any choice on your behalf, but instead raises an Exception with your available choices.
I think T5Base should default to a `ForConditionalPipeline` (which is more or less the `Text2TextPipeline` we have right now). Then the user could either provide a pipeline config that makes sure "summarization" or "translation" params are used for the pipeline. Note: In the end of the day all Seq2Seq pipelines are exactly the same -> they are all based on `.generate()` and they only differ on which params (`max_length`, `prefix`, ...) are used. Or/And we create a very shallow "alias" pipeline named `class TranslationPipeline(ConditionalGenerationPipeline)` that only overwrites the config params similar to what we do here: https://github.com/huggingface/transformers/blob/901e9b8eda2fe88af717f960ddc05cac1803679b/src/transformers/pipelines.py#L568 right now.
Maybe @mfuntowicz can also give a bit more context on the Pipeline v2 vision here. |
transformers | 7,974 | closed | TrainingArguments error : TypeError: __init__() got an unexpected keyword argument 'evaluation_strategy' | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
When I use TrainingArguments (transformers 3.3.1), it raises the error `TypeError: __init__() got an unexpected keyword argument 'evaluation_strategy'`. I wonder why I get this error.
This is my code:
> training_args = TrainingArguments(
> output_dir="./no_num_pretrain_model",
> overwrite_output_dir=True,
> num_train_epochs=epochs,
> per_device_train_batch_size=16,
> per_device_eval_batch_size=32,
> do_train = True,
> do_eval = True,
> evaluation_strategy="steps",
> logging_steps = 10,
> save_steps=2000,
> eval_steps=10,
> )
>
>
> trainer = Trainer(
> model=model,
> args=training_args,
> data_collator=data_collator,
> train_dataset=train_dataset,
> eval_dataset=val_dataset, # evaluation dataset
> optimizers =(optimizer,scheduler)
> )
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on the forum/Stack Overflow**: | 10-22-2020 11:31:00 | 10-22-2020 11:31:00 | It was recently added, so you may need to upgrade your version of transformers.<|||||>> It was recently added, so you may need to upgrade your version of transformers.
Thanks, it works!
I use transformers in a
> kaggle notebook
, so maybe there is some bug in such online applications. I once tried to update transformers to see whether this error would go away, but it did not work.
This time I closed my browser and restarted the kaggle notebook. Everything is going well! <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>I've just successfully solved this problem using this command.
`pip install transformers --upgrade`
|
transformers | 7,973 | closed | support relative path for best_model_checkpoint | # What does this PR do?
Fixes #7431
When I give a relative path to `output_dir`, it raises some error at line 1222.
https://github.com/huggingface/transformers/blob/83481056921296fadbdc86cd51c157a9a9327946/src/transformers/trainer.py#L1205-L1227
If I give './path1/path2' to `output_dir`, `checkpoints_sorted` ends up containing 'path1/path2' because of the Path handling on line 1208, but `self.state.best_model_checkpoint` is still './path1/path2'.
So it raises the error below.
```
Traceback (most recent call last):
File "../finetune.py", line 369, in <module>
main(**vars(args))
File "../finetune.py", line 315, in main
device)
File "../finetune.py", line 120, in train
trainer.train(optim_pretrained_path)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 803, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 860, in _maybe_log_save_evaluate
self._save_checkpoint(model, trial, metrics=metrics)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 918, in _save_checkpoint
self._rotate_checkpoints(use_mtime=True)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1235, in _rotate_checkpoints
checkpoints_sorted = self._sorted_checkpoints(use_mtime=use_mtime)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1223, in _sorted_checkpoints
best_model_index = checkpoints_sorted.index(self.state.best_model_checkpoint)
ValueError: './results/use_pretrained_test/checkpoint-1162' is not in list
```
So I resolved this error by normalizing `self.state.best_model_checkpoint` with `Path` as well (a short sketch follows).
Any other idea is also welcome; please check it.
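A minimal sketch of the idea, reusing the names from the snippet above (not the exact patch):
```python
from pathlib import Path

# Normalize the stored best checkpoint the same way checkpoints_sorted was normalized,
# so './results/.../checkpoint-1162' and 'results/.../checkpoint-1162' compare equal.
best_model_index = checkpoints_sorted.index(str(Path(self.state.best_model_checkpoint)))
```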
| 10-22-2020 11:11:02 | 10-22-2020 11:11:02 | |
transformers | 7,972 | closed | Unable to load UnifiedQA models, tf throws DataLossError | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.4.0
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0a0+b31f58d (False)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
Probably
T5: @patrickvonplaten
tensorflow: @jplu
## Information
Model I am using UnifiedQA (based on T5):
The problem arises when using the model loading code provided in the [UnifiedQA Readme](https://github.com/allenai/unifiedqa#using-the-models-in-pytorchhuggingface), shown below. Loading the models fails with a DataLossError.
Code:
```
from transformers import T5Config, T5Tokenizer, T5ForConditionalGeneration
from transformers.modeling_t5 import load_tf_weights_in_t5
base_model = "t5-small"
tokenizer = T5Tokenizer.from_pretrained(base_model)
model = T5ForConditionalGeneration(T5Config.from_pretrained(base_model))
load_tf_weights_in_t5(model, None, "./models/unifiedqa-small/")
```
Error:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/py_checkpoint_reader.py in NewCheckpointReader(filepattern)
94 try:
---> 95 return CheckpointReader(compat.as_bytes(filepattern))
96 # TODO(b/143319754): Remove the RuntimeError casting logic once we resolve the
RuntimeError: Unable to open table file /content/base: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?
During handling of the above exception, another exception occurred:
DataLossError Traceback (most recent call last)
5 frames
<ipython-input-27-b28dfb350abf> in <module>()
7
8 model_path = './base/' #@param ['./unifiedqa-base/', './base/']
----> 9 load_tf_weights_in_t5(model, None, model_path)
10
11 # tokenizer = T5Tokenizer.from_pretrained('t5-base')
/usr/local/lib/python3.6/dist-packages/transformers/modeling_t5.py in load_tf_weights_in_t5(model, config, tf_checkpoint_path)
78 logger.info("Converting TensorFlow checkpoint from {}".format(tf_path))
79 # Load weights from TF model
---> 80 init_vars = tf.train.list_variables(tf_path)
81 names = []
82 tf_weights = {}
/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/checkpoint_utils.py in list_variables(ckpt_dir_or_file)
96 List of tuples `(name, shape)`.
97 """
---> 98 reader = load_checkpoint(ckpt_dir_or_file)
99 variable_map = reader.get_variable_to_shape_map()
100 names = sorted(variable_map.keys())
/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/checkpoint_utils.py in load_checkpoint(ckpt_dir_or_file)
65 raise ValueError("Couldn't find 'checkpoint' file or checkpoints in "
66 "given directory %s" % ckpt_dir_or_file)
---> 67 return py_checkpoint_reader.NewCheckpointReader(filename)
68
69
/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/py_checkpoint_reader.py in NewCheckpointReader(filepattern)
97 # issue with throwing python exceptions from C++.
98 except RuntimeError as e:
---> 99 error_translator(e)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/py_checkpoint_reader.py in error_translator(e)
42 raise errors_impl.InvalidArgumentError(None, None, error_message)
43 elif 'Unable to open table file' in error_message:
---> 44 raise errors_impl.DataLossError(None, None, error_message)
45 elif 'Failed to find the saved tensor slices' in error_message:
46 raise errors_impl.InternalError(None, None, error_message)
DataLossError: Unable to open table file /path/to/models/unifiedqa-small: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?
```
Perhaps there is an issue in the code for loading from TensorFlow; see this [related issue](https://github.com/tensorflow/models/issues/2676).
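As a quick sanity check (paths assumed), it may help to verify that the directory really contains a TF checkpoint (a `checkpoint` file plus `*.index`/`*.data` shards), since `load_tf_weights_in_t5` ends up calling `tf.train.list_variables` on that path:
```python
import tensorflow as tf

# Prints the checkpoint prefix (e.g. ".../model.ckpt-XXXXXX") if the directory
# contains a `checkpoint` file, or None otherwise.
print(tf.train.latest_checkpoint("./models/unifiedqa-small/"))
```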
I have also opened an [issue](https://github.com/allenai/unifiedqa/issues/6) on the UnifiedQA repo. | 10-22-2020 10:29:59 | 10-22-2020 10:29:59 | As far as I can see, the model you are trying to load doesn't have a compliant format. We cannot help more without the full error stack.<|||||>Edit: added the full error stack.
Also, this is fairly easy to reproduce. |
transformers | 7,971 | closed | FillMaskPipeline: support passing top_k on __call__ | Also change name from topk to top_k for more consistency with the TextGenerationPipeline | 10-22-2020 10:21:13 | 10-22-2020 10:21:13 | Ok so should be ready to merge after a quick review @LysandreJik @sgugger! |
transformers | 7,970 | closed | [tests|tokenizers] Refactoring pipelines test backbone - Small tokenizers improvements - General tests speedups | # What does this PR do?
This PR refactors the pipeline tests to split them into smaller parts that are easier to iterate on.
There is now:
- one common backbone for testing pipelines in `test_pipeline_common.py` with two Mixins that can be used depending on the test: `CustomInputPipelineCommonMixin` and `MonoInputPipelineCommonMixin`. The latter provides a standard `_test_pipeline(nlp: Pipeline)` method, while the former requires writing a custom test pipeline method.
- one test file per specific pipeline, inheriting from the above backbone (a minimal usage sketch follows below).
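As an illustration, a per-pipeline test file then boils down to something like this (class and attribute names are illustrative, not the exact code of this PR):
```python
import unittest

from .test_pipeline_common import MonoInputPipelineCommonMixin


class SentimentAnalysisPipelineTests(MonoInputPipelineCommonMixin, unittest.TestCase):
    # The mixin provides the shared _test_pipeline(nlp) logic; the subclass only
    # declares which task it exercises.
    pipeline_task = "sentiment-analysis"
```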
Small fixes:
- the special token ids can now be set in the tokenizers
- added `convert_tokens_to_string(List[str]) -> str` to the Fast Tokenizers
- added a `tokenizer.vocab` property in Fast Tokenizers (alias to `tokenizer.get_vocab()`)
- `tokenizer.decode()` now also accepts PyTorch, TensorFlow and NumPy tensors/arrays as input
- gathered a few docstrings in the parent class for tokenizers
- also fixes the Dialog Pipeline (#5516)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 10-22-2020 10:05:22 | 10-22-2020 10:05:22 | Tagging a few people working on the pipelines and @sshleifer because I've added additional imports of TF Bart in `modeling_tf_auto` to make the pipelines happy.<|||||>Ok. great, thanks @sgugger and @sshleifer.
I took the occasion to speed up the CI test suite (cc @stas00) by:
- spinning out the pipeline test in a separate job
- reducing the dual framework (tf+pt) tests overhead by focusing them on the PT+TF cross interactions (adding new tests at the same time) and removing the double testing with the tf and pytorch standalone tests.
Ready to merge imo.<|||||>Whoah! this is amazing - the slowest job is now the fastest! Thank you!!!
I think there is only one potential issue with it: the codecov report, which was half-baked until now, is now completely useless since it no longer covers all tests, so we might as well remove it completely.<|||||>@stas00 Did just that https://github.com/huggingface/transformers/commit/829b9f8cc321aa28396e6203e0f21eed26b132f7
Removed codecov from the repo as well.<|||||>It's still there ;)
```
.circleci/config.yml: - run: pip install codecov pytest-cov
.circleci/config.yml: - run: RUN_PT_TF_CROSS_TESTS=1 python -m pytest -n 8 --dist=loadfile -rA -s ./tests/ -m is_pt_tf_cross_test --cov --durations=0 | tee output.txt
.circleci/config.yml: - run: codecov
``` |
transformers | 7,969 | closed | Add model_cards | Add model cards for new German language models. Paper [here](https://arxiv.org/abs/2010.10906) | 10-22-2020 09:59:58 | 10-22-2020 09:59:58 | I think we should also add some meta-information (see [here](https://github.com/huggingface/model_card)) to the models:
```
---
language: de
license: mit
datasets:
- wikipedia
---
```
And maybe we need to add `masked-lm` to the `tags` array, so that we can use the inference widget on the model page to do some nice masking experiments :)<|||||>Awesome collaboration btw :heart: :hugs: <|||||>> And maybe we need to add `masked-lm` to the `tags` array, so that we can use the inference widget on the model page to do some nice masking experiments :)
Shouldn't need to (in theory)<|||||> ```json
"architectures": [
"BertForMaskedLM"
],
```
is currently missing in our BERT configs -> @brandenchan would it be possible that you add it :hugs: <|||||>> ```json
> "architectures": [
> "BertForMaskedLM"
> ],
> ```
>
> is currently missing in our BERT configs -> @brandenchan would it be possible that you add it 🤗
@stefan-it Done! Out of interest, what's the difference between BertForMaskedLM and BertForPretraining?<|||||>If I remember correctly, BertForPretraining loads an LM head and an NSP head (both heads are trained during pretraining). Am I correct @LysandreJik? |
transformers | 7,968 | closed | Herbert tokenizer auto load | Adding HerbertTokenizer imports so that the proper tokenizer is auto-loaded.
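After this change, loading through the auto class should pick the HerBERT tokenizer automatically; a small usage sketch (the model id below is an assumption, any HerBERT checkpoint on the hub should do):
```python
from transformers import AutoTokenizer

# AutoTokenizer resolves the proper HerbertTokenizer class from the checkpoint's config.
tokenizer = AutoTokenizer.from_pretrained("allegro/herbert-klej-cased-tokenizer-v1")
print(tokenizer.tokenize("Natalia lubi programować w Pythonie."))
```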
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik.
@julien-c | 10-22-2020 09:33:41 | 10-22-2020 09:33:41 | |
transformers | 7,967 | closed | Should update version requirement for scipy in 'examples\\distillation\\requirements.txt'? | There are inconsistent version requirements for scipy in 'examples\\distillation\\requirements.txt' and 'examples\\movement-pruning\\requirements.txt'. Fixed version **1.3.1** in 'examples\\distillation\\requirements.txt' is not in the version range in **'>=1.4.1'** in 'examples\\movement-pruning\\requirements.txt'.

**Solution**
I am wondering if it is necessary to update the version requirement in 'examples\\distillation\\requirements.txt' to be consistent. A pinned version can often cause conflicts. | 10-22-2020 08:30:18 | 10-22-2020 08:30:18 | @VictorSanh what do you think? |
transformers | 7,966 | closed | T5 with allowing model changes | Hi
I would like to be able to change the model of T5, in addition to training on multiple tasks. I think the script currently works for summarization only. Do you know of other scripts that allow training on multiple tasks?
One more question: the T5 TensorFlow repo has a small example of using your repo. Does it work at large scale? Is training on TPU working? Could it allow model changes, so one can change the model architecture?
thanks a lot | 10-22-2020 06:28:24 | 10-22-2020 06:28:24 | Hey @rabeehk - for more specific cases we recommend that you fork master and tweak the model however you would like to so that it fits your purpose.
I didn't understand 100% what kind of script you are looking for, but here you can browse some of the T5 scripts, we have collected: https://github.com/huggingface/transformers/tree/master/notebooks#-transformers-notebooks<|||||>Hi
thanks for getting back to me. I am looking for a script showing how to train
T5 on multiple tasks. This is when they create a T5 registry mixture
dataset in their original code and train one model on a mixture of several
datasets.
Do you know if the huggingface version works fine for handling multiple
datasets? And do you know how the performance differs from the JAX
implementation? Is it more or less the same? Can one train the base T5
with a mixture of datasets with huggingface code?
thank you very much.
Best
Rabeeh
On Fri, Oct 23, 2020, 8:20 AM Patrick von Platen <[email protected]>
wrote:
> Hey @rabeehk <https://github.com/rabeehk> - for more specific cases we
> recommend that you fork master and tweak the model however you would like
> to so that it fits your purpose.
> I didn't understand 100% what kind of script you are looking for, but here
> you can browse some of the T5 scripts, we have collected:
> https://github.com/huggingface/transformers/tree/master/notebooks#-transformers-notebooks
>
> —
> You are receiving this because you were mentioned.
> Reply to this email directly, view it on GitHub
> <https://github.com/huggingface/transformers/issues/7966#issuecomment-714945934>,
> or unsubscribe
> <https://github.com/notifications/unsubscribe-auth/ABP4ZCEVU5IEVMEYHK4VLSTSMEOELANCNFSM4S2XT3OA>
> .
>
|
transformers | 7,965 | closed | [s2s trainer] tests to use distributed on multi-gpu machine | This PR:
* [x] abstracts the async forking io into local `utils.py`
* [x] deploys distributed training with special async io forking for `examples/seq2seq/test_finetune_trainer.py`
So now this works (2 gpus):
```
CUDA_VISIBLE_DEVICES="0,1" RUN_SLOW=1 pytest -sv examples/seq2seq/test_finetune_trainer.py
```
and this still works (1 gpu)
```
CUDA_VISIBLE_DEVICES="0" RUN_SLOW=1 pytest -sv examples/seq2seq/test_finetune_trainer.py
```
Fixes: #7833
Fixes: #7982
@sshleifer | 10-22-2020 05:47:49 | 10-22-2020 05:47:49 | @sshleifer, the slow test isn't working for me prior to this PR with 0 or 1 gpu:
```
CUDA_VISIBLE_DEVICES="" RUN_SLOW=1 pytest -sv examples/seq2seq/test_finetune_trainer.py::TestFinetuneTrainer::test_finetune_trainer_slow
[...]
self = <seq2seq.test_finetune_trainer.TestFinetuneTrainer testMethod=test_finetune_trainer_slow>
@slow
def test_finetune_trainer_slow(self):
# There is a missing call to __init__process_group somewhere
> output_dir = self.run_trainer(eval_steps=2, max_len="128", model_name=MARIAN_MODEL, num_train_epochs=3)
examples/seq2seq/test_finetune_trainer.py:35:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
examples/seq2seq/test_finetune_trainer.py:113: in run_trainer
main()
examples/seq2seq/finetune_trainer.py:199: in main
model = AutoModelForSeq2SeqLM.from_pretrained(
src/transformers/modeling_auto.py:1118: in from_pretrained
return MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING[type(config)].from_pretrained(
src/transformers/modeling_utils.py:947: in from_pretrained
model = cls(config, *model_args, **model_kwargs)
src/transformers/modeling_bart.py:964: in __init__
base_model = BartModel(config)
src/transformers/modeling_bart.py:843: in __init__
self.encoder = BartEncoder(config, self.shared)
src/transformers/modeling_bart.py:315: in __init__
self.embed_positions = SinusoidalPositionalEmbedding(
src/transformers/modeling_bart.py:1331: in __init__
self.weight = self._init_weight(self.weight)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
out = Parameter containing:
tensor([[ 2.2244e+00, -1.2380e+00, -3.5307e-01, ..., -1.0924e+00,
-1.3130e+00, 1.7737... [-3.8906e-01, 9.2203e-01, 1.7887e-01, ..., -1.7493e-01,
-1.6993e+00, 2.0896e-01]], requires_grad=True)
@staticmethod
def _init_weight(out: nn.Parameter):
"""Identical to the XLM create_sinusoidal_embeddings except features are not interleaved.
The cos features are in the 2nd half of the vector. [dim // 2:]
"""
n_pos, dim = out.shape
position_enc = np.array(
[[pos / np.power(10000, 2 * (j // 2) / dim) for j in range(dim)] for pos in range(n_pos)]
)
> out[:, 0 : dim // 2] = torch.FloatTensor(np.sin(position_enc[:, 0::2])) # This line breaks for odd n_pos
E RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation
```
**edit**: After messing around with my conda env due to ever-breaking tf, this went away, but a new thing came instead: https://github.com/huggingface/transformers/issues/7982
<|||||>interesting, I can't reproduce that. What's your `transformers-cli env`? Does it fail after the change?<|||||>No, it fails on master.
```
- `transformers` version: 3.4.0
- Platform: Linux-4.15.0-118-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.0.dev20201020 (True)
- Tensorflow version (GPU?): 2.3.1 (True)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```<|||||>I'd open an issue "sinusoidal positional embedding broken on torch 1.8".
Reasoning: [these](https://github.com/huggingface/transformers/blob/master/tests/test_modeling_bart.py#L606) fast tests pass CI and I can't replicate it on torch 1.5.
Is this ready to merge otherwise?<|||||>Need to sort out this first: https://github.com/huggingface/transformers/issues/7982
It's mostly ready otherwise, but the slow test will fail as it has nothing to do with this PR.
**edit** resolved in this PR.<|||||>Out of curiosity, how did you resolve the bleu issue?<|||||>Great work, btw! This is awesome. Now we can tell people to run these tests before they break things :)
Apparently there is multi-gpu ci running src/ tests at some frequency FYI, I think through gh actions.<|||||>I replied in the other issue: I used more iterations - 6 was enough for 1 gpu, 10 for 2, so I went with 10.
I think if someone tries it on more than 2 gpus it might need even more iterations - could probably codify this with a factor of n_gpus.
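(A rough sketch of what that could look like, with the 6/10 numbers taken from the runs above; the exact scaling factor is an assumption:)
```python
import torch

# Scale the number of steps with the number of visible GPUs so the distributed
# run has enough iterations for BLEU to move.
n_gpus = max(1, torch.cuda.device_count())
eval_steps = 6 + 4 * (n_gpus - 1)  # 6 for 1 GPU, 10 for 2 GPUs, more beyond that
```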
<|||||>Documenting it now https://github.com/huggingface/transformers/pull/7993
If you think anything needs to be added please let me know.
It will get better over time.<|||||>> Apparently there is multi-gpu ci running src/ tests at some frequency FYI, I think through gh actions.
Once a day yes:
https://github.com/huggingface/transformers/blob/master/.github/workflows/self-scheduled.yml#L74<|||||>Later I want to add the non-interactive IO pipe options - in case it hangs for someone - by default it could be non-interactive - always works, and only make it interactive for debug purposes. |
transformers | 7,964 | closed | adding beginner-friendly notebook on text classification with DistilBERT/TF | # What does this PR do?
Looking at the current community notebooks, it seems that few are targeted at absolute beginners and even fewer are written with TensorFlow. This notebook describes absolutely everything a beginner would need to know, including how to save/load their model and use it for new predictions (this is often omitted in tutorials).
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 10-22-2020 05:26:56 | 10-22-2020 05:26:56 | |
transformers | 7,963 | closed | Load tuned model without downloading from huggingface | I have the following model which I have tuned for a classification task.
```python
BASE_MODEL = "distilbert-base-multilingual-cased"
class Model(nn.Module):
def __init__(self, nc, p=0.1):
super().__init__()
self.base = AutoModel.from_pretrained(BASE_MODEL)
in_features = 768 # self.base.pooler.dense.out_features
self.dropout = nn.Dropout(p=p)
self.fc = nn.Linear(in_features, nc, bias=False)
def forward(self, x):
out = self.base(**x)[0]
out = out[:, 0, :]
out = self.dropout(out)
return self.fc(out)
```
However, the way that I load the model currently is by doing:
```python
model = Model(nc)
model.load_state_dict(torch.load(TUNED_MODEL_PATH))
```
The first line above causes `distilbert` to be downloaded again and then my weights overwrite the model. I was hoping that there is a way of just getting the base architecture without downloading any weights.
I tried doing `self.base = DistilBertModel(DistilBertConfig())`. However, when loading the tuned model, it gives the error `size mismatch for base.embeddings.word_embeddings.weight: copying a param with shape torch.Size([119547, 768]) from checkpoint, the shape in current model is torch.Size([30522, 768]).`. I believe this is due to the fact that I am using **a multilingual** model.
Side questions:
- Where does the AutoTokenizer/ AutoModel download the relevant files to?
- Also apologies for posting here instead of the forum. For some reason it won't let me login with the huggingface credentials.
Version: transformers==3.1.0
| 10-22-2020 02:56:27 | 10-22-2020 02:56:27 | Instead of doing `self.base = DistilBertModel(DistilBertConfig())` you can do `self.base = DistilBertModel(DistilBertConfig(vocab_size=119547))`.
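For completeness, a sketch of an alternative that avoids hard-coding the vocab size, assuming you keep a local copy of the base model's `config.json` (the path below is hypothetical):
```python
from transformers import AutoConfig, AutoModel

# Build the architecture from a config only; no pretrained weights are downloaded here.
config = AutoConfig.from_pretrained("./distilbert-multilingual-config/")  # hypothetical local dir with config.json
base = AutoModel.from_config(config)
```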
|
transformers | 7,962 | closed | xla_spawn and run_language_modeling slow on TPUs | ## Environment info
- `transformers` version: 3.4.0
- Platform: Linux-4.9.0-13-amd64-x86_64-with-debian-9.13
- Python version: 3.6.10
- PyTorch version (GPU?): 1.8.0a0+e5ed037 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No (but using TPU)
- Using distributed or parallel set-up in script?: using xla_spawn.py
### Who can help
@LysandreJik @sgugger
or the writer of examples/language-modeling/run_language_modeling.py or a TPU master
## Information
Model I am using (Bert, XLNet ...): GPT2
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: examples/language-modeling/run_language_modeling.py but with HF datasets
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: Text generation
## To reproduce
Steps to reproduce the behavior:
1. Modify examples/language-modeling/run_language_modeling.py to below
```
import logging
import math
import os
import glob
import datasets
from dataclasses import dataclass, field
from typing import Optional
from datasets import list_datasets, load_dataset
from transformers import (
CONFIG_MAPPING,
MODEL_WITH_LM_HEAD_MAPPING,
AutoConfig,
AutoModelWithLMHead,
AutoTokenizer,
DataCollatorForLanguageModeling,
DataCollatorForPermutationLanguageModeling,
HfArgumentParser,
LineByLineTextDataset,
PreTrainedTokenizer,
TextDataset,
Trainer,
TrainingArguments,
set_seed,
)
logger = logging.getLogger(__name__)
MODEL_CONFIG_CLASSES = list(MODEL_WITH_LM_HEAD_MAPPING.keys())
MODEL_TYPES = tuple(conf.model_type for conf in MODEL_CONFIG_CLASSES)
@dataclass
class ModelArguments:
"""
Arguments pertaining to which model/config/tokenizer we are going to fine-tune, or train from scratch.
"""
model_name_or_path: Optional[str] = field(
default=None,
metadata={
"help": "The model checkpoint for weights initialization. Leave None if you want to train a model from scratch."
},
)
model_type: Optional[str] = field(
default=None,
metadata={"help": "If training from scratch, pass a model type from the list: " + ", ".join(MODEL_TYPES)},
)
config_name: Optional[str] = field(
default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"}
)
tokenizer_name: Optional[str] = field(
default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"}
)
cache_dir: Optional[str] = field(
default=None, metadata={"help": "Where do you want to store the pretrained models downloaded from s3"}
)
@dataclass
class DataTrainingArguments:
"""
Arguments pertaining to what data we are going to input our model for training and eval.
"""
train_data_file: Optional[str] = field(
default=None, metadata={"help": "The input training data file (a text file)."}
)
eval_data_file: Optional[str] = field(
default=None,
metadata={"help": "An optional input evaluation data file to evaluate the perplexity on (a text file)."},
)
line_by_line: bool = field(
default=False,
metadata={"help": "Whether distinct lines of text in the dataset are to be handled as distinct sequences."},
)
mlm: bool = field(
default=False, metadata={"help": "Train with masked-language modeling loss instead of language modeling."}
)
mlm_probability: float = field(
default=0.15, metadata={"help": "Ratio of tokens to mask for masked language modeling loss"}
)
plm_probability: float = field(
default=1 / 6,
metadata={
"help": "Ratio of length of a span of masked tokens to surrounding context length for permutation language modeling."
},
)
max_span_length: int = field(
default=5, metadata={"help": "Maximum length of a span of masked tokens for permutation language modeling."}
)
block_size: int = field(
default=-1,
metadata={
"help": "Optional input sequence length after tokenization."
"The training dataset will be truncated in block of this size for training."
"Default to the model max input length for single sentence inputs (take into account special tokens)."
},
)
overwrite_cache: bool = field(
default=False, metadata={"help": "Overwrite the cached training and evaluation sets"}
)
arrow: bool = field(
default=True,
metadata={
"help": "Use Arrow-based HF NLP for optimization."
},
)
def get_dataset(
args: DataTrainingArguments,
tokenizer: PreTrainedTokenizer,
evaluate: bool = False,
cache_dir: Optional[str] = "./cache",
):
tokenizer.pad_token = "<|endoftext|>"
tokenizer._pad_token = "<|endoftext|>"
#tokenizer.pad_token_id = 50256
file_path = args.eval_data_file if evaluate else args.train_data_file
if True:
dataset = datasets.load_from_disk(file_path)
dataset.set_format(type='torch', columns=['input_ids'])
return dataset
if False:
dataset = load_dataset("text", data_files=[file_path], split='train')
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True,
truncation=True, max_length=args.block_size), batched=True)
dataset.set_format(type='torch', columns=['input_ids'])
dataset.save_to_disk(file_path+'.arrow')
return dataset
if args.line_by_line:
return LineByLineTextDataset(tokenizer=tokenizer, file_path=file_path, block_size=args.block_size)
else:
return TextDataset(
tokenizer=tokenizer,
file_path=file_path,
block_size=args.block_size,
overwrite_cache=args.overwrite_cache,
cache_dir=cache_dir,
)
"""
dataset = load_dataset("text", data_files=file_path, split="train")
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True,
truncation=True, max_length=args.block_size), batched=True)
dataset.set_format(type='torch', columns=['input_ids'])
return dataset
"""
def main():
    # See all possible arguments in src/transformers/training_args.py
    # or by passing the --help flag to this script.
    # We now keep distinct sets of args, for a cleaner separation of concerns.
    parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))
    model_args, data_args, training_args = parser.parse_args_into_dataclasses()

    if data_args.eval_data_file is None and training_args.do_eval:
        raise ValueError(
            "Cannot do evaluation without an evaluation data file. Either supply a file to --eval_data_file "
            "or remove the --do_eval argument."
        )

    if (
        os.path.exists(training_args.output_dir)
        and os.listdir(training_args.output_dir)
        and training_args.do_train
        and not training_args.overwrite_output_dir
    ):
        raise ValueError(
            f"Output directory ({training_args.output_dir}) already exists and is not empty. Use --overwrite_output_dir to overcome."
        )

    # Setup logging
    logging.basicConfig(
        format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
        datefmt="%m/%d/%Y %H:%M:%S",
        level=logging.INFO if training_args.local_rank in [-1, 0] else logging.WARN,
    )
    logger.warning(
        "Process rank: %s, device: %s, n_gpu: %s, distributed training: %s, 16-bits training: %s",
        training_args.local_rank,
        training_args.device,
        training_args.n_gpu,
        bool(training_args.local_rank != -1),
        training_args.fp16,
    )
    logger.info("Training/evaluation parameters %s", training_args)

    # Set seed
    set_seed(training_args.seed)

    # Load pretrained model and tokenizer
    #
    # Distributed training:
    # The .from_pretrained methods guarantee that only one local process can concurrently
    # download model & vocab.
    if model_args.config_name:
        config = AutoConfig.from_pretrained(model_args.config_name, cache_dir=model_args.cache_dir)
    elif model_args.model_name_or_path:
        config = AutoConfig.from_pretrained(model_args.model_name_or_path, cache_dir=model_args.cache_dir)
    else:
        config = CONFIG_MAPPING[model_args.model_type]()
        logger.warning("You are instantiating a new config instance from scratch.")

    if model_args.tokenizer_name:
        tokenizer = AutoTokenizer.from_pretrained(model_args.tokenizer_name, cache_dir=model_args.cache_dir)
    elif model_args.model_name_or_path:
        tokenizer = AutoTokenizer.from_pretrained(model_args.model_name_or_path, cache_dir=model_args.cache_dir)
    else:
        raise ValueError(
            "You are instantiating a new tokenizer from scratch. This is not supported, but you can do it from another script, save it,"
            "and load it from here, using --tokenizer_name"
        )
    tokenizer.pad_token = "<|endoftext|>"
    tokenizer._pad_token = "<|endoftext|>"

    if model_args.model_name_or_path:
        model = AutoModelWithLMHead.from_pretrained(
            model_args.model_name_or_path,
            from_tf=bool(".ckpt" in model_args.model_name_or_path),
            config=config,
            cache_dir=model_args.cache_dir,
        )
    else:
        logger.info("Training new model from scratch")
        model = AutoModelWithLMHead.from_config(config)

    model.resize_token_embeddings(len(tokenizer))

    if config.model_type in ["bert", "roberta", "distilbert", "camembert"] and not data_args.mlm:
        raise ValueError(
            "BERT and RoBERTa-like models do not have LM heads but masked LM heads. They must be run using the"
            "--mlm flag (masked language modeling)."
        )

    if data_args.block_size <= 0:
        data_args.block_size = tokenizer.max_len
        # Our input block size will be the max possible for the model
    else:
        data_args.block_size = min(data_args.block_size, tokenizer.max_len)

    # Get datasets
    train_dataset = (
        get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_train else None
    )
    eval_dataset = (
        get_dataset(data_args, tokenizer=tokenizer, evaluate=True, cache_dir=model_args.cache_dir)
        if training_args.do_eval
        else None
    )
    if config.model_type == "xlnet":
        data_collator = DataCollatorForPermutationLanguageModeling(
            tokenizer=tokenizer,
            plm_probability=data_args.plm_probability,
            max_span_length=data_args.max_span_length,
        )
    else:
        data_collator = DataCollatorForLanguageModeling(
            tokenizer=tokenizer, mlm=data_args.mlm, mlm_probability=data_args.mlm_probability
        )

    # Initialize our Trainer
    trainer = Trainer(
        model=model,
        args=training_args,
        data_collator=data_collator,
        train_dataset=train_dataset,
        eval_dataset=eval_dataset,
        prediction_loss_only=True,
    )

    # Training
    if training_args.do_train:
        model_path = (
            model_args.model_name_or_path
            if model_args.model_name_or_path is not None and os.path.isdir(model_args.model_name_or_path)
            else None
        )
        trainer.train(model_path=model_path)
        trainer.save_model()
        # For convenience, we also re-save the tokenizer to the same directory,
        # so that you can share your model easily on huggingface.co/models =)
        if trainer.is_world_master():
            tokenizer.save_pretrained(training_args.output_dir)

    # Evaluation
    results = {}
    if training_args.do_eval:
        logger.info("*** Evaluate ***")
        eval_output = trainer.evaluate()
        perplexity = math.exp(eval_output["eval_loss"])
        result = {"perplexity": perplexity}
        output_eval_file = os.path.join(training_args.output_dir, "eval_results_lm.txt")
        if trainer.is_world_master():
            with open(output_eval_file, "w") as writer:
                logger.info("***** Eval results *****")
                for key in sorted(result.keys()):
                    logger.info(" %s = %s", key, str(result[key]))
                    writer.write("%s = %s\n" % (key, str(result[key])))
        results.update(result)

    return results


def _mp_fn(index):
    # For xla_spawn (TPUs)
    main()


if __name__ == "__main__":
    main()
```
2. set up the torch-xla-nightly Conda environment & set the required env variables
3. run script (replace dataset, since I cannot upload 48 GB worth of arrow files)
```
XLA_USE_BF16=1 python3 examples/xla_spawn.py --num_cores 8 examples/language-modeling/train.py --output_dir=kogpt1 --model_type=gpt2 --do_train --train_data_file=/home/ksjcom0705_gmail_com/NEWS_ARROW --overwrite_output_dir --per_device_train_batch_size=6 --save_steps 10000 --num_train_epochs=1 --block_size 2048 --eval_steps 10000 --logging_steps=10000 --tokenizer_name /home/ksjcom0705_gmail_com/kotok tpu_num_cores=8
```
The progress bar will show something like 250 s/it, while on GPUs (2 V100s) it is about 1.2 it/s.
## Expected behavior
Training should be at least as fast as on GPUs, since my [home-brew code](https://github.com/ksjae/KoGPT2-train) runs at a similar speed.
Also, MXU utilization is stuck at near-zero.
<img width="2177" alt="image" src="https://user-images.githubusercontent.com/17930170/96816530-663ec780-1458-11eb-9595-be91de708d03.png">
| 10-22-2020 02:19:39 | 10-22-2020 02:19:39 | The only difference I'm seeing between our script and yours is the data loading, here using `datasets`. Are we aware of TPU slowdown when using `datasets` @lhoestq, @thomwolf?<|||||>I haven't tested `datasets` with TPU yet. I know @sgugger tried once, did you notice slowdowns ?<|||||>Not for the new run_glue script introduced in #7917<|||||>Is there any metrics/debug information I can provide?<|||||>I have similar issues:
* mxu utilization mostly 0%
* around 150 s/it
setup:
* n1-highmem-16
* TPU v2-8
I tried it both with a map-style dataset using `datasets`' wiki dump and an iterable-style dataset with the `Trainer` adapted. Same result. The slowdown is not on behalf of the data loading but of the forward-pass, loss computation and backpropagation on the tpu.<|||||>Are you sure all your batches of inputs have the exact same shape? I tested thoroughly the script on TPUs with the datasets library and:
- when inputs are not all of the same size, the training is excruciatingly slow (which is expected because XLA recompiles the code at each training step in this case)
- when inputs are all of the same size, it runs smoothly and fast.
This is independent of using the datasets library or not, and this is expected behavior on TPUs, as XLA does not handle dynamic shapes well. <|||||>How can I check whether the inputs are all the same size? Or how can I pad the inputs so they have a fixed size?<|||||>Your dataset is hidden inside the `load_dataset` function, so I can't advise you on how to add padding. There are examples of this in the new `run_mlm.py` script.
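In general, though, padding everything to the same length at tokenization time is what gives static shapes; a rough sketch (the `"text"` column name and `block_size` are assumptions about your setup, and the tokenizer needs a pad token, which your script already sets):
```python
# Sketch only: fixed-size outputs avoid XLA recompilation on TPU.
dataset = dataset.map(
    lambda ex: tokenizer(
        ex["text"],
        truncation=True,
        padding="max_length",   # pad every example to the same length
        max_length=block_size,
    ),
    batched=True,
)
dataset.set_format(type="torch", columns=["input_ids", "attention_mask"])
```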
As for checking your inputs are all of the same size, it's just a pass through your dataset:
```
shapes = []
for x in dataset:
    shapes.append(x["input_ids"].shape)
print(set(shapes))
``` <|||||>Keeping shapes constant fixed the speed issue for me. After a few iterations (~5), the graph stabilized and the iteration speed went down from 150 s/it to a few seconds per batch.
Further readings:
* https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md#known-performance-caveats
* https://github.com/pytorch/xla/issues/2383
* https://github.com/pytorch/xla/issues/2368<|||||>Fixed it, closing. |
transformers | 7,961 | closed | A question about shift_tokens_right in BART model | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.10
- Platform: Ubuntu 16.04
- Python version: 3.7.3
- PyTorch version (GPU?): 1.6 GPU-version
### Who can help
@TevenLeScao @sshleifer
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
## Information
Generally speaking, the decoder input of a seq2seq model is `<sos> tok1 tok2 … tokn` and the target is `tok1 tok2 … tokn <eos>` (shifted right). But in BART, the `shift_tokens_right` function produces the following result:
decoder input: `<eos><sos> tok1 tok2 … tokn`
target:`<sos> tok1 tok2 … tokn <eos>`
Is this a bug, or is it the intended behavior?
## To reproduce
```python
from transformers import BartTokenizer, BartForConditionalGeneration

# model_path points to a BART checkpoint (user-defined in this snippet)
tokenizer = BartTokenizer.from_pretrained(model_path)
model = BartForConditionalGeneration.from_pretrained(model_path)
inputs = tokenizer.prepare_seq2seq_batch(
    src_texts=['good morning.', ],
    tgt_texts=['good bye.'],
    max_length=100, return_tensors='pt'
)

# This function is copied from modeling_bart.py
def shift_tokens_right(input_ids, pad_token_id):
    """Shift input ids one token to the right, and wrap the last non pad token (usually <eos>)."""
    prev_output_tokens = input_ids.clone()
    index_of_eos = (input_ids.ne(pad_token_id).sum(dim=1) - 1).unsqueeze(-1)
    prev_output_tokens[:, 0] = input_ids.gather(1, index_of_eos).squeeze()
    prev_output_tokens[:, 1:] = input_ids[:, :-1]
    return prev_output_tokens

tgt = inputs['labels'][0].tolist()
decoder_inputs = shift_tokens_right(inputs['labels'], tokenizer.pad_token_id)[0].tolist()
print("decoder inputs:", tokenizer.decode(decoder_inputs))
print("target:", tokenizer.decode(tgt))
```
The expected output is
```
decoder inputs: <s>good bye.
target: good bye.</s>
```
but it actually outputs
```
decoder inputs: </s><s>good bye.
target: <s>good bye.</s>
```
| 10-22-2020 02:13:12 | 10-22-2020 02:13:12 | It's correct, copied from fairseq (authors code).
you can think of `shift_tokens_right` as `shift_tokens_right_and_wrap_eos_to_position0`.
If you have empirical evidence that there is a change that improves fine-tuning, I'd be happy to incorporate it.
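For intuition, on a toy batch (made-up ids, `pad_token_id=1`) it behaves like this:
```python
import torch

# using the shift_tokens_right function quoted in the issue above
labels = torch.tensor([[0, 51, 52, 2, 1]])   # <s> tok tok </s> <pad>
print(shift_tokens_right(labels, pad_token_id=1))
# tensor([[ 2,  0, 51, 52,  2]])  i.e. </s> <s> tok tok </s>
```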
<|||||>After updating my transformers version to this one, I'm also confused by the decoder input.
With this new encoding, after finetuning, bart-large is outputting:
Example 1:
```
"labels": "<s> There are too many traitors among our compatriots,</s><pad><pad><pad>",
"decoder_input_ids": "</s><s> There are too many traitors among our compatriots,</s><pad><pad>",
"generated_ids": "</s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s></s>"
```
Example 2:
```
"labels": "<s> Freedom of speech is relative. We need take national conditions into account.</s>",
"decoder_input_ids": "</s><s> Freedom of speech is relative. We need take national conditions into account.",
"generated_ids": "</s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s></s>"
```
I'm trying to figure out why.
I'm using the code provided by the examples/seq2seq folder. T5 works fine, but bart does not.<|||||>@leoribeiro if you are reporting a bug, could you explain what you did and what you expected more clearly in a separate issue?
Otherwise, I don't understand why you say the encoding is new. `shift_tokens_right` hasn't changed.<|||||>@sshleifer thank you for your reply. What I mean by new encoding is adding `</s>` at the beginning of the decoder inputs. I think that in a previous transformer version (2.11.0), the code for BART did not use `</s>` at the beginning of the decoder inputs, correct? In the 2.11.0 version, my experiments with `facebook/bart-large` were working with the following code:
```
def _step(self, batch):
    pad_token_id = self.tokenizer.pad_token_id
    source_ids, source_mask, y = batch["source_ids"], batch["source_mask"], batch["target_ids"]
    y_ids = y[:, :-1].contiguous()
    lm_labels = y[:, 1:].clone()
    lm_labels[y[:, 1:] == pad_token_id] = -100
    outputs = self(source_ids, attention_mask=source_mask, decoder_input_ids=y_ids, lm_labels=lm_labels,)
    loss = outputs[0]
    return loss
```
But now, with the following code, my experiments with `facebook/bart-large` are not working:
```
def _step(self, batch: dict) -> Tuple:
    pad_token_id = self.tokenizer.pad_token_id
    src_ids, src_mask = batch["input_ids"], batch["attention_mask"]
    tgt_ids = batch["labels"]
    if isinstance(self.model, T5ForConditionalGeneration):
        decoder_input_ids = self.model._shift_right(tgt_ids)
    else:
        decoder_input_ids = shift_tokens_right(tgt_ids, pad_token_id)
    if not self.already_saved_batch:  # This would be slightly better if it only happened on rank zero
        batch["decoder_input_ids"] = decoder_input_ids
        self.save_readable_batch(batch)
    outputs = self(src_ids, attention_mask=src_mask, decoder_input_ids=decoder_input_ids, use_cache=False)
    lm_logits = outputs[0]
    if self.hparams.label_smoothing == 0:
        # Same behavior as modeling_bart.py, besides ignoring pad_token_id
        ce_loss_fct = torch.nn.CrossEntropyLoss(ignore_index=pad_token_id)
        assert lm_logits.shape[-1] == self.vocab_size
        loss = ce_loss_fct(lm_logits.view(-1, lm_logits.shape[-1]), tgt_ids.view(-1))
    else:
        lprobs = torch.nn.functional.log_softmax(lm_logits, dim=-1)
        loss, nll_loss = label_smoothed_nll_loss(
            lprobs, tgt_ids, self.hparams.label_smoothing, ignore_index=pad_token_id
        )
    return (loss,)
```
I'm trying to understand if those things are related. The weird thing is that the exact same code works for `facebook/bart-base`. Please, see #8005. <|||||>Moved to #8005, problem seems to be config related. |
transformers | 7,960 | closed | RoBERTa convert-script modified to support mapping of bpe tokens | # Reordering of embeddings possible when converting roberta with dict file
When RoBERTa is trained using fairseq, the preprocessing pipeline BPE-encodes tokens and stores a mapping of token ids to BPE-token ids in a dict.txt file. This is mostly just a re-ordering of the tokens (potentially with a few missing ones if they were not found in the training data). The earlier version of the conversion script maps the fairseq model directly, creating a need for downstream processing to recover the original tokens; this PR removes that need by allowing a reshuffling of the embedding tensors using the dict.txt file. This enables e.g. the "fill-mask" pipeline out of the box on a converted model.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@julien-c (since prominent in blame)
| 10-22-2020 01:03:05 | 10-22-2020 01:03:05 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 7,959 | closed | T5-large on multiple gpus. | Hi! I'm trying to fine-tune a T5-large on multiple gpus, so I basically use `torch.nn.DataParallel`, but when I get the output of the model which contains the loss and I do `loss.mean().backward()` I run into `cuda out of memory` as I think the loss is just on the first gpu and that's taken. What should I do? | 10-22-2020 00:50:10 | 10-22-2020 00:50:10 | For OOM errors, I guess you need to reduce batch_size or parallelize over more GPUs.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,958 | closed | added qg evaluation notebook | @patrickvonplaten, @TevenLeScao
I added a notebook to evaluate question generation models. Could you please take a look ? | 10-22-2020 00:38:10 | 10-22-2020 00:38:10 | Thanks Patrick :)
Thank you for sharing this @zolekode !<|||||>@patrickvonplaten thanks for the correction |
transformers | 7,957 | closed | dropping "," in date because of Tokenization | I had my model where "May 7, 2020" is split like this
`['▁may', b'\xe2\x96\x818', ',', '▁2017']`
I saw that this is a problem here. Is there any reason why this code was thrown in for byte-string insertion when we have a comma in a token of len > 1?
```
for piece in pieces:
    if len(piece) > 1 and piece[-1] == str(",") and piece[-2].isdigit():
        cur_pieces = self.sp_model.EncodeAsPieces(piece[:-1].replace(SPIECE_UNDERLINE, ""))
        if piece[0] != SPIECE_UNDERLINE and cur_pieces[0][0] == SPIECE_UNDERLINE:
            if len(cur_pieces[0]) == 1:
                cur_pieces = cur_pieces[1:]
            else:
                cur_pieces[0] = cur_pieces[0][1:]
        cur_pieces.append(piece[-1])
        new_pieces.extend(cur_pieces)
    else:
        new_pieces.append(piece)
return new_pieces
``` | 10-21-2020 23:14:31 | 10-21-2020 23:14:31 | Hi, I'm sorry but I really don't understand what the issue is. Could you clarify?<|||||>I think this comes from the original Albert tokenization, maybe you want to ask on google repository about this?
https://github.com/google-research/albert/blob/master/tokenization.py#L67<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,956 | closed | T5 on multiple datasets | Hi Everyone,
Is there an example showing how to run T5 on multiple datasets? Greatly appreciated.
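To make it concrete, what I have in mind is roughly a mixture like this toy sketch, with each task encoded as a text prefix (the examples below are made up):
```python
from datasets import Dataset, concatenate_datasets

# Toy two-task mixture in T5's text-to-text format.
task_a = Dataset.from_dict({
    "source": ["summarize: some long article ..."],
    "target": ["short summary"],
})
task_b = Dataset.from_dict({
    "source": ["translate English to German: How are you?"],
    "target": ["Wie geht es dir?"],
})
mixed = concatenate_datasets([task_a, task_b]).shuffle(seed=42)
```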
thanks.
Best
Rabeeh | 10-21-2020 22:22:53 | 10-21-2020 22:22:53 | pinging the T5 master @patrickvonplaten <|||||>Hey @rabeehk - not sure about that. You can check the T5 notebooks we provide or `https://discuss.huggingface.co/`.<|||||>Hi
I could not find the notebooks at the link you mentioned.
I am looking for a way to train multiple tasks at once with T5, similar to this script:
https://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/models/hf_model.py
thanks for your help.
Best
Rabeeh
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,955 | closed | [pip/setup.py] target management and updates | This PR
* [x] does a major revamp to how targets are specified and merged into groups. it lists all the dependencies vertically and allows you to write comments next to them, if needed - or completely comment them out
* [x] adds `docs` to `dev`, since we need to have the tools to run `make docs`
* [x] adds `flax` to `dev`, since we need to have the libs to run flax tests - except when on windows - it skips it then
* [x] brings `all` up-to-date
Note: I removed the hardcoded `+ ["scikit-learn", "tensorflow", "torch", "sentencepiece!=0.1.92"]` and replaced it with the full targets for `tf`, `torch`, etc., which include other deps. I'm not sure why we wouldn't want all of them.
@LysandreJik, @sgugger, @thomwolf | 10-21-2020 21:58:57 | 10-21-2020 21:58:57 | Also what does `extras["all"]` stand for? Is it a vestige of something?
https://github.com/huggingface/transformers/blob/master/setup.py#L94
I'd put it last and really put everything into `all` unless I'm missing a special purpose here.<|||||>Does `transformers` really work `torch==1.0` or would we realistically need to set some higher 1.x minimum? I'm curious whether anybody tested this. But I guess there is no need to waste time on this - someone will flag this in time if it's a problem.
<|||||>I think I prefer the long version because I can understand it without thinking, this one... not so much.
I was also going to take a stab at the setup because we have a recurring complaint from Windows user they can't make a dev install (some of those dependencies like `faiss` should be dropped if on Windows).
Also `flax` is still experimental. Not sure it should be in dev just yet, especially since I'm doubtful about its Windows support. It shouldn't prevent people from developing and making PRs to the library.<|||||>Well, we could make it into a wrapper function so it'd be easier to read, but either way works.
Then at the very least can we add `docs` to `dev`?
As for the hardcoded targets in `dev` - let's have just one definition where we put the numerical requirements (==, !=, etc.).<|||||>Yes, adding `docs` to `dev` is definitely useful.<|||||>Would this be easier to read:
```
def combine_targets(names):
    return list(chain(*map(extras.get, names)))
extras["dev"] = combine_targets("testing quality docs ja sklearn flax tf torch sentencepiece".split())
# or:
extras["dev"] = combine_targets(["testing", "quality", "docs", "ja", "sklearn", "flax", "tf", "torch", "sentencepiece"])
```
or you'd rather keep:
```
extras["dev"] = extras["testing"] + extras["quality"] + extras["docs"] + extras["flax"] + extras["ja"] + \
extras["sklearn"] + extras["tf"] + extras["torch"] + extras["sentencepiece"]
```
<|||||>I agree with @sgugger that the `list(chain(*map(...` is :dizzy_face:.
The way it's currently setup is fine by me, but your proposed fix:
```py
extras["dev"] = combine_targets(["testing", "quality", "docs", "ja", "sklearn", "flax", "tf", "torch", "sentencepiece"])
```
is also fine by me.<|||||>@LysandreJik, please have another look - we discussed this with @sgugger on slack and expanded this further to be even more flexible - easy to read vertical listing plus built-in comments are now supported.<|||||>If it looks too busy we can merge the base groups into a dict, so it'll look less busy and will be more compact:
So instead of this:
```
extras["serving"] = to_list("""
fastapi
pydantic
starlette
uvicorn
""")
extras["sentencepiece"] = to_list("""
sentencepiece!=0.1.92
""")
extras["retrieval"] = to_list("""
datasets
faiss-cpu
""")
extras["testing-base"] = to_list("""
parameterized
psutil
pytest
pytest-xdist
timeout-decorator
""")
```
it'd be:
```
extras = dict(
serving=to_list("""
fastapi
pydantic
starlette # some explanation
uvicorn
"""),
sentencepiece=to_list("""
sentencepiece!=0.1.92
"""),
retrieval=to_list("""
datasets
faiss-cpu
"""),
testing-base=to_list("""
parameterized
psutil
pytest
pytest-xdist # some comment
timeout-decorator
"""),
)
```
Actually, if we decide to go the dict way we can do all the processing later, why repeat the same function all the time, so it'd just leave:
```
extras = dict(
serving="""
fastapi
pydantic
starlette # some explanation
uvicorn
""",
sentencepiece="""
sentencepiece!=0.1.92
""",
retrieval="""
datasets
faiss-cpu
""",
testing-base="""
parameterized
psutil
pytest
pytest-xdist # some comment
timeout-decorator
""",
)
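# (Illustrative sketch, not part of the original proposal: one way `process` could
#  turn each multi-line block into a clean list of requirement strings.)
def process(extras):
    processed = {}
    for name, body in extras.items():
        entries = []
        for line in body.splitlines():
            entry = line.split("#", 1)[0].strip()  # drop inline comments
            if entry:                              # skip blank / comment-only lines
                entries.append(entry)
        processed[name] = entries
    return processed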
extras = process(extras) # not written yet.
```<|||||>It's different, but it's consistent. You never need to read everything at once - you only would care about reading one entry - a group or a subgroup - this is not code but a table of definitions - like a spreadsheet. You can always squash the vertical entries into a horizontal line, by losing the readability and functionality offered by the spreadsheet-type of data.
I proposed here a much more compact way: https://github.com/huggingface/transformers/pull/7955#issuecomment-714845771
Also, the idea is to have just one base definition, with the specific version (if any) and, where needed, a comment explaining why, including for the non-optional requirements. Otherwise it's too easy to forget to update multiple definitions of the same package.
These are just different suggestions, please feel free to cherry pick some, all or none and close this PR as well. No hard feelings.<|||||>As I personally don't see any improvements regarding readability in the offered solutions, I would vote to keep the original approach. It's a personal preference choice so I'm willing to compromise if others disagree!
All the other changes in the PR look good to me.<|||||>Thank you for indicating that the proposed change is not fitting, @LysandreJik and @sgugger. |
transformers | 7,954 | closed | TF: Faster to way to set one column/all but one column of a tensor to -inf | in `_force_token_id_to_be_generated` we have much simpler torch code:
```python
scores[:, [x for x in range(scores.shape[1]) if x != token_id]] = -float("inf")
```
Is it possible to make the TF code simpler? TF doesn't support item assignment, but maybe converting to and from `numpy` could be faster. It would definitely be simpler.
```python
@staticmethod
def _force_token_id_to_be_generated(scores, token_id) -> None:
    """force one of token_ids to be generated by setting prob of all other tokens to 0 (logprob=-float("inf"))"""
    output_list = []
    # Is there a better way to do in TF?
    bs, vocab_size = scores.shape
    inf_tensor = tf.convert_to_tensor([-float("inf")] * bs, dtype=scores.dtype)
    for x in range(vocab_size):
        if x != token_id:
            output_list.append(inf_tensor)
        else:
            output_list.append(scores[:, x])
    scores = tf.stack(output_list, axis=1, name="scores")
    assert scores.shape == (bs, vocab_size)
    return scores
```
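One loop-free alternative would be a boolean mask plus `tf.where`, along these lines (a sketch, not benchmarked; assumes `scores` has a static `(batch, vocab_size)` shape):
```python
import tensorflow as tf

def force_token_id(scores: tf.Tensor, token_id: int) -> tf.Tensor:
    vocab_size = scores.shape[-1]
    keep = tf.one_hot(token_id, vocab_size) > 0                      # True only at token_id
    keep = tf.broadcast_to(keep, tf.shape(scores))                   # (batch, vocab_size)
    neg_inf = tf.fill(tf.shape(scores), tf.constant(-float("inf"), dtype=scores.dtype))
    return tf.where(keep, scores, neg_inf)
```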
| 10-21-2020 20:16:33 | 10-21-2020 20:16:33 | Solution: https://stackoverflow.com/questions/64575346/tensorflow-set-column-of-tensor-to-infinity |
transformers | 7,953 | closed | 'EncoderDecoderModel' object has no attribute '_init_weights' after `model.resize_token_embeddings(len(tokenizer))` | ## Details
Version transformers==3.4.0
torch==1.6.0
torchvision==0.7.0
>>> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
>>>tokenizer.add_special_tokens({"additional_special_tokens": ['extra1', 'extra2']})
1
>>> model = EncoderDecoderModel.from_encoder_decoder_pretrained('bert-base-uncased', 'bert-base-uncased')
>>> model.resize_token_embeddings(len(tokenizer))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/xxia/anaconda3/lib/python3.8/site-packages/transformers/modeling_utils.py", line 607, in resize_token_embeddings
model_embeds = base_model._resize_token_embeddings(new_num_tokens)
File "/Users/xxia/anaconda3/lib/python3.8/site-packages/transformers/modeling_utils.py", line 622, in _resize_token_embeddings
new_embeddings = self._get_resized_embeddings(old_embeddings, new_num_tokens)
File "/Users/xxia/anaconda3/lib/python3.8/site-packages/transformers/modeling_utils.py", line 659, in _get_resized_embeddings
self._init_weights(new_embeddings)
File "/Users/xxia/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 771, in __getattr__
raise ModuleAttributeError("'{}' object has no attribute '{}'".format(
torch.nn.modules.module.ModuleAttributeError: 'EncoderDecoderModel' object has no attribute '_init_weights'
Thanks for help! | 10-21-2020 20:03:49 | 10-21-2020 20:03:49 | Hello @XinXia2019, sadly `resize_token_embeddings` is not supported yet for `EncoderDecoderModel`. Instead you could just manually instantiate the encoder and decoder and apply `resize_token_embeddings` on each part before wrapping them into the `EncoderDecoderModel` framework.<|||||>Thanks!<|||||>(for anyone that might stumble upon this)
I believe you can access the encoder and decoder models from the EncoderDecoderModel instance and resize their corresponding token embeddings, e.g.:
```
...
tokenizer_length = len(tokenizer)
model.encoder.resize_token_embeddings(tokenizer_length)
model.decoder.resize_token_embeddings(tokenizer_length)
...
``` |
transformers | 7,952 | closed | model card for German Sentence Embeddings V2 | - new model card for "German RoBERTa for Sentence Embeddings V2"
- marked old model as outdated | 10-21-2020 19:59:35 | 10-21-2020 19:59:35 | Heya - Are there any concerns to merge this PR? Please let me know.
Many thanks, Philip |
transformers | 7,951 | open | Unexpected/wrong handling of added special tokens in special_tokens_mask (GPT1, BERT, possibly others) | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.1.0
- Platform: Linux-5.4.0-52-generic-x86_64-with-glibc2.29
- Python version: 3.8.5
- PyTorch version (GPU?): 1.6.0+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: **No**
- Using distributed or parallel set-up in script?: **No**
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
Most appropriate seems @mfuntowicz (tokenization), blame says @thomwolf.
## Information
Model I am using (Bert, XLNet ...): OpenAI GPT (also BERT)
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
I am adding special tokens (`BOS`, `SEP` and `EOS`) to GPT1 tokenizer in order to format and fine-tune a GPT model a bit differently. I am also making use of the convenient `return_special_tokens_mask` argument in `encode_plus()`, though it does not seem to mark the added custom special tokens as special in the returned mask.
The same is also true when adding custom special tokens to BERT tokenizer. I did not check beyond these two.
The problem for GPT seems to be that `get_special_tokens_mask()` in `tokenization_utils.py` does not take any special tokens into account:
```python
def get_special_tokens_mask(
    self, token_ids_0: List, token_ids_1: Optional[List] = None, already_has_special_tokens: bool = False
) -> List[int]:
    return [0] * ((len(token_ids_1) if token_ids_1 else 0) + len(token_ids_0))
```
For BERT, it only seems to take into account `[CLS]` and `[SEP]`.
## To reproduce
```python
from transformers import OpenAIGPTTokenizer
tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-gpt")
tokenizer.add_special_tokens({
    "bos_token": "<bos>",
    "sep_token": "<sep>",
    "eos_token": "<eos>"
})

# Does not work this way either
# tokenizer.add_special_tokens({
#     "additional_special_tokens": ["<bos>", "<sep>", "<eos>"]
# })

encoded = tokenizer.encode_plus("<bos> State your name, rank and intention <sep> The Doctor, doctor, fun. <eos>",
                                return_special_tokens_mask=True)
print(encoded["input_ids"])
print(encoded["special_tokens_mask"]) # This returns all zeros
```
## Expected behavior
I would expect that the additional special tokens also get marked as special, i.e. that the `special_tokens_mask` in above snippet returns `[1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1]`
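In the meantime, a manual workaround is to build the mask from `all_special_ids`; a small sketch based on the snippet above:
```python
# Workaround sketch: mark every id that is a registered special token.
special_ids = set(tokenizer.all_special_ids)
manual_mask = [1 if tok_id in special_ids else 0 for tok_id in encoded["input_ids"]]
print(manual_mask)  # 1 at the <bos>/<sep>/<eos> positions, 0 elsewhere
```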
| 10-21-2020 19:40:41 | 10-21-2020 19:40:41 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>> This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
>
Keep it open<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>> This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
>
> Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
👋<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>> This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
>
> Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
Bump |
transformers | 7,950 | closed | Code bug in tokenization_utils.py? | In the function split_on_tokens in tokenization_utils.py, it contains the following logic:
if not text.strip():
    return []
if not tok_list:
    return self._tokenize(text)
So if the text contains just white space, '[]' will be returned. However, if the text contains white space and visible characters, the white space will be used in tokenization. For example, for the following code:
from transformers import GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
print(tokenizer("\nNorth")['input_ids']) # output [198, 14157], since 198 <-> \n and 14157 <-> North
print(tokenizer("\n")['input_ids']) # output [], even if 198 <-> \n
Is this behavior that we expected? | 10-21-2020 18:21:47 | 10-21-2020 18:21:47 | Pinging @mfuntowicz, @n1t0 for their opinions<|||||>I think we should delegate to the underlying tokenization algorithm and never strip like we do here. This what is done in `tokenizers` and thus one of the source of discrepancies between fast and slow tokenizers.
For most tokenizers this is not a problem, since this whitespace gets removed later anyway, but for some others (like gpt2) it is.
Note: it is impossible to build tokenizers that rely on formatting under such rules. For example, it would be impossible to tokenize Python source code.
(Cc @thomwolf)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,949 | closed | fix 'encode_plus' docstring for 'special_tokens_mask' (0s and 1s were reversed) | # What does this PR do?
Fixes the docstring for `encode_plus`.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger or anyone really.
| 10-21-2020 17:54:08 | 10-21-2020 17:54:08 | |
transformers | 7,948 | closed | Error: should have a 'get_encoder' function defined when running model.generate() | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.4.0
- Platform: Windows-10-10.0.17134-SP0
- Python version: 3.8.4
- PyTorch version (GPU?): 1.5.1+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: don't know
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.-->
@sshleifer
## Information
Model I am using (Bert, XLNet ...): facebook/bart-large-cnn
The problem arises when using:
* [X ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run the code below
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
model = AutoModel.from_pretrained("facebook/bart-large-cnn")
ARTICLE_TO_SUMMARIZE = "My friends are cool but they eat too many carbs."
inputs = tokenizer([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors='pt')
# Generate Summary
summary_ids = model.generate(inputs['input_ids'], num_beams=4, max_length=5, early_stopping=True)
```
Error i get:
>>> summary_ids = model.generate(inputs['input_ids'], num_beams=4, max_length=5, early_stopping=True)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\scbtoto\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\autograd\grad_mode.py", line 15, in decorate_context
return func(*args, **kwargs)
File "C:\Users\scbtoto\AppData\Local\Programs\Python\Python38\lib\site-packages\transformers\generation_utils.py", line 401, in generate
assert hasattr(self, "get_encoder"), "{} should have a 'get_encoder' function defined".format(self)
AssertionError: BartModel(
(shared): Embedding(50264, 1024, padding_idx=1)
(encoder): BartEncoder(
(embed_tokens): Embedding(50264, 1024, padding_idx=1)
(embed_positions): LearnedPositionalEmbedding(1026, 1024, padding_idx=1)
(layers): ModuleList(
(0): EncoderLayer(
(self_attn): Attention(
(k_proj): Linear(in_features=1024, out_features=1024, bias=True)
(v_proj): Linear(in_features=1024, out_features=1024, bias=True)
(q_proj): Linear(in_features=1024, out_features=1024, bias=True)
(out_proj): Linear(in_features=1024, out_features=1024, bias=True)
)
...
...
...
)
(layernorm_embedding): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
)
) should have a 'get_encoder' function defined
>>>
## Expected behavior
I'm trying to run the basic example from the docs found here:
https://huggingface.co/transformers/model_doc/bart.html
I can run the code below without a problem, so transformers should be properly installed according to the installation docs.
python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))"
| 10-21-2020 16:53:24 | 10-21-2020 16:53:24 | Don't see that in the docs.
the docs use `BartForConditionalGeneration`.
You could also use `AutoModelForSeq2SeqLM`.
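For completeness, a version of the snippet above with the conditional-generation class would look roughly like this (a sketch, untested here):
```python
from transformers import AutoTokenizer, BartForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

inputs = tokenizer(["My friends are cool but they eat too many carbs."],
                   max_length=1024, truncation=True, return_tensors="pt")
summary_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=5, early_stopping=True)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```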

|
transformers | 7,947 | closed | [GPT2 batch generation] Make test clearer. `do_sample=True` is not deterministic. | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #7745
Small fix that deletes an unnecessary line from the test.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| 10-21-2020 16:53:08 | 10-21-2020 16:53:08 | |
transformers | 7,946 | closed | EncoderDecoderModel loss function | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
Hey, I want to ask the following question.
How is the loss calculated in `EncoderDecoderModel`? What is the mathematical formula of the loss function?
I just wrote the code like this:
outputs = model(input_ids=src, attention_mask=mask, decoder_input_ids=dst, labels=dst, return_dict=True)
loss, logits = outputs.loss, outputs.logits
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on the forum/Stack Overflow**: | 10-21-2020 16:02:43 | 10-21-2020 16:02:43 | Hello, this depends on the decoder you use to initialize the encoder-decoder model. What decoder do you use?<|||||>I use 'bert-base-uncased'. just like this
model = EncoderDecoderModel.from_encoder_decoder_pretrained('bert-base-uncased', 'bert-base-uncased')<|||||>I'm not sure this is the recommended way to load the models as it gives the following result:
```
Some weights of BertLMHeadModel were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['bert.encoder.layer.0.crossattention.self.query.weight', 'bert.encoder.layer.0.crossattention.self.query.bias', [...]
```
with pretty much all model weights.
Will ping @patrickvonplaten for advice.<|||||>Hey @AI678,
1) The model should be initialized just as you did with
```python
model = EncoderDecoderModel.from_encoder_decoder_pretrained('bert-base-uncased', 'bert-base-uncased')
```
It's normal that `None` of the cross-attention layers are initialized because BERT does not have any and they have to be fine-tuned down the road.
2) To Train a Bert2Bert, you are also correct in doing:
```python
outputs = model(input_ids=src, attention_mask=mask, decoder_input_ids=dst, labels=dst, return_dict=True)
loss, logits = outputs.loss, outputs.logits
```
because BERT automatically shifts the labels for you, see: https://github.com/huggingface/transformers/blob/901e9b8eda2fe88af717f960ddc05cac1803679b/src/transformers/modeling_bert.py#L1060
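Concretely, with `labels` provided the decoder loss is just token-level cross-entropy between the shifted logits and labels, roughly like this sketch (dummy tensors, padding/masking details omitted):
```python
import torch

batch, seq_len, vocab_size = 2, 8, 30522            # dummy shapes for illustration
logits = torch.randn(batch, seq_len, vocab_size)     # decoder output scores
labels = torch.randint(0, vocab_size, (batch, seq_len))

# BertLMHeadModel shifts internally: predict token t+1 from positions <= t.
shifted_logits = logits[:, :-1, :].contiguous()
shifted_labels = labels[:, 1:].contiguous()
loss = torch.nn.CrossEntropyLoss()(
    shifted_logits.view(-1, vocab_size),
    shifted_labels.view(-1),
)
```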
Also I'll publish a more in-detail notebook about "Leveraging Encoder-Decoder models" soon. This model card could also be helpful: https://huggingface.co/patrickvonplaten/bert2bert-cnn_dailymail-fp16#bert2bert-summarization-with-%F0%9F%A4%97-encoderdecoder-framework
<|||||>thank you very much |
transformers | 7,945 | closed | Move NoLayerEmbedTokens | As agreed upon with @patrickvonplaten , this moves the very useful, very model agnostic, `NoLayerEmbedTokens` to `modeling_tf_utils.py`, where it can be used by `TFBart` and `TFT5`.
`TFProphetNet` and other seq2seq may also eventually need. | 10-21-2020 15:08:26 | 10-21-2020 15:08:26 | `TFWrappedEmbeddings` is definitely way better, but as a user, I still don't understand what this means. Do you think we could add a comment where it's used? Maybe something along the lines of:
```
# Wraps layer to avoid problems with weight restoring and ensuring we're in the correct TF scope.
```<|||||>Done, thanks for writing it out! |
transformers | 7,944 | closed | [ProphetNet] Correct Doc string example | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| 10-21-2020 14:00:43 | 10-21-2020 14:00:43 | Can we have the examples take less than 119 chars (i'd even settle for 200 honestly)?
<|||||>> Can we have the examples take less than 119 chars (i'd even settle for 200 honestly)?
Can I break lines while using `>>>` ? Or just use a smaller input text? <|||||>> Can I break lines while using `>>>` ? Or just use a smaller input text?
You use `... ` instead of `>>> ` for the intermediate lines, but yes you can. See the [quicktour](https://github.com/huggingface/transformers/blob/master/docs/source/quicktour.rst) for an example (scroll down to "That's encouraging! You can use it on a list of sentences" since GitHub doesn't let me link a specific line in a rst file).<|||||>> Thanks :-)
My Pylinter doesn't pick up the docstring :-/ will have to find a way to fix this. Sorry for all those long lines in the docs |
transformers | 7,943 | closed | [PretrainedConfig] Fix save pretrained config for edge case | # What does this PR do?
There is an edge case for which the "diff" save method for `PretrainedConfig` fails. We decided a while ago in this PR: https://github.com/huggingface/transformers/pull/3797 that we wanted to have more readable configs and thus tweaked the `save_pretrained()` method so that only parameters that are different to the default **PretrainedConfig** class are serialized.
There was an edge case we did not consider:
If a parameter like `add_cross_attention` defaults to `True` in `ProphetNetConfig` but defaults to `False` in `PretrainedConfig`, a problem can arise when a user wants to save `add_cross_attention=False` in their `ProphetNetConfig`. Because `add_cross_attention=False` corresponds to the `PretrainedConfig` default case, this parameter will not be serialized, and when the config is reloaded the parameter falls back to the `ProphetNetConfig` default, which is `True` and therefore wrong.
This PR fixes this behavior by simply making sure that a parameter is only **not** saved if it is equal to both `PretrainedConfig` and `ProphetNetConfig`.
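In pseudo-code, the serialization rule becomes roughly the following (an illustrative sketch, not the exact implementation):
```python
# Sketch of the rule: only drop a key if it equals BOTH defaults.
def serializable_items(config, config_class, base_class):
    class_defaults = config_class().to_dict()   # e.g. ProphetNetConfig()
    base_defaults = base_class().to_dict()      # PretrainedConfig()
    for key, value in config.to_dict().items():
        if value == class_defaults.get(key) and value == base_defaults.get(key):
            continue
        yield key, value
```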
This feature requires configs to be instantiated without providing any parameters. This is currently not possible for the `EncoderDecoderModelConfig` and `RagConfig` because those configs are composed of multiple sub-configs which have to be provided. => A new class attribute `is_composition` is added to correctly handle these classes.
Two tests are added.
Also cc @stas00 for the FSMT config.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
| 10-21-2020 11:17:30 | 10-21-2020 11:17:30 | Any reason not to look at just the config class? At a first glance, I'd say we want to compare the defaults to the class we instantiated, not to the superclass `PretrainedConfig`.<|||||>> Any reason not to look at just the config class? At a first glance, I'd say we want to compare the defaults to the class we instantiated, not to the superclass `PretrainedConfig`.
Back then this was my initial idea as well - but then the configs could be more or less empty if all parameters are the same. This has a couple of disadvantages:
- When looking at the config online people cannot see any parameters and would have to look into the code which might be annoying
- This would make the configs much more prone to break if the init values of respective classes are changed.<|||||>**UPDATE**: I had to add a class attribute to the config to make this feature work (see description above) - @julien-c @sgugger @thomwolf @LysandreJik - could you check if this is fine for you guys.<|||||>LGTM |
transformers | 7,942 | closed | [ProphetNet] Add Question Generation Model + Test | Thanks a lot for providing the model @qiweizhen ! | 10-21-2020 09:15:00 | 10-21-2020 09:15:00 | 
transformers | 7,941 | closed | [RAG] Handle the case when title is None while loading own datasets | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
While loading our own datasets from CSV, `title` and `text` can be `None`.
These `None` values cause issues with the DPR tokenizer, hence this PR handles the following cases (a minimal sketch follows the list):
1) When `text` is `None`, skip that record.
2) When `title` is `None`, use an empty string.
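A minimal sketch of that handling. The column names and the standalone function are illustrative; the actual change lives in the RAG dataset-loading code:

```python
import csv

def load_passages(csv_path):
    passages = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            text, title = row.get("text"), row.get("title")
            if not text:  # 1) skip records whose text is missing
                continue
            passages.append({"title": title or "", "text": text})  # 2) empty string for missing titles
    return passages
```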
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
@LysandreJik, @patrickvonplaten | 10-21-2020 08:58:02 | 10-21-2020 08:58:02 | @lhoestq Can you please check |
transformers | 7,940 | closed | Access bert output with output_hidden_states=True of TFBertForSequenceClassification fails | Hey,
I want to access the output of the main bert model inside the TFBertForSequenceClassification model with `output_hidden_states`:
`bert_model = TFBertForSequenceClassification.from_pretrained('bert-base-german-cased', output_hidden_states=True)`
then
```
print(bert_model.summary())
print(bert_model.get_layer("bert").output)
print(bert_model.layers[0].output[2])  # -> yields the error
```
`bert_model.get_layer("bert").output` gives just the 2 outputs for last_hidden_state and pooled_output, but the hidden_states are missing.
Why are the hidden_states not available even though I set output_hidden_states=True?
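For reference, a minimal sketch of one way to get at them, assuming a recent version: the extra outputs are only produced when the model is called on actual inputs, so they have to be read from the call's return value rather than from the symbolic `layer.output`:

```python
from transformers import BertTokenizer, TFBertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-german-cased")
model = TFBertForSequenceClassification.from_pretrained(
    "bert-base-german-cased", output_hidden_states=True
)
inputs = tokenizer("Das ist ein Test.", return_tensors="tf")
outputs = model(inputs)
hidden_states = outputs[-1]  # tuple with one tensor per layer (plus the embeddings)
print(len(hidden_states), hidden_states[0].shape)
```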
| 10-21-2020 05:59:53 | 10-21-2020 05:59:53 | Hello! Could you provide your software versions so that I may investigate?
I get an error earlier: `print(bert_model.get_layer("bert").output)`:
```
Traceback (most recent call last):
File "<input>", line 4, in <module>
File "/home/lysandre/Workspaces/Python/transformers/.env/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 2105, in output
raise AttributeError('Layer ' + self.name + ' has no inbound nodes.')
AttributeError: Layer bert has no inbound nodes.
```<|||||>Actually I found weird things, because previously it worked, but after a Windows update I think my version of the transformers library was set back somehow. Because I used version 3.0.2, and there was no output at all of the hidden states. But now for the latest stable version (3.4) it works. <|||||>Glad to hear it! |
transformers | 7,939 | closed | Fix BatchEncoding.word_to_tokens for removed tokens | Fixes https://github.com/huggingface/tokenizers/issues/343
Copied from issue on `tokenizers` repo:
> I'm working with pre-tokenized data (UD-Treebanks) for a sequence-tagging task, since I don't want to inflate the importance of a training example based on the number of word-pieces the token gets split into, I need to map the labels to only the first word-piece of a token.
>
> To achieve this, I was iterating over the words in the original sentence as taken from the treebank and used the word_to_tokens method with the offset of the word in the sentence to get the corresponding token span. If words simply vanish from the sentence, then at first the offsets become invalid and at the final word of the sequence an exception is raised because there's no offset for disappearing words in the sequence.
This notebook demontrates the issue:
https://colab.research.google.com/drive/139mVXMQ7jZBBoTpgkribgVpOu6W1u8e9?usp=sharing
~~~Py
import transformers
import torch
tokenizer = transformers.AutoTokenizer.from_pretrained("bert-base-multilingual-cased", use_fast=True)
batch = [["Test", "\xad", "test"]]
encoded_batch = tokenizer.batch_encode_plus(
    batch,
    padding=True,
    is_pretokenized=True,
    return_tensors='pt',
    truncation=True)
first_pieces = torch.zeros_like(encoded_batch.attention_mask, dtype=torch.bool)
for row, sentence in enumerate(batch):
    for col, token in enumerate(sentence):
        idx = encoded_batch.word_to_tokens(row, col)[0]  # this method raises the exception
        first_pieces[row, idx] = True
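        # Aside (not part of the original repro): a defensive variant could guard
        # against words the tokenizer dropped (like "\xad"), assuming the fixed
        # word_to_tokens signals them by returning None instead of raising:
        # span = encoded_batch.word_to_tokens(row, col)
        # if span is not None:
        #     first_pieces[row, span[0]] = True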
~~~ | 10-20-2020 22:11:54 | 10-20-2020 22:11:54 | |
transformers | 7,938 | closed | PPL guide code snippet minor fix | # What does this PR do?
Minor fix to the code snippet in the [perplexity guide](https://huggingface.co/transformers/perplexity.html), as discussed in [this thread](https://discuss.huggingface.co/t/guide-the-best-way-to-calculate-the-perplexity-of-fixed-length-models/193).
Previously the snippet didn't take into account the length of the last loop over the data, which can be shorter than the specified `stride` length. | 10-20-2020 22:07:34 | 10-20-2020 22:07:34 | 
transformers | 7,937 | closed | Your example code for WNUT NER produces array indexing ValueError | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.4
- Platform: Google Colab
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0
- Tensorflow version (GPU?):
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
@stefan-it, @sgugger
## Information
Model I am using (Bert, XLNet ...): DistilBERT
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
I'm trying to run the example code Advanced Guides --> Fine-tuning with custom datasets --> [Token Classification with W-NUT Emerging Entities](https://huggingface.co/transformers/custom_datasets.html#token-classification-with-w-nut-emerging-entities).
Steps to reproduce the behavior:
1. I already have a [Google CoLab notebook with your code](https://colab.research.google.com/drive/1i5N7Xc-i91bqXmcp9hamt5q3a_5a-ZnZ?usp=sharing).
2. I use the `tokenizer` with `max_length=64`, which is typically my "best practice" choice. Note that if I set `max_length=None`, everything runs successfully.
```python
max_length = 64
encodings = tokenizer(texts, is_split_into_words=True, max_length=max_length, return_offsets_mapping=True, padding=True, truncation=True)
```
3. When I run `encode_tags()` on the WNUT data, I get a ValueError
```python
labels = encode_tags(tags, encodings)
11 # set labels whose first offset position is 0 and the second is not 0
---> 12 doc_enc_labels[(arr_offset[:,0] == 0) & (arr_offset[:,1] != 0)] = doc_labels
13 encoded_labels.append(doc_enc_labels.tolist())
14
ValueError: NumPy boolean array indexing assignment cannot assign 29 input values to the 24 output values where the mask is true
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
I expect that `encode_tags()` should return the correct IOB tag labels when I run your `Tokenizer` with a `max_length=64`. | 10-20-2020 20:17:05 | 10-20-2020 20:17:05 | Hi,
not a HuggingFace developer but I came across the same problem. I think this is this is due to the fact that the Tokenizer is truncating sequences longer than 64 so there is a mismatch in length between `tags` and `encodings`. This is also why it's fixed when you increase the max_lenght. Another reason may be that some characters in your sentences are not properly decoded because of wrong charset detection. I hope this helps.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>I am also facing this issue. I am using custom dataset and haven't passed any max_length argument to the tokenizer.
Any idea how to fix this ? But same piece of code works well on W-NUT dataset<|||||>> Hi,
> not a HuggingFace developer but I came across the same problem. I think this is this is due to the fact that the Tokenizer is truncating sequences longer than 64 so there is a mismatch in length between `tags` and `encodings`. This is also why it's fixed when you increase the max_lenght. Another reason may be that some characters in your sentences are not properly decoded because of wrong charset detection. I hope this helps.
I observed that in the notebook shared by Hugging face for W-Nut dataset either, the tags and encodings length (for each record) are not same. So hoping that shouldn't be the issue. <|||||>@joeddav I am facing the same issue when switching to another dataset, what could be the problem? the behavior continues even with setting `max_length=None`<|||||>For me the error occurred using the example code in combination with a sentence piece tokenizer (e.g. XLM-RoBERTa). Switching to the updated code used in the run_ner.py script (https://github.com/huggingface/transformers/blob/ad072e852816cd32547504c2eb018995550b126a/examples/token-classification/run_ner.py) solved the issue for me. <|||||>I figured out the problem. A typical input instance has `N` tokens and `N` NER tags with a one-to-one correspondence. When you pass in the sentence to the tokenizer, it will add `k` more tokens for either (1) subword tokens (e.g. `##ing`) or (2) special model-specific tokens (e.g. `[CLS]` or `[SEP]`. So now you have `N+k` tokens and `N` NER tags.
If you apply a max length truncation (e.g. `64`), then those `N+k` tokens will get truncated to `64`, leaving an unpredictable mix of valid tokens and special tokens because both types of tokens may have been truncated. However, there are still `N` NER tags which may not match up against valid tokens because the latter may have been truncated.
I fixed the problem by one of several approaches:
1. Removing data instances that are problematically long. For example, I removed sentences which had more than 45 tokens. Using Pandas really helps out here.
2. Increasing the truncation length to, say, 128, or whatever number that's longer than any `N+k`. However, this increase forces me to reduce my batch size due to GPU memory constraints.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread.<|||||>I solved the issue by replacing
```python
doc_enc_labels[(arr_offset[:,0] == 0) & (arr_offset[:,1] != 0)] = doc_labels
encoded_labels.append(doc_enc_labels.tolist())
```
with
```python
mask = (arr_offset[:, 0] == 0) & (arr_offset[:, 1] != 0)
doc_enc_labels[mask] = doc_labels[:np.sum(mask)]
encoded_labels.append(doc_enc_labels.tolist())
```
By this way, it will only map the first `np.sum(mask)` true indices of `doc_labels` in case of any indexing problem. I am a newbie 🤗 Transformers user, and I wonder if this solution may cause any problems.<|||||>I have this same issue but
```python
mask = (arr_offset[:, 0] == 0) & (arr_offset[:, 1] != 0)
doc_enc_labels[mask] = doc_labels[:np.sum(mask)]
encoded_labels.append(doc_enc_labels.tolist())
```
did not work after the first encoded_labels run<|||||>Guys if the example has issues, why even put it out there and have us chaise our tails?<|||||>Hey! The example is currently being rewritten here by @stevhliu: https://github.com/huggingface/transformers/pull/13923<|||||>@LysandreJik Thanks for revisiting this problem. I feel that aligning tokens, token labels, and sub-world pieces is too complex for users of the library to implement themselves. Can you (HuggingFace) please provide some utility functions to make this task easier?<|||||>Hi @githubrandomuser2017, the examples we provide showcase exactly how to do that, for example here: https://github.com/huggingface/transformers/blob/master/examples/pytorch/token-classification/run_ner.py#L370-L404
Does this utility function help you out? <|||||>PR #13923 was merged with the new version of this example. Closing this issue, feel free to reopen/comment if the issue arises again.<|||||>@LysandreJik
> Hi @githubrandomuser2017, the examples we provide showcase exactly how to do that, for example here: https://github.com/huggingface/transformers/blob/master/examples/pytorch/token-classification/run_ner.py#L370-L404
>
> Does this utility function help you out?
I'll let other users chime in. |
transformers | 7,936 | closed | cannot load customized tokenizer with modified vocabulary | I have customized a tokenizer saved as tokenization_new.py and modified the vocab.txt from s3 server. I tried
`from transformers import NewBertTokenizer`
`tokenizer = NewBertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)`
where I modified `PRETRAINED_VOCAB_FILES_MAP = {"vocab_file": {"bert-base-uncased": "/vocab/vocab.txt"}}`
where `/vocab/` is a directory parallel to tokenization_new.py that contains my customized vocabulary. However, I got an error raised as
`Model name 'bert-base-uncased' was not found in tokenizers model name list (bert-base-uncased). We assumed 'bert-base-uncased' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.`
What should I do to use my own customized tokenizer?
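For what it's worth, here is a minimal sketch of the usual approach (the directory name is hypothetical): rather than editing `PRETRAINED_VOCAB_FILES_MAP`, point `from_pretrained` at a local directory that contains the customized `vocab.txt`. The same call works for a custom tokenizer subclass:

```python
from transformers import BertTokenizer

# assumption: ./my_tokenizer/ contains the customized vocab.txt
tokenizer = BertTokenizer.from_pretrained("./my_tokenizer", do_lower_case=True)
print(tokenizer.tokenize("testing the customized vocabulary"))
```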
Thanks for the help! | 10-20-2020 18:11:49 | 10-20-2020 18:11:49 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,935 | closed | TensorBoard/Wandb/optuna/raytune integration improvements. | Improves TensorBoard logging by grouping train / eval metrics as it is usually done in TensorBoard.
Improves TensorBoard/optuna model hyper-parameters logging.
Improves optuna and Ray/tune integration, and provides model hyper-parameter naming.
A test (and sample code) is provided in `test_trainer.TrainerHyperParameterIntegrationTest`.
Some more work may be needed to harmonize metric naming for eval / train: the "eval_" prefix currently used is not very convenient, and an "eval/" prefix would be more foolproof and consistent with TensorBoard usage, but it would break quite some code, so it may be done in a separate PR.
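As an illustration of the grouping convention being referred to (tag names here are just examples): TensorBoard groups scalars by the part of the tag before the slash, so `train/...` and `eval/...` land in separate sections:

```python
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir="runs/example")
for step in range(100):
    writer.add_scalar("train/loss", 1.0 / (step + 1), global_step=step)
    if step % 10 == 0:
        writer.add_scalar("eval/loss", 1.2 / (step + 1), global_step=step)
writer.close()
```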
| 10-20-2020 17:30:22 | 10-20-2020 17:30:22 | |
transformers | 7,934 | closed | [s2s] create doc for pegasus/fsmt replication | This PR:
* creates a dedicated doc for getting eval data
* moves the existing entries to the new doc
* adds FSMT
* adds pegasus
@sshleifer | 10-20-2020 17:02:59 | 10-20-2020 17:02:59 | |
transformers | 7,933 | closed | Fix comet_ml import and add ensure availability | # What does this PR do?
1. Adds a better check to make sure comet_ml is ready to use
2. Moves the integration imports above the ML imports. This is required to use comet_ml
The current version 3.4.0 is broken, and can not be used with comet_ml without a workaround. This PR fixes that.
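As a rough sketch of what such a check can look like (the helper name and the exact configuration sources are assumptions, not the PR's actual code):

```python
import importlib.util
import os

def is_comet_available() -> bool:
    # installed at all?
    if importlib.util.find_spec("comet_ml") is None:
        return False
    # being installed is not enough: comet_ml also needs to be configured
    # (e.g. an API key), otherwise it should be treated as unavailable
    return bool(os.getenv("COMET_API_KEY") or os.path.exists(os.path.expanduser("~/.comet.config")))
```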
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Trainer: @sgugger
| 10-20-2020 17:00:01 | 10-20-2020 17:00:01 | FYI: @stas00 <|||||>> but I agree with @stas00: I wouldn't want to see warnings about cometml or wandb if I don't have the libraries installed.
This is not the case already. We were talking about the odd case where some other package installed one of these as its auto-dependencies. So the user now unwittingly needs to figure out why in the world she needs to get an API key for something she didn't ask for in first place.
Unfortunately I am forced to reset my conda env a lot recently, so I lost the one where this exact scenario has happened, so at the moment I can't point the guilty finger at which package installed `cometml` without me doing so intentionally/directly. If it happens again I will report back.
Otherwise all is good.<|||||>Resolved merge conflicts. Should be ready to go.<|||||>Thanks! |
transformers | 7,932 | closed | Addition of MMI-antiLM decoding | # 🚀 Feature request
Hugging Face does a great job of including popular decoding strategies such as nucleus sampling, top-k, and temperature. There are also other really interesting decoding strategies for chatbots to fix the response "blandness" or "I don't know" problem, such as using the **Maximum Mutual Information anti-Language Model objective (MMI anti-LM)**. The algorithm is defined in [A Diversity-Promoting Objective Function for Neural Conversation Models](https://www.aclweb.org/anthology/N16-1014.pdf).
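For intuition, here is a hedged sketch of the anti-LM objective from the paper, score(y | x) = log p(y | x) - lambda * log p_LM(y), written as a rescoring function over a candidate response. The model choices below are illustrative stand-ins, and the full method applies the penalty during decoding itself (typically only to the first tokens):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
dialog_model = GPT2LMHeadModel.from_pretrained("gpt2")    # stand-in for p(y | x)
anti_lm = GPT2LMHeadModel.from_pretrained("distilgpt2")   # stand-in for p(y)

def sequence_logprob(model, input_ids, start):
    """Sum of log p(token_i | tokens_<i) over positions >= start."""
    with torch.no_grad():
        logits = model(input_ids)[0]
    logprobs = torch.log_softmax(logits[:, :-1], dim=-1)
    token_lp = logprobs.gather(-1, input_ids[:, 1:].unsqueeze(-1)).squeeze(-1)
    return token_lp[:, start - 1 :].sum(dim=-1)

def mmi_anti_lm_score(context, response, lam=0.5):
    # assumes tokenizing context + response does not shift token boundaries
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    full_ids = tokenizer(context + response, return_tensors="pt").input_ids
    resp_ids = tokenizer(response, return_tensors="pt").input_ids
    cond_lp = sequence_logprob(dialog_model, full_ids, start=ctx_ids.shape[1])
    lm_lp = sequence_logprob(anti_lm, resp_ids, start=1)
    return (cond_lp - lam * lm_lp).item()

print(mmi_anti_lm_score("How was your day?", " I don't know."))
```

Generic, high-likelihood replies like "I don't know" get penalized by the anti-LM term, which is what promotes more diverse responses.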
## Motivation
I'm requesting this as a feature because I used this in a narrative generation paper. From this point forward I will be using "we" to reference my co-authors (@XiangLi1999 also contributed to the code) and I. The results of our work show that antiLM decoding does in fact help make the generated output more interesting without hurting fluency. Our work is [Decoding Methods for Neural Narrative Generation](https://arxiv.org/abs/2010.07375) and the rest of our code is in our [paper repo](https://github.com/AADeLucia/gpt2-narrative-decoding).
We think others would also be interested in using this decoding method for their work.
## Your contribution
We have a working implementation in a hacked version of the `generation_utils.py` file. It's not pretty (sorry) but maybe a good starting point? The code is in PR #7931.
Also the author's implementation (not huggingface-based) is in [Jiwei Li's repo](https://github.com/jiweil/Neural-Dialogue-Generation). | 10-20-2020 16:53:21 | 10-20-2020 16:53:21 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,931 | closed | MMI-antiLM decoding | # What does this PR do?
Implements **Maximum Mutual Information anti-Language Model objective (MMI anti-LM)** decoding from [A Diversity-Promoting Objective Function for Neural Conversation Models](https://www.aclweb.org/anthology/N16-1014.pdf).
| 10-20-2020 16:51:47 | 10-20-2020 16:51:47 | This technique looks really cool! Unfortunately running `generate` with two models will break lots of our assumptions, so maybe you could write a standalone:
```python
def generate_anti_lm(model, lm_model, **kwargs):
... logic ...
return generated_ids
```
,put it in `examples/anti_mlm_generation/`, and add a test that it runs with `sshleifer/tiny-gpt2`, for example?
Does that make sense to you @patrickvonplaten ?<|||||>Hey @AADeLucia - thanks for your PR! The `generate()` function is a very central part of the library and thus we have to be super careful when implementing new features. I agree with @sshleifer that MMI-antiLM decoding probably fits better in an example (has its own `generate()` function in an example file) to begin with - would that be ok for you?<|||||>> Hey @AADeLucia - thanks for your PR! The `generate()` function is a very central part of the library and thus we have to be super careful when implementing new features. I agree with @sshleifer that MMI-antiLM decoding probably fits better in an example (has its own `generate()` function in an example file) to begin with - would that be ok for you?
Thank you so much for your quick responses! Yes, putting it as its own example is okay with me.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 7,930 | closed | update model cards of Illuin models | # What does this PR do?
Updates the model cards of Illuin's uploaded models to provide various pieces of information.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| 10-20-2020 16:15:56 | 10-20-2020 16:15:56 | |
transformers | 7,929 | closed | Reformer model does not work with padded sequences | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.4.0
- Platform: Linux
- Python version: 3.8.5
- PyTorch version (GPU?): 1.6.0 (No)
- Tensorflow version (GPU?): 2.3.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
@patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): Reformer
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name) CommonGen
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```python
from transformers import ReformerTokenizer, ReformerModel
tokenizer = ReformerTokenizer.from_pretrained('google/reformer-crime-and-punishment')
seq = tokenizer(['Hello this is a test.', 'This is a test as well'], padding=True, return_tensors='pt')
reformer = ReformerModel.from_pretrained('google/reformer-crime-and-punishment')
out = reformer(**seq)
```
```python
Traceback (most recent call last):
File "reformerbug.py", line 20, in <module>
out = reformer(**seq)
File "/home/fabian/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/fabian/.local/lib/python3.8/site-packages/transformers/modeling_reformer.py", line 2096, in forward
embedding_output = self.embeddings(
File "/home/fabian/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/fabian/.local/lib/python3.8/site-packages/transformers/modeling_reformer.py", line 252, in forward
inputs_embeds = self.word_embeddings(input_ids)
File "/home/fabian/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/fabian/.local/lib/python3.8/site-packages/torch/nn/modules/sparse.py", line 124, in forward
return F.embedding(
File "/home/fabian/.local/lib/python3.8/site-packages/torch/nn/functional.py", line 1814, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
IndexError: index out of range in self
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
The model should properly calculate the forward pass given the encoded sequence.
<!-- A clear and concise description of what you would expect to happen. -->
| 10-20-2020 15:10:36 | 10-20-2020 15:10:36 | The only Reformer tokenizer we have actually doesn't have a PAD Token which is why this leads to problems. The PR attached below removes the PAD token. Before padding one should set
```python
tokenizer.pad_token = tokenizer.eos_token
```
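Applied to the snippet from the issue, a minimal sketch of the workaround could look like this:

```python
from transformers import ReformerTokenizer, ReformerModel

tokenizer = ReformerTokenizer.from_pretrained("google/reformer-crime-and-punishment")
tokenizer.pad_token = tokenizer.eos_token  # reuse EOS as the padding token
seq = tokenizer(["Hello this is a test.", "This is a test as well"], padding=True, return_tensors="pt")
reformer = ReformerModel.from_pretrained("google/reformer-crime-and-punishment")
out = reformer(**seq)
```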
Similar to GPT2 this won't cause any problems thanks to causal masking, see: https://github.com/huggingface/transformers/issues/4122#issuecomment-713749343 |
transformers | 7,928 | closed | Respect the 119 line chars | Respect the 119-character line limit in the model summary | 10-20-2020 14:56:08 | 10-20-2020 14:56:08 | 
transformers | 7,927 | closed | [ProphetNet] add model summary | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| 10-20-2020 14:03:47 | 10-20-2020 14:03:47 | |
transformers | 7,926 | closed | Validation loop gives OOM when finetuning T5 | While finetuning T5-base on summarization task, using `--sortish_sampler` it gives an OOM error starting from a particular index during the validation loop. After removing those indices and training again, I still get the OOM error, but in the second validation loop, whereas the validation loop worked well the first time in the 0th epoch. I have finetuned t5-base previously as well on the same dataset and the same environment, and it never gave this error.
I am using a batch size of 1 during training and evaluation both.
I am using Colab to finetune the model on a single GPU.
GPU specs:
GPU: Tesla K80
RAM: 12GB
This is the traceback:
Epoch 0: 91% 5644/6231 [25:04<02:36, 3.75it/s, loss=2.089, v_num=16]
Epoch 0: 91% 5645/6231 [25:06<02:36, 3.75it/s, loss=2.089, v_num=16]
Epoch 0: 91% 5646/6231 [25:07<02:36, 3.75it/s, loss=2.089, v_num=16]
Epoch 0: 91% 5647/6231 [25:08<02:35, 3.74it/s, loss=2.089, v_num=16]
Epoch 1: 84% 5247/6231 [18:08<03:24, 4.82it/s, loss=1.859, v_num=16]
Validating: 0it [00:00, ?it/s]
Epoch 1: 84% 5248/6231 [18:10<03:24, 4.81it/s, loss=1.859, v_num=16]
Epoch 1: 84% 5249/6231 [18:11<03:24, 4.81it/s, loss=1.859, v_num=16]Traceback (most recent call last):
File "finetune.py", line 441, in <module>
main(args)
File "finetune.py", line 416, in main
logger=logger,
File "/content/drive/My Drive/Colab Notebooks/transformers_20_Oct_summarization/transformers/examples/lightning_base.py", line 386, in generic_train
trainer.fit(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 440, in fit
results = self.accelerator_backend.train()
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/accelerators/gpu_accelerator.py", line 54, in train
results = self.train_or_test()
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/accelerators/accelerator.py", line 66, in train_or_test
results = self.trainer.train()
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 483, in train
self.train_loop.run_training_epoch()
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/training_loop.py", line 569, in run_training_epoch
self.trainer.run_evaluation(test_mode=False)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 568, in run_evaluation
output = self.evaluation_loop.evaluation_step(test_mode, batch, batch_idx, dataloader_idx)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py", line 171, in evaluation_step
output = self.trainer.accelerator_backend.validation_step(args)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/accelerators/gpu_accelerator.py", line 76, in validation_step
output = self.__validation_step(args)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/accelerators/gpu_accelerator.py", line 86, in __validation_step
output = self.trainer.model.validation_step(*args)
File "finetune.py", line 181, in validation_step
return self._generative_step(batch)
File "finetune.py", line 221, in _generative_step
max_length=self.eval_max_length,
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/grad_mode.py", line 15, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/generation_utils.py", line 489, in generate
model_kwargs=model_kwargs,
File "/usr/local/lib/python3.6/dist-packages/transformers/generation_utils.py", line 665, in _generate_beam_search
outputs = self(**model_inputs, return_dict=True) # (batch_size * num_beams, cur_len, vocab_size)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_t5.py", line 1212, in forward
return_dict=return_dict,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_t5.py", line 767, in forward
output_attentions=output_attentions,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_t5.py", line 556, in forward
output_attentions=output_attentions,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_t5.py", line 478, in forward
output_attentions=output_attentions,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_t5.py", line 374, in forward
q, k.transpose(3, 2)
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 14.73 GiB total capacity; 13.70 GiB already allocated; 13.88 MiB free; 13.72 GiB reserved in total by PyTorch)
Epoch 1: 84%|████████▍ | 5249/6231 [18:13<03:24, 4.80it/s, loss=1.859, v_num=16]
Any help is appreciated. :) | 10-20-2020 12:56:29 | 10-20-2020 12:56:29 | Maybe @sshleifer or @patil-suraj can help here? :-) <|||||>We can't help without:
1) a command that used to work and now OOMs.
2) Some notion of when it worked. (Ideally version numbers but a guess is fine.)
3) current `transformers-cli env` and `pip freeze | grep torch` outputs.<|||||>Maybe colab allocated you something larger than 12GB K80 the last time you ran your command?<|||||>> We can't help without:
>
> 1. a command that used to work and now OOMs.
> 2. Some notion of when it worked. (Ideally version numbers but a guess is fine.)
> 3. current `transformers-cli env` and `pip freeze | grep torch` outputs.
Yeah, I understand what you're saying. Itrained T5 like a month back on colab. Also I wanted to know, if the OOM error comes sporadically, i.e., sometimes in the first epoch, sometimes in the second epoch, and it's always during the validation loop, what should I conclude from it? It is an error in my code, or it is just the lack of appropriate memory.<|||||>Unfortunately, it's not clear what to conclude.
You can force eval to use less memory by controlling `val_max_target_length`, `eval_max_gen_length`, `val_batch_size`, and `eval_num_beams`.
<|||||>Alright. Thank you so much.<|||||>Alternatively, you can try to use `Seq2SeqTrainer`.
`Trainer` has recently added `eval_accumulation_step` argument which offloads the `logits/predictions` to `cpu` every `eval_accumulation_steps` to avoid OOM, you can use this with `Seq2SeqTrainer` as well.<|||||>> Alternatively, you can try to use `Seq2SeqTrainer`.
> `Trainer` has recently added `eval_accumulation_step` argument which offloads the `logits/predictions` to `cpu` every `eval_accumulation_steps` to avoid OOM, you can use this with `Seq2SeqTrainer` as well.
Thank you so much. Tried this and it worked well. :)<|||||>> > Alternatively, you can try to use `Seq2SeqTrainer`.
> > `Trainer` has recently added `eval_accumulation_step` argument which offloads the `logits/predictions` to `cpu` every `eval_accumulation_steps` to avoid OOM, you can use this with `Seq2SeqTrainer` as well.
>
> Thank you so much. Tried this and it worked well. :)
would you share what `eval_accumulation_steps` you used? Thanks.<|||||>I am having the exact same issue, this is happening only during evaluation and not training. thanks<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 7,925 | closed | # Add whole word mask support for lm fine-tune | This PR adds support for a **wwm** (whole word mask) proxy when fine-tuning BERT-like models.
It can be divided into two parts: English model support and Chinese model support.
For English, it's simple. The original tokenization result already contains symbols like '##ing'.
I just use the same mask proxy in [data_collator.py](https://github.com/wlhgtc/transformers/blob/master/src/transformers/data/data_collator.py#L168) by [Google.](https://github.com/google-research/bert/blob/master/create_pretraining_data.py#L342)
For Chinese, it's harder. We need to rely on a (word-level) tokenizer, because BERT is character-level in Chinese.
So I do the following to get word-level tokens:
1. add get info code in [chinese_ref.py](https://github.com/wlhgtc/transformers/blob/master/examples/language-modeling/chinese_ref.py#L79)
2. create a new dataset to keep ref info [language_model.py](https://github.com/wlhgtc/transformers/blob/master/src/transformers/data/datasets/language_modeling.py#L117)
3. create word level ref according to ref [data_collator.py](https://github.com/wlhgtc/transformers/blob/master/src/transformers/data/data_collator.py#L150)
Then, it's all the same as for English.
And I add two parameters (`wwm` and `chinese_ref_path`) to the LM fine-tuning script.
| 10-20-2020 10:24:33 | 10-20-2020 10:24:33 | Seem all test passed( expect the format problem), @sgugger @stas00 Could you help me review these PR ? <|||||>And I wonder which version of black you use in **check_code_quality**.
I got errors as follows:
```
would reformat /home/circleci/transformers/examples/language-modeling/run_language_modeling.py
would reformat /home/circleci/transformers/src/transformers/data/datasets/language_modeling.py
```
I reformat my code with black(19.10b0), and all files are left unchanged<|||||>> Thanks a lot for your PR!
>
> Before digging more in a detailed review, I have a general comment: I think this should be decoupled a bit more: you created a new class `LineByLineWithRefDataset`, and in the same vein, I think you should create a new `DataCollator` for the whole-world masking. This will make it clearer to read and easier to customize.
>
> It would also be super nice if you could document in the README how to use your example with a chinese reference file (do you pass the script you added? or use the script you added to generate a file?)
Finish ~ @sgugger <|||||>> And I wonder which version of black you use in **check_code_quality**.
```
$ grep black setup.py
extras["quality"] = ["black >= 20.8b1", "isort >= 5.5.4", "flake8 >= 3.8.3"]
```<|||||>> > And I wonder which version of black you use in **check_code_quality**.
>
> ```
> $ grep black setup.py
> extras["quality"] = ["black >= 20.8b1", "isort >= 5.5.4", "flake8 >= 3.8.3"]
> ```
thx!<|||||>Looking good to me except for the code quality. If you don't manage to fix it, I can force-push on your branch.<|||||>> Looking good to me except for the code quality. If you don't manage to fix it, I can force-push on your branch.
OK, I tried to fix it but failed :(<|||||>Just made the necessary change. Note that this wasn't styling that caused the isse but the code quality in general. `make quality` was erroring and telling you to run `make fix-copies` (which I did).<|||||>> This mostly looks good to me except I don't fully understand why we need the reference file. What's `LTP`? Why do we need reference files? Can this be explained in the README?
Thanks for your question.
**Q :** Why ref file ?
**A :** Suppose we have a Chinese sentence like : `我喜欢你。` The original Chinese-BERT will tokenize it as `['我','喜','欢','你']` in char level.
Actually, `喜欢` is a whole word. For whole word mask proxy, We need res like `['我','喜','##欢','你']`.
So we need a ref file to tell model which pos of BERT original token should be added `##`.
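A toy sketch of what such a ref encodes. The word segmentation is hard-coded here, whereas the real script gets it from LTP, and the actual ref format in the PR may differ in details such as offsets for special tokens:

```python
def make_ref(bert_char_tokens, segmented_words):
    """Indices of char tokens that are NOT the first char of their word,
    i.e. the positions that should behave like '##' continuations."""
    assert sum(len(w) for w in segmented_words) == len(bert_char_tokens)
    ref, i = [], 0
    for word in segmented_words:
        for j in range(len(word)):
            if j > 0:
                ref.append(i)
            i += 1
    return ref

print(make_ref(["我", "喜", "欢", "你"], ["我", "喜欢", "你"]))  # [2] -> '欢' becomes '##欢'
```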
**Q :** Why LTP ?
**A :** Cause the best known Chinese WWM BERT is [https://github.com/ymcui/Chinese-BERT-wwm](https://github.com/ymcui/Chinese-BERT-wwm). It works well on so many Chines Task like CLUE (Chinese GLUE).
They use LTP, so if we want to fine-tune their model, we need LTP.
@LysandreJik hope this would help.
<|||||>@wlhgtc ltp is not added to the requirements.txt under examples folder <|||||>> @wlhgtc ltp is not added to the requirements.txt under examples folder
Thanks for your notice. I forgot add it to requirements.txt.
But this is an optional package only for Chinese LM Fine-tune(and could be replaced by others tokenizer), I haven't find a way to note that :(<|||||>> > @wlhgtc ltp is not added to the requirements.txt under examples folder
>
> Thanks for your notice. I forgot add it to requirements.txt.
> But this is an optional package only for Chinese LM Fine-tune(and could be replaced by others tokenizer), I haven't find a way to note that :(
Thanks, I also just tried it; ltp requires transformers==3.2 and I have no idea why, so I had to install ltp with no dependencies. Very annoying. By the way, thanks for the excellent work.
One more bug: it looks like when doing eval, it refers to the ref file for the training data. If I set train_data = test_data, it goes through fine. Did I do something wrong? I am trying to follow your process as closely as I can.
```
Traceback (most recent call last):
File "../run_language_modeling.py", line 351, in <module>
main()
File "../run_language_modeling.py", line 279, in main
if training_args.do_eval
File "../run_language_modeling.py", line 174, in get_dataset
return _dataset(args.eval_data_file)
File "../run_language_modeling.py", line 160, in _dataset
ref_path=args.chinese_ref_file,
File "/home/chengyu/anaconda3/envs/pytorch_transformer/lib/python3.7/site-packages/transformers/data/datasets/language_modeling.py", line 139, in __init__
assert len(data) == len(ref)
```
1. Yeah, the LTP version doesn't support the newest transformers. I do the same thing as you.
2. As for the error: it means that your dataset has a different length than your ref file (since we read it line by line, this leads to a mismatch). It seems I didn't add an `eval_ref_file` param to `data_args`, so it reads `train_ref_file` instead, which causes this error.
I will fix it soon.
|
transformers | 7,924 | closed | EncoderDecoderModel not working with DDP | ## Environment info
- `transformers` version: 3.3.1
- Platform: Linux-3.10.0-1127.19.1.el7.x86_64-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.8
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Using Distributed
### Who can help
Trainer: @sgugger
EncoderDecoderModel: @patrickvonplaten
## Information
I am using a combination of `distilroberta-base` as encoder and `distilgpt2` as a decoder. Used `EncoderDecoderModel` for model initialisation and `Trainer` class for fine-tuning model. For launching distributed processes, used `torch.distributed.launch`.
Below is the model config.
```
{
"add_cross_attention": false,
"architectures": null,
"bad_words_ids": null,
"bos_token_id": null,
"chunk_size_feed_forward": 0,
"decoder": {
"_num_labels": 1,
"activation_function": "gelu_new",
"add_cross_attention": true,
"architectures": [
"GPT2LMHeadModel"
],
"attn_pdrop": 0.1,
"bad_words_ids": null,
"bos_token_id": 50256,
"chunk_size_feed_forward": 0,
"decoder_start_token_id": null,
"do_sample": false,
"early_stopping": false,
"embd_pdrop": 0.1,
"eos_token_id": 50256,
"finetuning_task": null,
"gradient_checkpointing": false,
"id2label": {
"0": "LABEL_0"
},
"initializer_range": 0.02,
"is_decoder": true,
"is_encoder_decoder": false,
"label2id": {
"LABEL_0": 0
},
"layer_norm_epsilon": 1e-05,
"length_penalty": 1.0,
"max_length": 20,
"min_length": 0,
"model_type": "gpt2",
"n_ctx": 1024,
"n_embd": 768,
"n_head": 12,
"n_inner": null,
"n_layer": 6,
"n_positions": 1024,
"no_repeat_ngram_size": 0,
"num_beams": 1,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"pad_token_id": null,
"prefix": null,
"pruned_heads": {},
"repetition_penalty": 1.0,
"resid_pdrop": 0.1,
"return_dict": false,
"sep_token_id": null,
"summary_activation": null,
"summary_first_dropout": 0.1,
"summary_proj_to_labels": true,
"summary_type": "cls_index",
"summary_use_proj": true,
"task_specific_params": {
"text-generation": {
"do_sample": true,
"max_length": 50
}
},
"temperature": 1.0,
"tie_encoder_decoder": false,
"tie_word_embeddings": true,
"tokenizer_class": null,
"top_k": 50,
"top_p": 1.0,
"torchscript": false,
"use_bfloat16": false,
"use_cache": true,
"vocab_size": 50257,
"xla_device": null
},
"decoder_start_token_id": null,
"do_sample": false,
"early_stopping": false,
"encoder": {
"add_cross_attention": false,
"architectures": [
"RobertaForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bad_words_ids": null,
"bos_token_id": 0,
"chunk_size_feed_forward": 0,
"decoder_start_token_id": null,
"do_sample": false,
"early_stopping": false,
"eos_token_id": 2,
"finetuning_task": null,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"initializer_range": 0.02,
"intermediate_size": 3072,
"is_decoder": false,
"is_encoder_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"layer_norm_eps": 1e-05,
"length_penalty": 1.0,
"max_length": 20,
"max_position_embeddings": 514,
"min_length": 0,
"model_type": "roberta",
"no_repeat_ngram_size": 0,
"num_attention_heads": 12,
"num_beams": 1,
"num_hidden_layers": 1,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"pad_token_id": 1,
"prefix": null,
"pruned_heads": {},
"repetition_penalty": 1.0,
"return_dict": false,
"sep_token_id": null,
"task_specific_params": null,
"temperature": 1.0,
"tie_encoder_decoder": false,
"tie_word_embeddings": true,
"tokenizer_class": null,
"top_k": 50,
"top_p": 1.0,
"torchscript": false,
"type_vocab_size": 1,
"use_bfloat16": false,
"use_cache": true,
"vocab_size": 50265,
"xla_device": null
},
"eos_token_id": null,
"finetuning_task": null,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"is_decoder": false,
"is_encoder_decoder": true,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"length_penalty": 1.0,
"max_length": 20,
"min_length": 0,
"model_type": "encoder_decoder",
"no_repeat_ngram_size": 0,
"num_beams": 1,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"pad_token_id": null,
"prefix": null,
"pruned_heads": {},
"repetition_penalty": 1.0,
"return_dict": false,
"sep_token_id": null,
"task_specific_params": null,
"temperature": 1.0,
"tie_encoder_decoder": false,
"tie_word_embeddings": true,
"tokenizer_class": null,
"top_k": 50,
"top_p": 1.0,
"torchscript": false,
"use_bfloat16": false,
"use_cache": true,
"xla_device": null
}
```
Got this error when using `Trainer` to fine-tune this model.
```
Traceback (most recent call last):
File "training/run_training.py", line 200, in <module>
raise e
File "training/run_training.py", line 197, in <module>
main()
File "training/run_training.py", line 163, in main
trainer.train(model_path=model_path)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 768, in train
tr_loss += self.training_step(model, inputs)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1116, in training_step
loss = self.compute_loss(model, inputs)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1142, in compute_loss
outputs = model(**inputs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/distributed.py", line 526, in forward
self.reducer.prepare_for_backward(list(_find_tensors(output)))
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by (1) passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`; (2) making sure all `forward` function outputs participate in calculating loss. If you already have done the above two steps, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).
```
I checked that the `find_unused_parameters` variable is set to true. Also, the above training works well in a single-GPU setting. | 10-20-2020 06:57:08 | 10-20-2020 06:57:08 | @sgugger @patrickvonplaten Can someone please help me?<|||||>I tried running @patrickvonplaten's `bert-bert` Encoder-Decoder summarization script using DDP but got the same error.
Below is the script (`patrick_script.py`). I have modified it a bit to skip some downloads for fast experimentation.
```python
#!/usr/bin/env python3
import os
import nlp
import logging
from transformers import BertTokenizer, EncoderDecoderModel, Trainer, TrainingArguments
logging.basicConfig(level=logging.INFO)
local_rank = int(os.environ.get('LOCAL_RANK', -1))
print("local rank", local_rank)
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# CLS token will work as BOS token
tokenizer.bos_token = tokenizer.cls_token
# SEP token will work as EOS token
tokenizer.eos_token = tokenizer.sep_token
# load train and validation data
# train_dataset = nlp.load_dataset("cnn_dailymail", "3.0.0", split="train")
train_dataset = None
val_dataset = nlp.load_dataset("cnn_dailymail", "3.0.0", split="validation[:1%]")
# # load rouge for validation
# rouge = nlp.load_metric("rouge")
# set decoding params
model.config.decoder_start_token_id = tokenizer.bos_token_id
model.config.eos_token_id = tokenizer.eos_token_id
model.config.max_length = 142
model.config.min_length = 56
model.config.no_repeat_ngram_size = 3
model.early_stopping = True
model.length_penalty = 2.0
model.num_beams = 4
# map data correctly
def map_to_encoder_decoder_inputs(batch):
# Tokenizer will automatically set [BOS] <text> [EOS]
# cut off at BERT max length 512
inputs = tokenizer(batch["article"], padding="max_length", truncation=True, max_length=512)
# force summarization <= 128
outputs = tokenizer(batch["highlights"], padding="max_length", truncation=True, max_length=128)
batch["input_ids"] = inputs.input_ids
batch["attention_mask"] = inputs.attention_mask
batch["decoder_input_ids"] = outputs.input_ids
batch["labels"] = outputs.input_ids.copy()
# mask loss for padding
batch["labels"] = [
[-100 if token == tokenizer.pad_token_id else token for token in labels] for labels in batch["labels"]
]
batch["decoder_attention_mask"] = outputs.attention_mask
assert all([len(x) == 512 for x in inputs.input_ids])
assert all([len(x) == 128 for x in outputs.input_ids])
return batch
# def compute_metrics(pred):
# labels_ids = pred.label_ids
# pred_ids = pred.predictions
#
# # all unnecessary tokens are removed
# pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)
# label_str = tokenizer.batch_decode(labels_ids, skip_special_tokens=True)
#
# rouge_output = rouge.compute(predictions=pred_str, references=label_str, rouge_types=["rouge2"])["rouge2"].mid
#
# return {
# "rouge2_precision": round(rouge_output.precision, 4),
# "rouge2_recall": round(rouge_output.recall, 4),
# "rouge2_fmeasure": round(rouge_output.fmeasure, 4),
# }
# set batch size here
batch_size = 1
# make train dataset ready
# train_dataset = train_dataset.map(
# map_to_encoder_decoder_inputs, batched=True, batch_size=batch_size, remove_columns=["article", "highlights"],
# )
# train_dataset.set_format(
# type="torch", columns=["input_ids", "attention_mask", "decoder_input_ids", "decoder_attention_mask", "labels"],
# )
# same for validation dataset
val_dataset = val_dataset.map(
map_to_encoder_decoder_inputs, batched=True, batch_size=batch_size, remove_columns=["article", "highlights"],
)
val_dataset.set_format(
type="torch", columns=["input_ids", "attention_mask", "decoder_input_ids", "decoder_attention_mask", "labels"],
)
# set training arguments - these params are not really tuned, feel free to change
training_args = TrainingArguments(
output_dir="./",
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
evaluate_during_training=True,
do_train=True,
do_eval=True,
logging_steps=1000,
save_steps=1000,
eval_steps=1000,
overwrite_output_dir=True,
warmup_steps=2000,
save_total_limit=10,
local_rank=local_rank
)
# instantiate trainer
trainer = Trainer(
model=model,
args=training_args,
# compute_metrics=compute_metrics,
train_dataset=val_dataset,
eval_dataset=train_dataset,
)
# start training
trainer.train()
```
I ran it using:
`python -m torch.distributed.launch --nproc_per_node ${GPUS_ALLOWED} --use_env patrick_script.py`
error stack trace
```
I1021 08:03:09.387925 140506964375360 arrow_dataset.py:905] Loading cached processed dataset at /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/d8c27f2d603e2864036d92b0ec379f081896f6c28605ffd2e194c42cd04d48d8/cache-34961a58ac716d5b0323e755fe4ab272.arrow
I1021 08:03:09.390125 139812404635456 filelock.py:274] Lock 139808428511864 acquired on /root/.cache/huggingface/datasets/720d2e20d8dc6d98f21195a39cc934bb41dd0a40b57ea3d323661a7c5d70522c.4fe1f8a4d3f3c15617ba15dd2d93f559a09627c62d0b04e22f89a5131b7bffb9.py.lock
I1021 08:03:09.390361 139812404635456 load.py:331] Found main folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py at /usr/local/lib/python3.6/dist-packages/nlp/datasets/cnn_dailymail
I1021 08:03:09.390519 139812404635456 load.py:344] Found specific version folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py at /usr/local/lib/python3.6/dist-packages/nlp/datasets/cnn_dailymail/d8c27f2d603e2864036d92b0ec379f081896f6c28605ffd2e194c42cd04d48d8
I1021 08:03:09.390662 139812404635456 load.py:357] Found script file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py to /usr/local/lib/python3.6/dist-packages/nlp/datasets/cnn_dailymail/d8c27f2d603e2864036d92b0ec379f081896f6c28605ffd2e194c42cd04d48d8/cnn_dailymail.py
I1021 08:03:09.390966 139812404635456 load.py:371] Found dataset infos file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/dataset_infos.json to /usr/local/lib/python3.6/dist-packages/nlp/datasets/cnn_dailymail/d8c27f2d603e2864036d92b0ec379f081896f6c28605ffd2e194c42cd04d48d8/dataset_infos.json
I1021 08:03:09.391108 139812404635456 load.py:382] Found metadata file for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py at /usr/local/lib/python3.6/dist-packages/nlp/datasets/cnn_dailymail/d8c27f2d603e2864036d92b0ec379f081896f6c28605ffd2e194c42cd04d48d8/cnn_dailymail.json
I1021 08:03:09.391246 139812404635456 filelock.py:318] Lock 139808428511864 released on /root/.cache/huggingface/datasets/720d2e20d8dc6d98f21195a39cc934bb41dd0a40b57ea3d323661a7c5d70522c.4fe1f8a4d3f3c15617ba15dd2d93f559a09627c62d0b04e22f89a5131b7bffb9.py.lock
I1021 08:03:09.394302 139812404635456 info.py:236] Loading Dataset Infos from /usr/local/lib/python3.6/dist-packages/nlp/datasets/cnn_dailymail/d8c27f2d603e2864036d92b0ec379f081896f6c28605ffd2e194c42cd04d48d8
I1021 08:03:09.395207 140506964375360 arrow_dataset.py:563] Set __getitem__(key) output type to torch for ['input_ids', 'attention_mask', 'decoder_input_ids', 'decoder_attention_mask', 'labels'] columns (when key is int or slice) and don't output other (un-formated) columns.
/usr/local/lib/python3.6/dist-packages/transformers/training_args.py:332: FutureWarning: The `evaluate_during_training` argument is deprecated in favor of `evaluation_strategy` (which has more options)
FutureWarning,
I1021 08:03:09.395819 139812404635456 builder.py:169] Overwrite dataset info from restored data version.
I1021 08:03:09.396007 139812404635456 info.py:194] Loading Dataset info from /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/d8c27f2d603e2864036d92b0ec379f081896f6c28605ffd2e194c42cd04d48d8
I1021 08:03:09.396629 139812404635456 builder.py:388] Reusing dataset cnn_dailymail (/root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/d8c27f2d603e2864036d92b0ec379f081896f6c28605ffd2e194c42cd04d48d8)
I1021 08:03:09.396816 139812404635456 builder.py:590] Constructing Dataset for split validation[:1%], from /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/d8c27f2d603e2864036d92b0ec379f081896f6c28605ffd2e194c42cd04d48d8
I1021 08:03:09.400055 139812404635456 info_utils.py:39] All the checksums matched successfully for post processing resources
I1021 08:03:09.434219 139812404635456 arrow_dataset.py:905] Loading cached processed dataset at /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/d8c27f2d603e2864036d92b0ec379f081896f6c28605ffd2e194c42cd04d48d8/cache-34961a58ac716d5b0323e755fe4ab272.arrow
I1021 08:03:09.441347 139812404635456 arrow_dataset.py:563] Set __getitem__(key) output type to torch for ['input_ids', 'attention_mask', 'decoder_input_ids', 'decoder_attention_mask', 'labels'] columns (when key is int or slice) and don't output other (un-formated) columns.
/usr/local/lib/python3.6/dist-packages/transformers/training_args.py:332: FutureWarning: The `evaluate_during_training` argument is deprecated in favor of `evaluation_strategy` (which has more options)
FutureWarning,
/usr/local/lib/python3.6/dist-packages/nlp/utils/py_utils.py:191: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:141.)
return function(data_struct)
Epoch: 0%| | 0/3 [00:00<?, ?it/s/usr/local/lib/python3.6/dist-packages/nlp/utils/py_utils.py:191: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:141.)
return function(data_struct)
Traceback (most recent call last): | 1/67 [00:00<00:40, 1.63it/s]
File "patrick_script.py", line 129, in <module>
trainer.train()
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 763, in train
tr_loss += self.training_step(model, inputs)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1113, in training_step
Traceback (most recent call last):
File "patrick_script.py", line 129, in <module>
loss = self.compute_loss(model, inputs)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1137, in compute_loss
trainer.train()
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 763, in train
outputs = model(**inputs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/distributed.py", line 526, in forward
tr_loss += self.training_step(model, inputs)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1113, in training_step
self.reducer.prepare_for_backward(list(_find_tensors(output)))
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by (1) passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`; (2) making sure all `forward` function outputs participate in calculating loss. If you already have done the above two steps, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).
loss = self.compute_loss(model, inputs)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1137, in compute_loss
outputs = model(**inputs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/distributed.py", line 526, in forward
self.reducer.prepare_for_backward(list(_find_tensors(output)))
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by (1) passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`; (2) making sure all `forward` function outputs participate in calculating loss. If you already have done the above two steps, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).
Epoch: 0%| | 0/3 [00:00<?, ?it/s]
Iteration: 1%|█▊ | 1/67 [00:00<01:02, 1.06it/s]
Traceback (most recent call last):
File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.6/dist-packages/torch/distributed/launch.py", line 261, in <module>
main()
File "/usr/local/lib/python3.6/dist-packages/torch/distributed/launch.py", line 257, in main
cmd=cmd)
subprocess.CalledProcessError: Command '['/usr/local/bin/python', '-u', 'patrick_script.py']' returned non-zero exit status 1.
```
@patrickvonplaten I will be happy to fix this issue if you can give me some lead as to what might be wrong in the code.<|||||>The problem is that in the encoder's forward method, BERT's forward method returns a tuple `(encoder_hidden_state, pooler_output)` [link](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L847).
Now, in the encoder-decoder model, only the `encoder_hidden_state` is used for decoding [link](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_encoder_decoder.py#L406).
Thus no gradient is computed for the `pooler` layer, and the code breaks.
Solution:
Pass `encoder_add_pooling_layer=False` at model initialization.
Model Architecture
```
(encoder): RobertaModel(
(embeddings): RobertaEmbeddings(
(word_embeddings): Embedding(50265, 768, padding_idx=1)
(position_embeddings): Embedding(514, 768, padding_idx=1)
(token_type_embeddings): Embedding(1, 768)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(encoder): RobertaEncoder(
(layer): ModuleList(
(0): RobertaLayer(
(attention): RobertaAttention(
(self): RobertaSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): RobertaSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): RobertaIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): RobertaOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(pooler): RobertaPooler(
(dense): Linear(in_features=768, out_features=768, bias=True)
(activation): Tanh()
)
)
```<|||||>@patrickvonplaten It is interesting that the BERT forward method returns both the `encoder output` and the `pooler output`. How should we handle this in `EncoderDecoderModel`? Should we change the BERT implementation, or can we mark the `pooler` layer as an unused parameter? I'm not sure how to do that.<|||||>Hey @ayubSubhaniya - yeah I see where the bug is coming from!
Your reasoning is 100% correct, great catch!
I think the best solution is to actually pass a `encoder_add_pooling_layer=False` variable at initialization so it looks like:
```python
from transformers import EncoderDecoderModel
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased", encoder_add_pooling_layer=False)
print(model.encoder.pooler) # should give `None`
```<|||||>This is pretty hard to see though, so I think we should add an explicit warning/error message for this case.
I think one thing we should do is add a warning statement to the `__init__` of all `BertModel`, `RobertaModel`, ... (all models that have this pooling-layer structure) that checks a) whether the model is in parallel mode - I think this can be done via `isinstance(m, nn.DataParallel)` - and b) whether `add_pooling_layer=True`. If both a) and b) are True, then display a warning that if the model is used within `EncoderDecoderModel` and errors arise with unused parameters, `encoder_add_pooling_layer=False` should be used.
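A very rough sketch of what such a check could look like (purely illustrative - the helper name, condition and wording are all assumptions, not actual `BertModel` code):
```python
import logging
import torch.nn as nn

logger = logging.getLogger(__name__)

def warn_if_pooler_unused(model: nn.Module, add_pooling_layer: bool) -> None:
    # Hypothetical helper: warn when a pooling layer exists but the model runs under
    # (Distributed)DataParallel, where unused parameters trigger the DDP error above.
    is_parallel = isinstance(model, (nn.DataParallel, nn.parallel.DistributedDataParallel))
    if add_pooling_layer and is_parallel:
        logger.warning(
            "The pooler layer is unused by EncoderDecoderModel; if you see 'unused "
            "parameters' errors under DDP, initialize with encoder_add_pooling_layer=False."
        )
```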
I think this is the best we can do to help the user. It would be amazing if you want to open a PR for this -> otherwise I'll add it to my ToDo List :-) <|||||>Will create a PR for this thanks 🙂 <|||||>```
from transformers import EncoderDecoderModel
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased", encoder_add_pooling_layer=False)
print(model.encoder.pooler) # should give `None`
```
This does print `None`, but after
`model.save_pretrained("model")`
the error returns, and `print(model.encoder.pooler)` prints:
```
LongformerPooler(
(dense): Linear(in_features=768, out_features=768, bias=True)
(activation): Tanh()
)
```<|||||>@alexyalunin Yes, I faced the same issue too; the workaround is to set the pooler to `None` explicitly:
```model.encoder.pooler = None```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 7,923 | closed | Loading a pytorch quantized model | Hi, I quantized a pre-trained model via **torch.quantization.quantize_dynamic** and saved it using **save_pretrained**.
based on https://colab.research.google.com/github/pytorch/tutorials/blob/gh-pages/_downloads/dynamic_quantization_bert_tutorial.ipynb#scrollTo=foe-dVxHIgOC .
On reloading it using **from_pretrained** again the model blows up to its original size and the resultant predictions are garbage as well.
Is there a way to properly save the quantized weights and reload them?
Opening an issue because I couldn't find a resolution for the same.
@mfuntowicz @VictorSanh @patrickvonplaten | 10-20-2020 06:08:17 | 10-20-2020 06:08:17 | Hello @amanpreet692
Are you making sure you are reloading the saved (quantized) weights into a quantized model?
A dense checkpoint and its quantized counterpart have two different state dicts and I suspect you are re-initializing random matrices because when loading into from_pretrained it doesn't find the right keys in the state dict.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
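A minimal sketch of the round trip Victor describes, assuming dynamic quantization of the Linear layers as in the linked tutorial (the model name and file path are just placeholders):
```python
import torch
from transformers import BertForSequenceClassification

# Quantize, then save only the quantized state dict (rather than save_pretrained).
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
quantized = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)
torch.save(quantized.state_dict(), "quantized_bert.pt")

# To reload: rebuild the dense model, quantize it the same way, then load the quantized weights.
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
quantized = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)
quantized.load_state_dict(torch.load("quantized_bert.pt"))
```
The key point is that the quantized weights are loaded into an already-quantized module, so the state-dict keys line up; `from_pretrained` on a dense model cannot find them.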
|
transformers | 7,922 | closed | Is it possible to recommend the deployment method for implementing trained mode | # ❓ Questions & Help
I trained a model with the `Trainer` class. Could you recommend a high-performance deployment approach? | 10-20-2020 05:57:09 | 10-20-2020 05:57:09 | I found this: https://pytorch.org/blog/model-serving-in-pyorch . It is very useful!
transformers | 7,921 | closed | [testing] experiment with a different way of skipping torch-only test modules | This is an experiment.
I was getting irked by `require_pytorch` followed by yet another `if is_torch_available()` code in many test modules, specifically this part:
```
@require_torch
class BARTModelTest(ModelTesterMixin, unittest.TestCase):
all_model_classes = (
(BartModel, BartForConditionalGeneration, BartForSequenceClassification, BartForQuestionAnswering)
if is_torch_available()
else ()
)
all_generative_model_classes = (BartForConditionalGeneration,) if is_torch_available() else ()
```
`require_torch` doesn't stop the parser from compiling the rest of the code, so the ugly workaround is used.
I tried to find a better solution, to tell the parser to ignore the whole class, since we did tell it to skip it - but didn't succeed.
But then I noticed that all of the module classes/tests require pytorch, so I thought: why not skip the whole module and not need to repeatedly ask if pytorch is available? That is:
```
if not is_torch_available():
raise unittest.SkipTest("Skip the whole module as it requires pytorch")
```
and then we can code worry-free, removing any torch checks. We can then alias this whole thing in `testing_utils.py` and call as something like:
```
from testing_utils import skip_module_require_torch
skip_module_require_torch()
```
This PR is such possible solution applied to just one pure pytorch test module. To see it in action, run:
```
USE_TF=1 pytest tests/test_modeling_bart.py
```
The only drawback is that it doesn't count/report any of the skipped tests, so we get just:
```
collected 0 items / 1 skipped
```
from pytest. But this will only happen on _tf CI job, so it doesn't matter anyway.
We can do exactly the same for tf-only tests, with its own `skip_module_require_tf`.
As a bonus the test suite will run marginally faster for those pt/tf-only jobs, as it won't need to load/parse any modules - should be a very negligible improvement.
The current way is just fine. But I thought I'd share my experiment in case perhaps it'd lead to a more readable code.
Thank you for reading.
@LysandreJik, @sgugger, @sshleifer, @patrickvonplaten | 10-20-2020 05:53:32 | 10-20-2020 05:53:32 | Great!
Do you want us to make this a "model" file first, merge it, see how it feels and then replicate to the rest? Or should I just proceed with the rest?
<|||||>I think you can just proceed.<|||||>Bummer! If I make it into a function and thus remove `if ...` - isort and flake now complain:
```
$ flake8 tests
tests/test_modeling_bart.py:37:1: E402 module level import not at top of file
tests/test_modeling_bart.py:39:1: E402 module level import not at top of file
tests/test_modeling_bart.py:57:1: E402 module level import not at top of file
$ isort --check-only tests
ERROR: /mnt/nvme1/code/huggingface/transformers-torch-req/tests/module_skip_pytorch.py Imports are incorrectly sorted and/or formatted.
```
so that would require adding #noqa to all the subsequent imports ;( which leaves the code ugly just in a different way.
These tools have so little flexibility. They are supposed to make things better but lead to a much uglier code :(
<|||||>meh! I call this experiment a failure thanks to `make quality` oppression.<|||||>I will just leave the helper I wrote here, in case someone figures out a magical way to solve the ugliness.
```
def test_module_skip_require_pytorch():
"""
Call this one on top of test module to skip the whole module if pytorch is not available:
test_module_skip_require_pytorch()
"""
if not _torch_available:
raise unittest.SkipTest("Skip the whole module as it requires pytorch")
``` |
transformers | 7,920 | closed | what's the values of start_positon and end_position while the answer is impossible in run_squad.py | # ❓ Questions & Help
### Details
I want to know what the values of `start_position` and `end_position` are when the answer is **impossible** in `run_squad.py`.
`start_positions = end_postions = -1` OR `start_positions = end_postions = 0` | 10-20-2020 02:07:32 | 10-20-2020 02:07:32 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,919 | closed | Expose the Flax code quality problems | # What does this PR do?
This PR is just there to expose the problems related to the code quality with objects introduced in the flax PR. #7914 contains a fix to a few of them, maybe all. | 10-19-2020 21:48:25 | 10-19-2020 21:48:25 | |
transformers | 7,918 | closed | Add Flax dummy objects | # What does this PR do?
Following the first JAX models, this PR adds the dummy objects to make sure the library always has the same objects available.
cc @mfuntowicz for information
| 10-19-2020 21:44:00 | 10-19-2020 21:44:00 | |
transformers | 7,917 | closed | New run glue script | # What does this PR do?
This PR cleans up the `run_glue.py` script to use the Datasets library. Along the way it adds a few fixes in Trainer. The script supports all glue tasks as well as custom user tasks (passed along with a training and validation file in csv or json format). It has been tested on the following setups:
- single GPU
- multi-GPU with DataParallel
- multi-GPU with DistributedDataParallel
- TPU
The README has been updated to reflect the changes, there is just one breaking change from before which is that `data_dir` is not an accepted argument anymore (since Datasets will take care of downloading the data files). | 10-19-2020 21:19:12 | 10-19-2020 21:19:12 | Should we start thinking about automating the creation of the metadata block for the model's model card?
here for instance we'd already have this info:
```
---
datasets:
- mrpc
metrics:
- f1
finetuned_from: bert-base-cased
---
```<|||||>We could think of something like that and add a blank model card to be completed by the user in the final checkpoint. We could also include the results of the last evaluation if there is one. |
transformers | 7,916 | closed | TypeError: __init__() got an unexpected keyword argument 'vocab_file' in transformers/tokenization_gpt2.py", line 380 | ## Environment info
- `transformers` version: 3.3.1
- `tokenizers` version: 0.9.2
- Platform: Linux-3.10.0-1062.4.1.el7.x86_64-x86_64-with-redhat-7.7-Maipo
- Python version: 3.7.6
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): 2.3.1 (False)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help @mfuntowicz
## Information
Model I am using (Bert, XLNet ...): RoBERTa-base
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
fairseq
## To reproduce
I use **RobertaTokenizerFast** and it seems there is an argument-name mismatch.
Steps to reproduce the behavior:
1. self.tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', cache_dir=args.cache_dir)
In transformers.tokenization_gpt2.py L376 it is:
```python
ByteLevelBPETokenizer(
    vocab_file=vocab_file,
    merges_file=merges_file,
    add_prefix_space=add_prefix_space,
    trim_offsets=trim_offsets,
)
```
But in tokenizers.implementations.ByteLevelBPETokenizer it is expected to be `vocab`.
## Expected behavior
```
File "/zfs1/hdaqing/rum20/kp/fairseq-kpg/fairseq/data/encoders/hf_bpe.py", line 31, in __init__
  self.tokenizer = RobertaTokenizerFast.from_pretrained(args.pretrained_model, cache_dir=args.cache_dir)
File "/ihome/hdaqing/rum20/anaconda3/envs/kp/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 1428, in from_pretrained
  return cls._from_pretrained(*inputs, **kwargs)
File "/ihome/hdaqing/rum20/anaconda3/envs/kp/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 1575, in _from_pretrained
  tokenizer = cls(*init_inputs, **init_kwargs)
File "/ihome/hdaqing/rum20/anaconda3/envs/kp/lib/python3.7/site-packages/transformers/tokenization_roberta.py", line 380, in __init__
  **kwargs,
File "/ihome/hdaqing/rum20/anaconda3/envs/kp/lib/python3.7/site-packages/transformers/tokenization_gpt2.py", line 380, in __init__
  trim_offsets=trim_offsets,
TypeError: __init__() got an unexpected keyword argument 'vocab_file'
```
| 10-19-2020 20:00:51 | 10-19-2020 20:00:51 | same issue<|||||>Hello! I think this is due to a mismatch between your `transformers` and `tokenizers` versions. `transformers` version v3.3.1 expects `tokenizers == 0.8.1.rc2`.
If you want to use `tokenizers == 0.9.2` you should work on the current `master` branch or wait for version v3.4.0, which should be released sometime today.<|||||>Thank you! I upgraded both and it works.
transformers | 7,915 | closed | [EncoderDecoder] Fix Typo | # What does this PR do?
Remove dead code
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 10-19-2020 19:43:46 | 10-19-2020 19:43:46 | |
transformers | 7,914 | closed | [flax] fix repo_check | Unless this is actually a problem, this PR adds `modeling_flax_utils` to the ignore list; otherwise the checker currently expects `tests/test_modeling_flax_utils.py` to exist for this module.
It also adds the 2 new tests that don't run common tests to `TEST_FILES_WITH_NO_COMMON_TESTS`
For context please see: https://github.com/huggingface/transformers/pull/3722#issuecomment-712360415
now check_repo is happy.
@sgugger | 10-19-2020 19:25:55 | 10-19-2020 19:25:55 | I don't understand why you have a problem. `make quality` runs fine for me on master (and the CI is also happy).
**Edit:** Not saying that this is useless, I just want to understand why it fails for you and not for me :-)<|||||>Something is wrong on your side and CI.
`check_repo.py` actually did its job correctly and reported problems that this PR fixes. Have a look at the fix and tell me if that checker shouldn't have caught it.
Specifically:
* `test_modeling_flax_bert.py` and `test_modeling_flax_roberta.py` don't run common tests, and yet they weren't added to the ignore list
* there is no `tests/test_modeling_flax_utils.py` so `modeling_flax_utils` has to be on the other ignore list
I have no idea why you and CI don't reproduce these failures.<|||||>I'm in the middle of something else right now, but will investigate once I'm done. I'm unsure of why the CI hasn't been more angry with the jax PR myself. But I'd like to understand why it's not failing for me and the CI before taking the fix if that makes sense.<|||||>Absolutely, @sgugger. I will try to poke and see if the script behaves differently for some reason. I will report back if I find the culprit.
It should be safe for you to merge this as the lack of it may impacts other devs, I posted all the details why it's correct in here https://github.com/huggingface/transformers/pull/7914#issuecomment-712396143
It's your call.<|||||>Found part of the culprit - my py37 env doesn't catch the problem, whereas py38 does - now need to figure out whether it's some package difference or python itself. <|||||>Oh, interesting! I'm indeed on Python 3.7.9<|||||>and so is CI
Using my mightly https://github.com/stas00/conda-tools/blob/master/conda-env-compare.pl - I should get to the root of it in a few minutes<|||||>I downgraded the the py38 env to py37 and it doesn't detect the problem anymore. Upgraded it back to py38 via conda and it fails now too! bummer - so it must have been some package. I need to dig more.
I'd check `jax` and `flax` since the new flax code depends on it.<|||||>I'm confused by your report: by "it fails now too!" do you mean you don't see the problem anymore?
**Edit:** Think I've found the issue. It's because the presence/absence of jax/flax will change what's in the __init__. And neither me nor the CI have it installed.<|||||>If my reasoning is correct, #7919 should be red. We can then add it to your fixes and merge all of this together.<|||||>Yes, this is the culprit.
If you `pip install jax jaxlib flax` you should be able to reproduce the problem.
So basically the validation script is as good as the preinstalled pre-requisites allow it to be, therefore to move forward to do proper testing we need to have a prerequisites set that contains **all possible external packages** used by the core library.
Perhaps we need to change `setup.py` to add:
`extras["all"] = list all groups here`
and have the `check_code_quality` CI job installing `pip install -e .[all].
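A rough sketch of what that `setup.py` change could look like (the extras group names here are made up for illustration, not the actual ones):
```python
# Hypothetical sketch for setup.py: aggregate every optional-dependency group so the
# code-quality CI job can `pip install -e .[all]` and the repo checker sees every
# framework-dependent module.
extras = {
    "torch": ["torch"],
    "tf": ["tensorflow"],
    "flax": ["jax", "jaxlib", "flax"],
}
extras["all"] = sum(extras.values(), [])
```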
But specifically for this issue https://github.com/huggingface/transformers/pull/7919 will do the trick. I merged it here as you suggested.<|||||>> I'm confused by your report: by "it fails now too!" do you mean you don't see the problem anymore?
Yeah, I was trying to figure out the difference and there were too many differences in installed modules, so I downgraded to py37, then upgraded to py38 and lost an environment that was good for the purpose of this issue. I eventually recovered it. I need to remember to back up conda envs before I try to mess with them :( |
transformers | 7,913 | closed | `add_prefix_space=True` option in the BPE tokenizer | Hello,
I understand that when I add the `add_prefix_space=True` option in the BPE tokenizer statement, the tokenizer will add a space in the beginning of every sequence.
Is there some specific advantages of using the `add_prefix_space=True` option for BPE tokenizer (compared to when I don't use the option), given that all my sequences start without a space in the beginning.?
Thanks, | 10-19-2020 19:23:25 | 10-19-2020 19:23:25 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,912 | closed | run_tf_text_classification.py giving "ValueError: too many values to unpack" | I am trying to run this script for token classification
https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_tf_text_classification.py
According to the instructions here
https://github.com/huggingface/transformers/tree/master/examples/text-classification
I formatted the data according to the instructions
>the CSV files must have a header corresponding to the column names and not more than three columns: one column for the id, one column for the text and another column for a second piece of text in case of an entailment classification for example.
However, I am getting this error
> Dataset csv downloaded and prepared to /root/.cache/huggingface/datasets/csv/default-57112360018dd326/0.0.0/49187751790fa4d820300fd4d0707896e5b941f1a9c644652645b866716a4ac4. Subsequent calls will reuse this data.
> 10/19/2020 18:03:50 - INFO - filelock - Lock 140266200732560 released on /root/.cache/huggingface/datasets/_root_.cache_huggingface_datasets_csv_default-57112360018dd326_0.0.0_49187751790fa4d820300fd4d0707896e5b941f1a9c644652645b866716a4ac4.lock
> 0% 0/5 [00:00<?, ?ba/s]Traceback (most recent call last):
> File "/content/transformers/examples/text-classification/run_tf_text_classification.py", line 292, in <module>
> main()
> File "/content/transformers/examples/text-classification/run_tf_text_classification.py", line 231, in main
> max_seq_length=data_args.max_seq_length,
> File "/content/transformers/examples/text-classification/run_tf_text_classification.py", line 68, in get_tfds
> batched=True,
> File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py", line 1256, in map
> update_data=update_data,
> File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py", line 156, in wrapper
> out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
> File "/usr/local/lib/python3.6/dist-packages/datasets/fingerprint.py", line 163, in wrapper
> out = func(self, *args, **kwargs)
> File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py", line 1517, in _map_single
> batch, indices, check_same_num_examples=len(self.list_indexes()) > 0, offset=offset
> File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py", line 1435, in apply_function_on_filtered_inputs
> function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
> File "/content/transformers/examples/text-classification/run_tf_text_classification.py", line 66, in <lambda>
> padding="max_length",
> File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py", line 2323, in batch_encode_plus
> **kwargs,
> File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py", line 560, in _batch_encode_plus
> ids, pair_ids = ids_or_pair_ids
> ValueError: too many values to unpack (expected 2)
> 0% 0/5 [00:00<?, ?ba/s]
It looks like the issue may be the script itself. I was having a previous issue running the script, and it looks like it was due to the datasets library
https://github.com/huggingface/datasets/issues/705#event-3839135529
It looks like the error is now with the script, or possibly the tokenizer. It sort of looks like the training wants only two types of inputs, but is being passed all of the inputs from `batch_encode_plus`, which may be more than two (token ids, attention mask, token type ids, etc.)
## Environment info
- `transformers` version:
- Platform: colab
- Python version: version 3, colab default
- PyTorch version (GPU?): colab default
- Tensorflow version (GPU?): colab default
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
I'm not sure, most likely the bug seems to be due to the example script itself, but could be the dataset, or tokenizer.
## Information
Model I am using: Bert, specifically scibert
The problem arises when using:
* [x ] the official example scripts: (give details below)
The tasks I am working on is:
* [x ] my own task or dataset: I am working with the chemprot dataset, for token classification. I followed the instructions to have the data in a csv file, with two columns (one for label, another for text), and headers.
## To reproduce
Here is a colab notebook of the issue.
https://colab.research.google.com/drive/1r3XCKYA8RBtfYmU2jqHVJT-uTt1ii04S?usp=sharing
## Expected behavior
Should train without error. | 10-19-2020 18:34:04 | 10-19-2020 18:34:04 | Hey, I am getting the same error.
I am using a three column CSV file which looks like this,
data.csv
label,sent1,sent2
0,he,so
1,yes,why
Any help would be appreciated. <|||||>The error is caused by the following function
```python
transformed_ds[k] = ds[k].map(
    lambda example: tokenizer.batch_encode_plus(
        (example[features_name[0]], example[features_name[1]]),
        truncation=True,
        max_length=max_seq_length,
        padding="max_length",
    ),
    batched=True,
)
```
When I set `batched=False`, it could pass; however, another error arises. Any idea? @jplu<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 7,911 | closed | [Docstring] fix t5 training docstring | # What does this PR do?
Fixes T5 docstring according to recent tokenizer changes.
Fixes #7904
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 10-19-2020 17:45:26 | 10-19-2020 17:45:26 | |
transformers | 7,910 | closed | [T5] Ignore sentinel indices for unsupervised denoising / masking objective? | The [docs](https://huggingface.co/transformers/model_doc/t5.html#training) state that the masked language modeling objective is simply
```
input_ids = tokenizer.encode('The <extra_id_0> walks in <extra_id_1> park', return_tensors='pt')
labels = tokenizer.encode('<extra_id_0> cute dog <extra_id_1> the <extra_id_2> </s>', return_tensors='pt')
model(input_ids=input_ids, labels=labels)
```
I was wondering if I need to manually set the `additional_special_tokens_ids` (corresponding to the `<extra_id_#>` sentinels) in the `labels` to `-100` during training so that they are ignored by the loss, as I believe would be the case for the `[MASK]` tokens in BERT? It seems that at least the `pad_token_id` is ignored in [`examples/seq2seq`](https://github.com/huggingface/transformers/blob/a09fe140c1c059baf05c4f97e5b4e83c719608db/examples/seq2seq/finetune.py#L153), but it's not clear if this ought to be true for the sentinels as well. My suspicion is _no_, but since there's no canonical MLM code for T5, I figured it was worth checking.
(I asked this in the forums and in a somewhat related issue, but was recommended to post here & tag @patrickvonplaten / @thomwolf) | 10-19-2020 16:16:36 | 10-19-2020 16:16:36 | Hey @ahoho - good question!
I'm pretty confident that you should mask all sentinel tokens (with -100) and only compute the loss on the "real" labels, i.e. "cute dog", "the" and "</s>".
Also they are definitely not automatically ignored as is done for the pad_token_id in `examples/seq2seq`
I could not find a more detailed explanation in the paper - so maybe @craffel could take a quick look as well and confirm (hope it's fine to tag you here Colin)<|||||>No need to treat the sentinel tokens specially (masking out their loss or otherwise). The model is trained to output both the sentinel tokens and the filled-in blanks. |
transformers | 7,909 | closed | pegasus/cnn_dm 12-2 distillation performing poorly | ## Environment info
- `transformers` version: 3.1.0
- Platform: Linux-5.4.0-1028-aws-x86_64-with-debian-buster-sid
- Python version: 3.7.6
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@sshleifer
## Information
I am trying to distill the Pegasus model to reduce its runtime and memory requirements. I am following the **No Teacher Distillation** approach. However, the model generates poor-quality text.
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name): CNN
* [ ] my own task or dataset: (give details below)
## To reproduce
I have trained the model using below command:
**Download data:**
wget https://cdn-datasets.huggingface.co/summarization/cnn_dm_v2.tgz
tar -xzvf cnn_dm_v2.tgz # empty lines removed
mv cnn_cln cnn_dm
**Command to train:**
python finetune.py --learning_rate=3e-5 --do_train --do_predict --val_check_interval 0.25 --n_val 500 --num_train_epochs 6 --freeze_encoder --freeze_embeds --data_dir ./cnn_dm/ --max_target_length 142 --val_max_target_length=142 --train_batch_size=1 --eval_batch_size=1 --gradient_accumulation_steps=256 --model_name_or_path sshleifer/student_pegasus_cnn_12_2 --tokenizer_name google/pegasus-cnn_dailymail --warmup_steps 500 --output_dir distilpegasus-cnn-12-2 --gpus 1 --adafactor --num_workers=0 --fp16_opt_level=O1 --fp16
**Inference code:**
```
from transformers import PegasusForConditionalGeneration, PegasusTokenizer, PegasusConfig
import torch
PEGASUS_MODEL = '/home/ubuntu/finetune/transformers/examples/seq2seq/distilpegasus-cnn-12-2/best_tfmr'
PEGASUS_TOKENIZER = 'google/pegasus-cnn_dailymail'
class PegasusSummarizer:
def __init__(self):
self.torch_device = 'cpu'
self.tokenizer = PegasusTokenizer.from_pretrained(PEGASUS_TOKENIZER)
self.model = PegasusForConditionalGeneration.from_pretrained(PEGASUS_MODEL).to(self.torch_device)
def summarize(self, text):
src_text = text
batch = self.tokenizer.prepare_seq2seq_batch([src_text],truncation=True,padding='longest').to(self.torch_device)
translated = self.model.generate(**batch)
tgt_text = self.tokenizer.batch_decode(translated, skip_special_tokens=True)
return tgt_text
summarizer = PegasusSummarizer()
print(summarizer.summarize('''(CNN)For the first time in eight years, a TV legend returned to doing what he does best. Contestants told to "come on down!" on the April 1 edition of "The Price Is Right" encountered not host Drew Carey but another familiar face in charge of the proceedings. Instead, there was Bob Barker, who hosted the TV game show for 35 years before stepping down in 2007. Looking spry at 91, Barker handled the first price-guessing game of the show, the classic "Lucky Seven," before turning hosting duties over to Carey, who finished up. Despite being away from the show for most of the past eight years, Barker didn't seem to miss a beat.'''))
```
**Output:** ['"It\'s time for the first time in a five-year anniversary of the show.']
**Output of google/pegasus-cnn_dailymail model**:['Barker hosted "The Price Is Right" for 35 years.<n>He stepped down in 2007.']
test_results.txt output:
src_pad_frac = tensor(0., device='cuda:0')
src_pad_tok = tensor(0, device='cuda:0')
step_count = 26
test_avg_gen_len = 48.63716275021758
test_avg_gen_time = 1.3503953615824382
test_avg_loss = 3.6937525272369385
test_avg_rouge1 = 19.983542428198433
test_avg_rouge2 = 4.130034786771105
test_avg_rougeL = 14.352700217580503
test_avg_rougeLsum = 18.460456248912102
test_loss = tensor(3.6938, device='cuda:0')
test_rouge2 = tensor(4.1300, device='cuda:0')
tpb = tensor(511, device='cuda:0')
val_avg_gen_len = 50.144
val_avg_gen_time = 1.513235685825348
val_avg_loss = 3.77506422996521
val_avg_rouge1 = 16.9548154
val_avg_rouge2 = 3.1666046
val_avg_rougeL = 12.980990400000001
val_avg_rougeLsum = 15.404284
val_loss = tensor(3.7751, device='cuda:0')
val_rouge2 = tensor(3.1666, device='cuda:0')
## Expected behavior
I expect the output to be much cleaner and the ROUGE score to be higher. Any help in this regard would be greatly appreciated.
I am trying to retrain the model by removing **--freeze_encoder**.
| 10-19-2020 15:48:42 | 10-19-2020 15:48:42 | I would also try without `--fp16`<|||||>Sure @sshleifer I will try without `--fp16` and update the results here. Thanks for looking into this.<|||||>Hi @sshleifer
I ran the below command for distillation (without --fp16 as you suggested):
`python finetune.py --learning_rate=3e-5 --do_train --do_predict --val_check_interval 0.25 --n_val 500 --num_train_epochs 6 --freeze_embeds --data_dir ./cnn_dm/ --max_target_length 142 --val_max_target_length=142 --train_batch_size=1 --eval_batch_size=1 --gradient_accumulation_steps=256 --model_name_or_path sshleifer/student_pegasus_cnn_12_2 --tokenizer_name google/pegasus-cnn_dailymail --warmup_steps 500 --output_dir distilpegasus-cnn-12-2 --gpus 1 --num_workers=0 --adafactor --freeze_encoder --task summarization --dropout 0.1 --attention_dropout 0.1 --label_smoothing 0.1 `
However, the rouge scores are not improving even after 1 epoch
`{
"val": [
{
"val_avg_loss": 940.3131713867188,
"val_avg_rouge1": 0.0,
"val_avg_rouge2": 0.0,
"val_avg_rougeL": 0.0,
"val_avg_rougeLsum": 0.0,
"val_avg_gen_time": 1.9830520153045654,
"val_avg_gen_len": 128.0,
"step_count": 1
},
{
"val_avg_loss": 457.8860168457031,
"val_avg_rouge1": 0.8307167999999999,
"val_avg_rouge2": 0.0106524,
"val_avg_rougeL": 0.8102172,
"val_avg_rougeLsum": 0.8177266,
"val_avg_gen_time": 1.9989106116294861,
"val_avg_gen_len": 128.0,
"step_count": 2
},
{
"val_avg_loss": 297.9767761230469,
"val_avg_rouge1": 2.7392655999999995,
"val_avg_rouge2": 0.08615479999999999,
"val_avg_rougeL": 2.4773216,
"val_avg_rougeLsum": 2.6349664,
"val_avg_gen_time": 1.7901806454658509,
"val_avg_gen_len": 93.732,
"step_count": 3
},
{
"val_avg_loss": 272.0320129394531,
"val_avg_rouge1": 4.0338778,
"val_avg_rouge2": 0.2913826,
"val_avg_rougeL": 3.4839722,
"val_avg_rougeLsum": 3.7919970000000003,
"val_avg_gen_time": 1.4304678964614868,
"val_avg_gen_len": 47.67,
"step_count": 4
},
{
"val_avg_loss": 259.57611083984375,
"val_avg_rouge1": 7.9237036000000005,
"val_avg_rouge2": 0.7740864000000001,
"val_avg_rougeL": 6.5176862,
"val_avg_rougeLsum": 7.265688,
"val_avg_gen_time": 1.3813148093223573,
"val_avg_gen_len": 37.046,
"step_count": 5
}
]
}
`
After 1 epoch, the rouge2 score is 0.77. Could you please help if I am doing something wrong here?
Thanks in advance for your help.
Regards,
Karthik
<|||||>+ Note that scores are improving, just very slowly.
+ I have not had good luck with `sshleifer/student_pegasus_cnn_12_2`, I'd try to make your own student with a full encoder and a 4+ layer decoder starting. Using, for example:
```bash
python make_student.py sshleifer/pegasus-cnn-ft-v2 -save_path student_peg_cnn_16_4 -e 16 -d 4
```
Here is the [wandb log](https://wandb.ai/sshleifer/pegasus_ft/runs/32ov7btf?workspace=user-) for a run that used `student_peg_cnn_16_4`

I started at `--max_target_length 56` and then finetuned more with `--max_target_length 142`. That log is the first run. The second run is [here](https://wandb.ai/sshleifer/pegasus_ft/runs/2z1t4r0t?workspace=user-)
FWIW, XSUM trains much faster!<|||||>Thanks @sshleifer for your inputs.
I am using this model (https://huggingface.co/sshleifer/distill-pegasus-cnn-16-4) which has 16 encoders and 4 decoders. I am trying to reduce the inference runtime of the model - for this reason, I am trying distillation with lesser encoders and decoders.
Could you please suggest if I should try something different to reduce the inference runtime?
Regards,
Karthik
<|||||>Try generating with the 16/4 model and `num_beams=2`.
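A minimal sketch of that suggestion, reusing the setup from the inference code above (the checkpoint is the 16/4 distilled model already linked in this thread; `src_text` is a placeholder):
```python
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_name = "sshleifer/distill-pegasus-cnn-16-4"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

src_text = "(CNN) Article text to summarize ..."
batch = tokenizer.prepare_seq2seq_batch([src_text], truncation=True, padding="longest")
# Fewer beams trade a little ROUGE for noticeably faster generation.
summary_ids = model.generate(**batch, num_beams=2)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))
```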
<|||||>Thanks @sshleifer for your suggestion. This improved the runtime. Please let me know if you have more such ideas.
<|||||>Besides that, all that's easy is to make your input documents shorter, or make your generations shorter (with min_length, max_length).
|
transformers | 7,908 | closed | [Model] M2M-100 Multilingual machine translation | # 🌟 New model addition
## Model description
Facebook AI is introducing M2M-100, the first multilingual machine translation (MMT) model that translates between any pair of 100 languages without relying on English data.
## Open source status
* [x] the model implementation is available: (give details) https://github.com/pytorch/fairseq/tree/master/examples/m2m_100?fbclid=IwAR2Oqew-PAwZpTmHMrq_yiXN2dwdzzbTMZ-4HfbNKfdoZ_M5TpQiPY3dYFo
* [x] the model weights are available: (give details) https://dl.fbaipublicfiles.com/m2m_100/12b_last_checkpoint.pt
* [ ] who are the authors: (mention them, if possible by @gh-username)
| 10-19-2020 15:19:16 | 10-19-2020 15:19:16 | This model is very big. Is there a good way to prune it?<|||||>Moving to #8054 which is a duplicate (that I created)!<|||||>> This model is very big. Is there a good way to prune it?
@Bachstelze Did you find any ways to distill or prune such a large model?<|||||>@robotsp
There is a smaller version: https://huggingface.co/alirezamsh/small100
[SMaLL-100: Introducing Shallow Multilingual Machine Translation Model for Low-Resource Languages](https://aclanthology.org/2022.emnlp-main.571.pdf) |
transformers | 7,907 | closed | Reproducing Bart Xsum from Bart Large | Summarization: @sshleifer
Bart: @sshleifer
## Information
I'm trying to finetune bart large for Xsum and unable to reproduce the results from the paper.
When I try eval with facebook/bart-large-xsum, I get R1=45.3595, RLSum=37.1717 so I assume my eval script is working ok. For finetuning bart large, I use the same config as bart-large-xsum with vocab size=50265 to enable starting from bart-large. However, I am unable to reach the same scores. The best I have is R1=45.4188, RLSum=36.6986 with LR=1.2e-4, gbs=128 and --max_target_length=60 --max_source_length=1024 --val_check_interval 0.1 --val_max_target_length=60 --warmup_steps 50 --max_steps 5000.
How can I reproduce the results?
| 10-19-2020 14:57:08 | 10-19-2020 14:57:08 | I don't know the answer to this question. Your numbers are close enough that all I can suggest is to either try fairseq's [command](https://github.com/pytorch/fairseq/blob/master/examples/bart/README.summarization.md#4-fine-tuning-on-cnn-dm-summarization-task) or look at differences between our command and fairseq.
<|||||>> Summarization: @sshleifer Bart: @sshleifer
>
> ## Information
> I'm trying to finetune bart large for Xsum and unable to reproduce the results from the paper.
>
> When I try eval with facebook/bart-large-xsum, I get R1=45.3595, RLSum=37.1717 so I assume my eval script is working ok. For finetuning bart large, I use the same config as bart-large-xsum with vocab size=50265 to enable starting from bart-large. However, I am unable to reach the same scores. The best I have is R1=45.4188, RLSum=36.6986 with LR=1.2e-4, gbs=128 and --max_target_length=60 --max_source_length=1024 --val_check_interval 0.1 --val_max_target_length=60 --warmup_steps 50 --max_steps 5000.
>
> How can I reproduce the results?
Hi @swethmandava. I'm trying to reproduce the result with Transformers. Would you mind sharing your fine-tuning script? |
transformers | 7,906 | closed | labels and decoder_input_ids to Glossary | Completes the glossary with entries for `labels` and `decoder_input_ids`.
Closes https://github.com/huggingface/transformers/issues/7865
Pinging @sshleifer and @patrickvonplaten for advice regarding the `decoder_input_ids`, @sgugger for docs. | 10-19-2020 14:39:50 | 10-19-2020 14:39:50 | |
transformers | 7,905 | closed | [RAG] How to extract generated strings from `RetrievAugLMMarginOutput` | # ❓ Questions & Help
How to extract generated strings from `RetrievAugLMMarginOutput`?
## Details
When using `RagSequenceForGeneration` and the `retriever` separately, we can't use `model.generate` (see #7829), and calling `model.__call__` directly returns a `RetrievAugLMMarginOutput`. I am not able to find a way to extract `generated_ids` from it.
 | 10-19-2020 14:26:18 | 10-19-2020 14:26:18 | @patrickvonplaten can you please help
Using embedding, retrieval and generation separately for RagSequence is not yet available sadly.
You should take a look at the `generate()` function of `RagSequenceForGeneration` for more detail on how to run it separately yourself.<|||||>Thanks @patrickvonplaten. I think we (haystack) will wait for the implementation in transformers and use only RagToken for now.
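For reference, this is roughly the RagToken path we'd rely on, where `generate()` runs question encoding, retrieval and generation in one call (the nq checkpoint and dummy index are just for illustration):
```python
from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration

tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")
retriever = RagRetriever.from_pretrained(
    "facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True
)
model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever)

input_dict = tokenizer.prepare_seq2seq_batch(
    "who holds the record in 100m freestyle", return_tensors="pt"
)
# No need to touch RetrievAugLMMarginOutput: generate() returns token ids directly.
generated_ids = model.generate(input_ids=input_dict["input_ids"])
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
```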
Please let me know whether I should keep this open (in case you plan to add this functionality in the future) or close it.
cc: @tholor <|||||>Closing |
transformers | 7,904 | closed | T5 Docs training example has shifted labels | https://github.com/huggingface/transformers/blob/master/docs/source/model_doc/t5.rst#L42
Here is that link quoted:
#### Unsupervised denoising training
In teacher-forcing style, the target sequence is then appended by the EOS token and corresponds to the `labels`.
In this setup spans of the input sequence are masked by so-called sentinel tokens (*a.k.a* unique mask tokens) and the output sequence is formed as a concatenation of the same sentinel tokens and the *real* masked tokens.
Each sentinel token represents a unique mask token for this sentence and should start with `<extra_id_0>`, `<extra_id_1>`, ... up to `<extra_id_99>`. As a default, 100 sentinel tokens are available in `transformers.T5Tokenizer`.
For instance, the sentence "The cute dog walks in the park" with the masks put on "cute dog" and "the" should be processed as follows:
```python
input_ids = tokenizer.encode('The <extra_id_0> walks in <extra_id_1> park', return_tensors='pt')
labels = tokenizer.encode('<extra_id_0> cute dog <extra_id_1> the <extra_id_2> </s>', return_tensors='pt')
# the forward function automatically creates the correct decoder_input_ids
model(input_ids=input_ids, labels=labels)
```
1) Shouldn't the labels be unshifted, given that `decoder_input_ids = shift_right(labels)` @patrickvonplaten @patil-suraj ?
2) @craffel does this look correct to you?
 | 10-19-2020 14:19:19 | 10-19-2020 14:19:19 | Hey Sam, it looks like the labels in the example you quoted are not shifted - can you be more specific about why you think the labels are shifted?<|||||>yes, I think the `labels` should be unshifted here (i.e. `labels` should be same as `input_ids`) since `shift_right` takes care of preparing shifted `decoder_input_ids`.<|||||>@craffel I assumed the labels were shifted because:
+ Original: `The cute dog walks in the park`
+ Input_ids: `The <extra_id_0> walks in <extra_id_1> park`
+ Labels: `<extra_id_0> cute dog <extra_id_1> the <extra_id_2> </s>`
`input_ids` starts with the unmasked "The", whereas `labels` starts with a sentinel token. <|||||>I'm still not following - do you think the sentinel token `<extra_id_0>` is the same as the start-of-sequence token? They are different tokens.<|||||>@sshleifer - I don't really understand the problem here either. In the example, the `labels` are provided as:
```python
<extra_id_0> cute dog <extra_id_1> the <extra_id_2> </s>
```
which means that `decoder_input_ids` will be automatically created as:
```python
<s> <extra_id_0> cute dog <extra_id_1> the <extra_id_2>
```
=> This looks correct to me
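A quick way to sanity-check this (a minimal sketch; `t5-small` is just an illustrative checkpoint):
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

input_ids = tokenizer.encode("The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt")
labels = tokenizer.encode("<extra_id_0> cute dog <extra_id_1> the <extra_id_2>", return_tensors="pt")

# The unshifted labels are all that is needed: the forward pass shifts them
# right internally to build decoder_input_ids before computing the loss.
outputs = model(input_ids=input_ids, labels=labels, return_dict=True)
print(outputs.loss)
```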
<|||||>+1 to Patrick's take<|||||>Aah, yes, for t5 we just predict the masked out spans, unlike BART. So this looks correct. <|||||>In the docs, the `</s>` is omitted from `input_ids`, but will be silently added due to #5866. Is this also the correct behavior?<|||||>@ahoho => good point - I will update the docs to reflect this behavior<|||||>@patrickvonplaten, thanks! Does this mean the docs were incorrect before? I guess my question is, for the denoising training, is it correct to append the `</s>` token to the `input_ids` (not `labels`) or isn't it?<|||||>`</s>` should be appended IMO -> It's just that this is done automatically since #5866 as you mentioned above :-) |
transformers | 7,903 | closed | Modelling Encoder-Decoder | Error: `decoder_config` used before initialisation | Getting an error when passing `decoder_config` as a parameter while initializing an encoder-decoder model from pretrained checkpoints.
# What does this PR do?
fixes "UnboundLocalError: local variable 'decoder_config' referenced before assignment"
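For context, a call along these lines (the checkpoints are illustrative) is the kind of usage that triggered the error, since passing a config explicitly skipped the branch that assigned `decoder_config`:
```python
from transformers import BertConfig, EncoderDecoderModel

decoder_config = BertConfig(is_decoder=True, add_cross_attention=True)

# Supplying decoder_config up front previously raised the UnboundLocalError.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased",
    "bert-base-uncased",
    decoder_config=decoder_config,
)
```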
## Who can review?
@patrickvonplaten @sgugger | 10-19-2020 13:47:02 | 10-19-2020 13:47:02 | @patrickvonplaten @sgugger, please review<|||||>Great catch @ayubSubhaniya ! |