repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 7,402 | closed | Tokenizers as an optional dependency | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
Would it be possible to make `tokenizers` an optional dependency? I see this was already attempted here by @thomwolf: https://github.com/huggingface/transformers/pull/2342.
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
For various reasons we are unable to support Rust in our environment. Given that `tokenizers` is a hard dependency, this means we cannot use `transformers` at all. We would be fine with using the non-fast versions of tokenizers as a workaround.
| 09-25-2020 18:52:13 | 09-25-2020 18:52:13 | Sounds reasonable, but – if you're at liberty to share – and out of curiosity, would also like to know why you can't install Rust-built native deps<|||||>Thanks for the quick response. Rust is generally fine – what's causing issues is specifically the `pyo3` crate, which has a somewhat involved build script which doesn't get along with our build system.<|||||>Hi @jeanm, I work on the tokenizers library, can you explain how you are unable to use `tokenizers` ? You should never have to "see" Rust as we ship prebuilt libraries.
Maybe we are missing a platform we should add so that you don't have to build from source and so you don't have an issue with Rust or Pyo3?<|||||>Only using the Python tokenizers may prevent you from running some example scripts and using some additional functionality of the library in the future, though, since we plan to rely more and more on the fast alignment tools provided by the Rust tokenizers to make processing simpler and more accurate.
Do you think you could give us more details on the issue so that we can try to make the tokenizers library compatible with your system?
Happy to talk further by DM/mail if it's easier for you to give some details, you can ping me by email or twitter/linkedin for instance.<|||||>Hi @Narsil @thomwolf, thanks for the responses. As a matter of policy (+ technical reasons I unfortunately cannot get into) we have to build all python wheels from source. If it were possible to make `tokenizers` optional without complicating things on your end, we would be perfectly fine with dealing with reduced functionality, as that's still much better than not being able to run the package at all :) |
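A minimal sketch of the optional-dependency pattern requested in the issue above, assuming a hypothetical `HAS_FAST_TOKENIZERS` flag and helper; this is illustrative only and not the change that was eventually merged into `transformers`:

```python
# Guard the Rust-backed `tokenizers` import and fall back to the pure-Python
# ("slow") tokenizers when the package cannot be installed.
try:
    import tokenizers  # noqa: F401  (Rust-backed fast tokenizers)
    HAS_FAST_TOKENIZERS = True
except ImportError:
    HAS_FAST_TOKENIZERS = False


def load_tokenizer(model_name: str):
    """Prefer the fast implementation when available, otherwise use the slow one."""
    if HAS_FAST_TOKENIZERS:
        from transformers import BertTokenizerFast
        return BertTokenizerFast.from_pretrained(model_name)
    from transformers import BertTokenizer
    return BertTokenizer.from_pretrained(model_name)
```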
transformers | 7,401 | closed | Catch PyTorch warning when saving/loading scheduler | When saving or loading the scheduler, PyTorch **always** sends a warning to save/load the optimizer state as well (with a typo). We are saving/loading the optimizer state along with the scheduler, but there is no way to tell that to PyTorch and avoid the annoying warning (and its typo).
This PR fixes that by catching all warnings while loading/saving the scheduler and then re-issuing only the unexpected ones.
| 09-25-2020 18:51:04 | 09-25-2020 18:51:04 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7401?src=pr&el=h1) Report
> Merging [#7401](https://codecov.io/gh/huggingface/transformers/pull/7401?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e50a931c118b9f55f77a743bf703f436bf7a7c29?el=desc) will **increase** coverage by `2.28%`.
> The diff coverage is `13.33%`.
```diff
@@ Coverage Diff @@
## master #7401 +/- ##
==========================================
+ Coverage 77.06% 79.35% +2.28%
==========================================
Files 181 181
Lines 35781 35793 +12
==========================================
+ Hits 27575 28403 +828
+ Misses 8206 7390 -816
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7401?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/7401/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `55.01% <13.33%> (-0.70%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7401/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `17.46% <0.00%> (-81.13%)` | :arrow_down: |
| [src/transformers/modeling\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7401/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sYXlvdXRsbS5weQ==) | `25.06% <0.00%> (-69.40%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7401/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `58.52% <0.00%> (-34.74%)` | :arrow_down: |
| [src/transformers/configuration\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7401/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xheW91dGxtLnB5) | `80.00% <0.00%> (-20.00%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7401/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `81.51% <0.00%> (-15.11%)` | :arrow_down: |
| [src/transformers/modeling\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7401/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.31% <0.00%> (-10.12%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7401/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+0.25%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7401/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.22% <0.00%> (+0.31%)` | :arrow_up: |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7401/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.61% <0.00%> (+0.33%)` | :arrow_up: |
| ... and [8 more](https://codecov.io/gh/huggingface/transformers/pull/7401/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7401?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7401?src=pr&el=footer). Last update [e50a931...864dd99](https://codecov.io/gh/huggingface/transformers/pull/7401?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>worth pushing upstream to `pytorch`?<|||||>We can certainly file an issue about it, but my guess is that they thought a warning that is always emitted was fine (since there is no way to know if the user is saving/loading the optimizer with its scheduler).<|||||>Thanks for this!
Regarding whether to push upstream to pytorch:
maybe a solution is to add an optional flag to the PyTorch save command, like `optimizer_was_saved`. Make it default False. Only if you explicitly mark the param true in your call to save the optimizer will the warning be suppressed. Puts all the onus on the calling user. |
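A rough sketch of the warning-filtering idea described in the PR above: catch all warnings around the scheduler save and re-issue only the ones that were not expected. The helper name and the message substring matched here are assumptions for illustration, not the actual code in `Trainer`:

```python
import warnings

import torch


def save_scheduler_quietly(scheduler, path):
    # Record every warning raised while serializing the scheduler state.
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        torch.save(scheduler.state_dict(), path)
    # Re-issue anything that is not the known "please also save the optimizer" notice.
    for w in caught:
        if "optimizer" not in str(w.message):
            warnings.warn(w.message, w.category)
```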
transformers | 7,400 | closed | remove codecov PR comments |
#### Problem
+ @stas00 has tried very hard to get codecov working to no avail in https://github.com/huggingface/transformers/issues/6317
+ Files that are not affected by a PR show changes in coverage.
+ Code coverage information is rarely/never useful
+ lots of distracting spam emails
+ lots of vertical space that could otherwise be used for reading discussion history.
#### Proposed solution:
The idea of codecov -- to warn people before they introduce untested code -- is good, but the current implementation is worse than nothing, and after significant effort (mostly from @stas00) I think it is time to give up. If we see another tool we like, or manage to reconfigure this one to work well, that's great, but I think that should happen without broken codecov on master.
| 09-25-2020 18:46:00 | 09-25-2020 18:46:00 | The page is nice but the data seems wrong --
https://codecov.io/gh/huggingface/transformers/src/master/src/transformers/modeling_layoutlm.py
says that `__init__` is not covered, but I checked and it is, in `test_modeling_layoutlm.py`.
I can also just disable PR comments if that works better for you.
<|||||>> Is there another tool we could leverage to give us more reliable information on our code coverage?
The tool is not the problem; the problem lies in our test suite not being idempotent. codecov only compares the old coverage to the new coverage. It can't give correct coverage if the data it works with is invalid. Garbage in, garbage out.
If you want an approximate coverage, you can just add `cov: pytest --cov` in Makefile. It has a bunch of formats if you want the report in a particular format. It should be within 98% of correctness based on the current state of the test suite. <|||||>I understand the issue. Could we simply disable the PR comments for now @sshleifer, as that's the only pain point?<|||||>Done. Will merge once checks pass! |
transformers | 7,399 | closed | [Rag] fix rag retriever save_pretrained method | This PR fixes a typo in `RagRetriever`. `generator_tokenizer` was renamed to just `generator` in `RagTokenizer` | 09-25-2020 17:37:28 | 09-25-2020 17:37:28 | |
transformers | 7,398 | closed | Uploading/Sharing large models to HuggingFace | Hi,
I am trying to upload a `t5-3b`-based model to HuggingFace. The folder to upload is about 11 GB.
When I am uploading, it gives `'Connection aborted.', BrokenPipeError(32, 'Broken pipe')`.
Is this because the model is too large and there is a limitation? How could I deal with that?
Thank you for your help! | 09-25-2020 17:20:08 | 09-25-2020 17:20:08 | There is no limit to the file sizes on the model hub, however, for uploads that large and if your connection is even slightly unstable, it can indeed fail.
If you have another host (S3 bucket or whatever) you can upload the file to, I can handle `cp`ing it to your namespace on huggingface.co<|||||>Actually, it aborts at the very beginning of the uploading process for the large file every time. All my other smaller models could be uploaded smoothly, so I feel it might not be a network issue on my side.
My `pytorch_model.bin` is about 11 GB. I tried to use `truncate` to reduce the file size and noticed that the upload keeps aborting until I `truncate` the file down to 5 GB.<|||||>Btw, we have a public Google Cloud Storage host. Does it work for you if I am still not able to upload the model?<|||||>I can indeed reproduce. For now, can you upload to a GCS or S3 bucket, post the URL here, and I'll cp the file?
Will take a note to investigate/fix this in the future.<|||||>```
gs://ron-random/castorini/monot5-3b-med-msmarco/
gs://ron-random/castorini/monot5-3b-msmarco/
```
Could you help us cp these two models to our organization `castorini`
Thank you very much for your help!<|||||>Here you go: https://huggingface.co/castorini<|||||>Will close this for now but we are tracking the "large file upload" issue internally |
transformers | 7,397 | open | Add DistilBERTGeneration comparable to BertGeneration | # 🚀 Feature request
I noticed the new `BertGeneration` class, which uses BERT-style models as both encoder and decoder, as well as the more general `EncoderDecoder` class. This is all great stuff! It would also be great to be able to use distilled models. I believe this is possible for the encoder, but for the decoder a language head must be added.
Since DistilBert is implemented as its own model, and not as a BertModel, I don't think it's possible (or at least it's not easy) for the end user to do this. At least not loading pretrained models, since any pretrained model needs to be a type approved by `AutoModelForCausalLM`.
## Motivation
Same motivation as using distilled models in general. Same results at higher speed, this time applied to an `EncoderDecoder` model.
## Your contribution
Happy to be an alpha tester for this feature
| 09-25-2020 16:53:42 | 09-25-2020 16:53:42 | Hey @jsilter - yes we could definitely add a `DistilForCausalLM` model. I think instead of doing something similar to `BertGeneration` it would be easier to just add a `DistilBertForCausalLM` to `modeling_distilbert.py` similar to `BertLMHeadModel` or `RobertaForCausalLM`. This could actually be an interesting `Good Second Issue`. If someone is interested in opening a PR - I'd be more than happy to provide some guidance :-)<|||||>Hi @patrickvonplaten, I would love to work on this if it is still possible?<|||||>Hey @KMFODA - yes absolutely :-) Do you want to open a PR? I think we can very analogues to `BertLMHeadModel` add a `DistilBertForCausalLM` model in `modeling_distilbert.py`.<|||||>Great! Will open up a PR and start adding a `DistilBertForCausalLM` model into `modeling_distilbert.py` and get back to you if I have any issues :)<|||||>Hi @patrickvonplaten, I've built the `DistilBertForCausalLM` class into `modelling_distilbert.py` and can run it on the example used in both the `BertLMHeadModel` and the `RobertaForCausalLM` and the outputs look fine. Other than this example, are there any other tests I can run to check it's working as expected?
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
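For readers following the issue above, here is a very rough sketch of what an LM head on top of DistilBERT could look like, in the spirit of `RobertaForCausalLM`. The class name and details are hypothetical; this is not the implementation that was eventually merged, and a real decoder would also need causal attention masking (and usually weight tying with the embeddings), which this sketch omits:

```python
from torch import nn
from transformers import DistilBertConfig, DistilBertModel


class DistilBertWithLMHeadSketch(nn.Module):
    """Illustrative only: DistilBERT encoder plus a linear LM head."""

    def __init__(self, config: DistilBertConfig):
        super().__init__()
        self.distilbert = DistilBertModel(config)
        self.lm_head = nn.Linear(config.dim, config.vocab_size, bias=False)

    def forward(self, input_ids, attention_mask=None, labels=None):
        hidden_states = self.distilbert(input_ids, attention_mask=attention_mask)[0]
        logits = self.lm_head(hidden_states)
        loss = None
        if labels is not None:
            # Shift so that tokens < n predict token n, as causal LM heads usually do.
            shift_logits = logits[:, :-1, :].contiguous()
            shift_labels = labels[:, 1:].contiguous()
            loss_fct = nn.CrossEntropyLoss()
            loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))
        return (loss, logits) if loss is not None else (logits,)
```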
transformers | 7,396 | closed | (GPT2) Running out of GPU memory(24G) on WSL2 but not on native linux. | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: WSL2 Debian
- Python version: 3.7
- PyTorch version (GPU?): 1.3.1
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
TextGeneration: @TevenLeScao
-->
## Information
Model I am using (Bert, XLNet ...): GPT2
The problem arises when using:
the official example scripts: (give details below)
I'm running the run_language_modeling.py trying to finetune GPT-2
----
On WSL2 I run out of memory:
```RuntimeError: CUDA out of memory. Tried to allocate 384.00 MiB (GPU 0; 24.00 GiB total capacity; 22.01 GiB already allocated; 342.71 MiB free; 65.09 MiB cached)```
but if I boot a live Ubuntu and run the exact same script it works fine.
I'm using all default settings, just as in the example doc.
Not sure what it is due to or how to fix it.
| 09-25-2020 16:51:01 | 09-25-2020 16:51:01 | I can train the bert-base-multilingual-cased model and it's taking almost all my memory (21109MiB / 24576MiB) on WSL2, meanwhile it only takes about 8G on native Linux.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,395 | closed | [RAG] Remove dependency on `examples/seq2seq` from rag | We were importing some functionality from `examples/seq2seq`; however, it seems more HuggingFace-like and less error-prone to just copy-paste.
Tested by launching evaluation and training runs. | 09-25-2020 15:48:52 | 09-25-2020 15:48:52 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7395?src=pr&el=h1) Report
> Merging [#7395](https://codecov.io/gh/huggingface/transformers/pull/7395?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cf1c88e0921243e760d306e63a5938e1bac880f3?el=desc) will **increase** coverage by `0.96%`.
> The diff coverage is `n/a`.
```diff
@@ Coverage Diff @@
## master #7395 +/- ##
==========================================
+ Coverage 76.65% 77.62% +0.96%
==========================================
Files 181 181
Lines 35728 35728
==========================================
+ Hits 27387 27733 +346
+ Misses 8341 7995 -346
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7395?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7395/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.14% <0.00%> (-72.41%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7395/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `20.85% <0.00%> (-71.41%)` | :arrow_down: |
| [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7395/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |
| [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7395/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `74.14% <0.00%> (-18.70%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7395/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `81.13% <0.00%> (-15.42%)` | :arrow_down: |
| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7395/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `79.16% <0.00%> (-4.17%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7395/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.95% <0.00%> (-2.26%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7395/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.36% <0.00%> (-0.33%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7395/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.64% <0.00%> (-0.28%)` | :arrow_down: |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7395/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.12% <0.00%> (-0.17%)` | :arrow_down: |
| ... and [8 more](https://codecov.io/gh/huggingface/transformers/pull/7395/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7395?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7395?src=pr&el=footer). Last update [cf1c88e...4601efb](https://codecov.io/gh/huggingface/transformers/pull/7395?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 7,394 | closed | Speedup check_copies script | Checking the copies by using black was slowing down the script quite a lot, so removing this check makes the script way faster. Removing `blackify` use could make the script less robust though, so leaving the function for now even if we don't use it anymore. If a situation arises where we see the script fail, I can code a (more complex) way of using black that would be fast.
With the two lines removed, the script takes 0.129s on my setup (instead of 18s).
cc @stas00 for information. | 09-25-2020 15:46:24 | 09-25-2020 15:46:24 | Whoah! That's blazing fast! Thanks, @sgugger!
I think that's why `flake8` is slow - it runs black in some slow mode (`black` itself is very fast)
You can always add a flag that activates that disabled function, so it's there if needed.<|||||>This is no longer needed: `--line-length 119 --target-version py35` at https://github.com/huggingface/transformers/blob/90d1545f25b02a05b1581ae7a617db609fece0a0/utils/check_copies.py#L85
it now uses the config file - ensures we only have one place to do this setting.
Also, I haven't studied your code, but if it's applicable - skipping checks on files that haven't changed since the last check should give a huge speed increase, since typically only a few files are touched during the development of a single PR. If it is applicable and I can be of help, let me know.
I wish black/flake8/isort did that too. It makes no sense to re-run the check on files that haven't changed, which is like 99% of files most of the time.<|||||>No need for a check-since-file-modified approach, use this instead:
```
git diff --name-only $(git merge-base --fork-point master)
```
as the source of what files to check.
It will give you all the files that were modified since the branch was made - yay!
But you only want specific sub-folders, so:
```
git diff --name-only $(git merge-base --fork-point master) | egrep '^(examples|templates|tests|src|utils)' | tr '\n' ' '
```
Now you can unleash whatever checks and it'd be all blazing fast.
I will post shortly a PR to make flake8 and other checkers rocket-fast! https://github.com/huggingface/transformers/pull/7403
I will make a function in Makefile which you can use to feed to the check scripts just the modified files. **edit**: See https://github.com/huggingface/transformers/pull/7403 you now have a variable with all the modified files. |
transformers | 7,393 | closed | [trainer] Training from scratch | @patil-suraj is this possible in the new `Seq2SeqTrainer`?
Possible solution sketch:
Where we call:
```
AutoSeq2SeqModelWithLMHead.from_pretrained(model_name)
```
Switch to
```
if args.from_scratch: model = AutoSeq2SeqModelWithLMHead(config)
else: model = AutoSeq2SeqModelWithLMHead.from_pretrained(model_name)
```
What do you think?
| 09-25-2020 15:31:50 | 09-25-2020 15:31:50 | @sshleifer Definitely possible, except we'll need to use `AutoModelForSeq2SeqLM` 😉
We can also pass a different `config` if we don't want to use `pretrained config` using `config_name` argument.
Happy to open a PR if it's needed :). Let me know<|||||>@sgugger is this possible with the existing `Trainer`? It seems like Seq2Seq is the wrong level for this feature to be implemented.<|||||>Models are initialised in example scripts rather than `Trainer`. Currently we need to save a from-scratch model and then pass that. IMO it makes sense to add the `from_scratch` argument to `TrainingArguments`, but each example script will need to handle this itself.
<|||||>Oh that's a reasonable workaround.
```
def save_randomly_initialized_version(config_name, save_dir, **config_kwargs):
cfg = AutoConfig.from_pretrained(config_name, **config_kwargs)
model = AutoModelForSeq2SeqLM.from_config(cfg)
model.save_pretrained(save_dir)
AutoTokenizer.from_pretrained(config_name).save_pretrained(save_dir)
```
I'll put this in the make_student PR.
|
transformers | 7,392 | closed | Pull request template | The goal of this PR is to complete the existing pull request template with some additional information, some useful comments for the contributor, as well as the helpful tagging suggestions that already exist in the issue template.
co-authored-by: sgugger <[email protected]>
| 09-25-2020 15:27:57 | 09-25-2020 15:27:57 | |
transformers | 7,391 | closed | Remove unhelpful bart warning | This gets hit at the first step of generate. My bad.
The CI Failures are spurious. | 09-25-2020 14:22:02 | 09-25-2020 14:22:02 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7391?src=pr&el=h1) Report
> Merging [#7391](https://codecov.io/gh/huggingface/transformers/pull/7391?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cf1c88e0921243e760d306e63a5938e1bac880f3?el=desc) will **decrease** coverage by `0.25%`.
> The diff coverage is `n/a`.
```diff
@@ Coverage Diff @@
## master #7391 +/- ##
==========================================
- Coverage 76.65% 76.40% -0.26%
==========================================
Files 181 181
Lines 35728 35726 -2
==========================================
- Hits 27387 27296 -91
- Misses 8341 8430 +89
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7391?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7391/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.10% <ø> (-0.19%)` | :arrow_down: |
| [src/transformers/tokenization\_fsmt.py](https://codecov.io/gh/huggingface/transformers/pull/7391/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZnNtdC5weQ==) | `20.34% <0.00%> (-74.90%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7391/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `18.73% <0.00%> (-74.53%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7391/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.14% <0.00%> (-72.41%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/7391/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `21.53% <0.00%> (-68.15%)` | :arrow_down: |
| [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7391/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |
| [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7391/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `74.14% <0.00%> (-18.70%)` | :arrow_down: |
| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7391/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `79.16% <0.00%> (-4.17%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7391/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.64% <0.00%> (-0.28%)` | :arrow_down: |
| ... and [9 more](https://codecov.io/gh/huggingface/transformers/pull/7391/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7391?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7391?src=pr&el=footer). Last update [cf1c88e...6779ded](https://codecov.io/gh/huggingface/transformers/pull/7391?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 7,390 | closed | Fix BartModel output documentation | As mentioned in #7380, the output documented for `BartModel` was wrong. This PR should fix this.
| 09-25-2020 13:50:12 | 09-25-2020 13:50:12 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7390?src=pr&el=h1) Report
> Merging [#7390](https://codecov.io/gh/huggingface/transformers/pull/7390?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/571c7a11c17bd00ba3e79f4d853cc51428a14e45?el=desc) will **decrease** coverage by `0.76%`.
> The diff coverage is `n/a`.
```diff
@@ Coverage Diff @@
## master #7390 +/- ##
==========================================
- Coverage 77.64% 76.87% -0.77%
==========================================
Files 181 181
Lines 35722 35722
==========================================
- Hits 27736 27461 -275
- Misses 7986 8261 +275
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7390?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7390/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.12% <ø> (ø)` | |
| [src/transformers/configuration\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7390/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |
| [src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7390/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.71% <0.00%> (-77.89%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7390/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `18.73% <0.00%> (-74.53%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/7390/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `21.53% <0.00%> (-68.15%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7390/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.96% <0.00%> (-1.26%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7390/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `83.37% <0.00%> (+0.24%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7390/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.01% <0.00%> (+0.64%)` | :arrow_up: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7390/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `96.55% <0.00%> (+15.41%)` | :arrow_up: |
| [src/transformers/activations\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7390/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9uc190Zi5weQ==) | `75.00% <0.00%> (+20.83%)` | :arrow_up: |
| ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/7390/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7390?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7390?src=pr&el=footer). Last update [571c7a1...5603f56](https://codecov.io/gh/huggingface/transformers/pull/7390?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
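Related to the doc fix above, a quick way to see the actual field names on the `Seq2SeqModelOutput` returned by `BartModel` at the time of this issue: encoder and decoder states are exposed separately rather than under a single `hidden_states` attribute.

```python
from transformers import BartModel, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartModel.from_pretrained(
    "facebook/bart-base", return_dict=True, output_hidden_states=True
)
outputs = model(**tokenizer("Hello, my dog is cute", return_tensors="pt"))
print(outputs.encoder_hidden_states is not None)  # encoder activations
print(outputs.decoder_hidden_states is not None)  # decoder activations
```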
transformers | 7,389 | closed | Custom preprocessing of text | I feel like this is a silly question. But it just occurred to me that, before working with BERT, with fastText for example, I had to do preprocessing like word stemming/lemmatization and stopword removal. What is the advice for using BERT models?
Does it cause problems if I do stemming or lemmatization before feeding text to the BERT tokenizer?
questions over questions... | 09-25-2020 13:14:41 | 09-25-2020 13:14:41 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>It shouldn't be necessary, as the tokenizer performs subword (byte pair / WordPiece) encoding when a word isn't in its vocabulary. For example "Gallbladder palpaple": "palpaple" isn't in my vocabulary, so it breaks the word into partial words that are in the vocabulary: ['p', '##al', '##pa', '##ple']. This would match variations that would otherwise need to be stemmed or converted to their lemma.
This will however be an issue if you are using the model to perform cosine similarity. The results are terrible when you have many words out of vocabulary. |
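To see the subword behaviour described in the comment above for yourself, here is a small illustration; the exact pieces depend on the model's vocabulary, so the split shown in the comment is indicative only.

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
print(tokenizer.tokenize("Gallbladder palpable"))
# Out-of-vocabulary words are split into WordPiece units, e.g. pieces prefixed with '##'.
# No stemming or lemmatization is needed beforehand; the tokenizer handles unknown forms itself.
```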
transformers | 7,388 | closed | Update LayoutLM doc | Minor update to model_doc/layoutlm.rs
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
@sgugger @julien-c | 09-25-2020 13:14:21 | 09-25-2020 13:14:21 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7388?src=pr&el=h1) Report
> Merging [#7388](https://codecov.io/gh/huggingface/transformers/pull/7388?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9e68d075a4100906509170498480823e7e61874a?el=desc) will **decrease** coverage by `2.58%`.
> The diff coverage is `n/a`.
```diff
@@ Coverage Diff @@
## master #7388 +/- ##
==========================================
- Coverage 79.33% 76.75% -2.59%
==========================================
Files 181 181
Lines 35759 35759
==========================================
- Hits 28371 27447 -924
- Misses 7388 8312 +924
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7388?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7388/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |
| [src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7388/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.71% <0.00%> (-77.89%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/7388/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `21.53% <0.00%> (-68.15%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7388/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `18.73% <0.00%> (-39.79%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7388/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `83.13% <0.00%> (-0.25%)` | :arrow_down: |
| [src/transformers/tokenization\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/7388/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `57.65% <0.00%> (+4.50%)` | :arrow_up: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7388/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.21% <0.00%> (+6.76%)` | :arrow_up: |
| [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7388/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `95.23% <0.00%> (+10.71%)` | :arrow_up: |
| [src/transformers/modeling\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7388/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `91.31% <0.00%> (+12.50%)` | :arrow_up: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7388/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `96.55% <0.00%> (+15.41%)` | :arrow_up: |
| ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/7388/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7388?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7388?src=pr&el=footer). Last update [9e68d07...9d4e5fc](https://codecov.io/gh/huggingface/transformers/pull/7388?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 7,387 | closed | Fix tokenization in SQuAD for RoBERTa, Longformer, BART | Originating from this discussion: https://github.com/huggingface/transformers/pull/4615#issuecomment-697725357
**Issue:**
Tokenization of the context in `squad_convert_example_to_features()` for RoBERTa-like tokenizers is not preserving whitespace, because we call the tokenizer on previously split, individual words.
**Example:**
Q = Who was Jim Henson?
Context = Jim Henson was a nice puppet
Expected Tokens: ['< s>', 'who', 'Ġwas', 'Ġj', 'im', 'Ġhen', 'son', '?', '</s>', '</s>', 'Ġj', 'im', 'Ġhen', 'son', 'Ġwas', 'Ġa', 'Ġnice', 'Ġpuppet', '</s>']
Actual Tokens: ['< s>', 'who', 'Ġwas', 'Ġj', 'im', 'Ġhen', 'son', '?', '</s>', '</s>', 'j', 'im', 'hen', 'son', 'was', 'a', 'nice', 'p', 'uppet', '</s>']
Decoded string: Who was Jim Henson?JimHensonwasanicepuppet
**Why a problem?**
- Inconsistency: The question gets tokenized incl. whitespace while the context doesn't. If we have the same word in question and context, we will encode them to different ids.
- Model performance: Eval metrics of `deepset/roberta-base-squad2` on SQuAD 2 dev are significantly lower than originally (F1: 69.6 vs. 81.7). After this fix, it's back to normal (F1: 81.91).
Evaluated via:
```
run_squad.py \
--model_type roberta \
--model_name_or_path deepset/roberta-base-squad2 \
--output_dir results/deepset-roberta-base-squad2 \
--data_dir . \
--predict_file dev-v2.0.json \
--do_eval \
--version_2_with_negative \
--per_gpu_eval_batch_size 16 \
--max_seq_length 384 \
--doc_stride 128 \
--seed 42 \
--threads 12 \
```
**Fix:**
Enable `add_prefix_space` for RoBERTa-like tokenizers
**Limitations:**
- not the most elegant solution
- not sure if there are more tokenizers with similar behavior that we should add
**Related to:**
https://github.com/huggingface/transformers/issues/7249
@patrickvonplaten @patil-suraj | 09-25-2020 10:44:11 | 09-25-2020 10:44:11 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7387?src=pr&el=h1) Report
> Merging [#7387](https://codecov.io/gh/huggingface/transformers/pull/7387?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2dd652d757132d97e43173fb048849685ecccb68?el=desc) will **increase** coverage by `2.39%`.
> The diff coverage is `50.00%`.
```diff
@@ Coverage Diff @@
## master #7387 +/- ##
==========================================
+ Coverage 76.92% 79.32% +2.39%
==========================================
Files 181 181
Lines 35721 35726 +5
==========================================
+ Hits 27480 28339 +859
+ Misses 8241 7387 -854
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7387?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/7387/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.61% <50.00%> (+0.47%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7387/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `58.52% <0.00%> (-34.74%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7387/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.38% <0.00%> (-29.59%)` | :arrow_down: |
| [src/transformers/modeling\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7387/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `78.81% <0.00%> (-12.50%)` | :arrow_down: |
| [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7387/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `83.11% <0.00%> (-10.39%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7387/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.33% <0.00%> (-7.31%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7387/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `79.94% <0.00%> (-6.27%)` | :arrow_down: |
| [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7387/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/7387/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `90.47% <0.00%> (-1.37%)` | :arrow_down: |
| [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7387/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.07% <0.00%> (-0.45%)` | :arrow_down: |
| ... and [6 more](https://codecov.io/gh/huggingface/transformers/pull/7387/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7387?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7387?src=pr&el=footer). Last update [2dd652d...a3b11d3](https://codecov.io/gh/huggingface/transformers/pull/7387?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Also pinging @mfuntowicz here<|||||>@mfuntowicz @sgugger Is there anything else you want to tackle before merging? |
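A small illustration of the whitespace marker discussed in the PR above, using the `add_prefix_space` flag that the fix enables with the slow (Python) `RobertaTokenizer`. The token splits hinted at in the comments are indicative; the exact pieces depend on the vocabulary.

```python
from transformers import RobertaTokenizer

tok = RobertaTokenizer.from_pretrained("roberta-base")
print(tok.tokenize("puppet", add_prefix_space=False))  # pieces without a leading 'G-dot' marker
print(tok.tokenize("puppet", add_prefix_space=True))   # first piece carries 'Ġ', marking a preceding space
```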
transformers | 7,386 | closed | [Rag] Fix wrong usage of `num_beams` and `bos_token_id` in Rag Sequence generation | Small changes => big impact. Hopefully e2e results are better now @ola13 | 09-25-2020 09:25:35 | 09-25-2020 09:25:35 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7386?src=pr&el=h1) Report
> Merging [#7386](https://codecov.io/gh/huggingface/transformers/pull/7386?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8d3bb781ee2643ad1076f4cbcc6f417245671e94?el=desc) will **increase** coverage by `2.51%`.
> The diff coverage is `0.00%`.
```diff
@@ Coverage Diff @@
## master #7386 +/- ##
==========================================
+ Coverage 76.61% 79.12% +2.51%
==========================================
Files 181 181
Lines 35759 35760 +1
==========================================
+ Hits 27395 28295 +900
+ Misses 8364 7465 -899
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7386?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7386/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yYWcucHk=) | `25.32% <0.00%> (-51.66%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7386/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.14% <0.00%> (-72.41%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7386/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `58.52% <0.00%> (-34.74%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7386/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `65.26% <0.00%> (-33.64%)` | :arrow_down: |
| [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7386/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |
| [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7386/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `74.14% <0.00%> (-18.70%)` | :arrow_down: |
| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7386/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `79.16% <0.00%> (-4.17%)` | :arrow_down: |
| [src/transformers/configuration\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7386/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JhZy5weQ==) | `97.77% <0.00%> (-2.23%)` | :arrow_down: |
| [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7386/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `82.62% <0.00%> (-1.41%)` | :arrow_down: |
| ... and [15 more](https://codecov.io/gh/huggingface/transformers/pull/7386/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7386?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7386?src=pr&el=footer). Last update [8d3bb78...10026f3](https://codecov.io/gh/huggingface/transformers/pull/7386?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 7,385 | closed | [s2s, examples] minor doc changes | Updates `The Big Table of Tasks`, and note about `fp16` with torch 1.6 for `Seq2SeqTrainer`
| 09-25-2020 09:20:51 | 09-25-2020 09:20:51 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7385?src=pr&el=h1) Report
> Merging [#7385](https://codecov.io/gh/huggingface/transformers/pull/7385?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7cdd9da5bf28c53c214e22d082dd62032f9b00fc?el=desc) will **decrease** coverage by `0.61%`.
> The diff coverage is `n/a`.
```diff
@@ Coverage Diff @@
## master #7385 +/- ##
==========================================
- Coverage 77.57% 76.96% -0.62%
==========================================
Files 181 181
Lines 35721 35721
==========================================
- Hits 27712 27492 -220
- Misses 8009 8229 +220
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7385?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7385/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.14% <0.00%> (-72.41%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7385/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `58.52% <0.00%> (-34.74%)` | :arrow_down: |
| [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7385/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |
| [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7385/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `74.14% <0.00%> (-18.70%)` | :arrow_down: |
| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7385/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `79.16% <0.00%> (-4.17%)` | :arrow_down: |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7385/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.12% <0.00%> (-0.17%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7385/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `83.37% <0.00%> (+0.24%)` | :arrow_up: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7385/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `87.64% <0.00%> (+0.37%)` | :arrow_up: |
| [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7385/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.51% <0.00%> (+0.44%)` | :arrow_up: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7385/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+0.50%)` | :arrow_up: |
| ... and [12 more](https://codecov.io/gh/huggingface/transformers/pull/7385/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7385?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7385?src=pr&el=footer). Last update [7cdd9da...7b4f617](https://codecov.io/gh/huggingface/transformers/pull/7385?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Yay! Thanks, cc @sshleifer |
transformers | 7,384 | closed | Flos fix | <!-- This line specifies which issue to close after the pull request is merged. -->
Fixes #7146
This basically unwraps the model that is used during training and can be either plain `Module` or `DataParallel`/`DistributedDataParallel`. | 09-25-2020 08:34:19 | 09-25-2020 08:34:19 | Please merge at will, as this fix is blocking us (https://github.com/huggingface/transformers/issues/7146#issuecomment-698852274). |
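A minimal sketch of the unwrapping idea described in the PR above; the helper name is made up and the real fix lives inside `Trainer`:

```python
import torch.nn as nn


def unwrap_model(model: nn.Module) -> nn.Module:
    # DataParallel / DistributedDataParallel keep the real model under `.module`.
    return model.module if hasattr(model, "module") else model
```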
transformers | 7,383 | closed | Missing keys when loading weights in TF are not useful | ## This concerns all TF models
If one loads the weights of a TensorFlow model, these lines are run:
https://github.com/huggingface/transformers/blob/9e68d075a4100906509170498480823e7e61874a/src/transformers/modeling_tf_utils.py#L627
to check which layers are in the model weights file and which layer names of the model are actually loaded.
The problem is that these layer names consist only of the "highest"-level layer names of a model. *E.g.* for *TFBertForMaskedLM*, these layer names are just:
"bert" and "mlm",
instead of one name per weight, as there should be.
See:
https://github.com/huggingface/transformers/blob/3c6bf8998fb6ca5aca063fed2543b7176883b004/src/transformers/modeling_tf_bert.py#L865
So the missing keys argument for TensorFlow will only capture the highest-level missing weights.
| 09-25-2020 08:29:34 | 09-25-2020 08:29:34 | Fixed in #7422 <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
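To see the difference described in the issue above, one can compare the top-level Keras layer names with the per-weight names of a TF model. This requires TensorFlow installed, and the outputs shown in the comments are indicative only.

```python
from transformers import TFBertForMaskedLM

model = TFBertForMaskedLM.from_pretrained("bert-base-uncased")
print([layer.name for layer in model.layers])  # only top-level layers, e.g. something like ['bert', 'mlm___cls']
print([w.name for w in model.weights][:3])     # full per-weight names, which a useful missing-keys check needs
```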
transformers | 7,382 | closed | [RAG] Add missing doc and attention_mask to rag | Adds docs to the newly added `attention_mask` (hope Sylvain is not gonna be too mad that I forgot!) and corrects evaluation for RAG fine-tuning. | 09-25-2020 07:30:13 | 09-25-2020 07:30:13 | |
transformers | 7,381 | closed | modeling_bart: 3 small cleanups that dont change outputs | + Fixes #6259
+ allows a better diff if the mbart integration test breaks
+ raises a Warning in the classic "use cache when call forward" mixup (test_benchmark triggers this warning).
| 09-25-2020 01:24:25 | 09-25-2020 01:24:25 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7381?src=pr&el=h1) Report
> Merging [#7381](https://codecov.io/gh/huggingface/transformers/pull/7381?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0ccb6f5c6da9e703766e8053581fddfc6dcc71a9?el=desc) will **decrease** coverage by `1.41%`.
> The diff coverage is `100.00%`.
```diff
@@ Coverage Diff @@
## master #7381 +/- ##
==========================================
- Coverage 78.20% 76.78% -1.42%
==========================================
Files 181 181
Lines 35751 35753 +2
==========================================
- Hits 27959 27454 -505
- Misses 7792 8299 +507
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7381?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7381/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.12% <100.00%> (+0.18%)` | :arrow_up: |
| [src/transformers/configuration\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7381/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |
| [src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7381/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.71% <0.00%> (-77.89%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/7381/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `21.53% <0.00%> (-68.15%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7381/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `18.73% <0.00%> (-39.79%)` | :arrow_down: |
| [src/transformers/tokenization\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/7381/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `57.14% <0.00%> (-39.69%)` | :arrow_down: |
| [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7381/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `74.14% <0.00%> (-18.70%)` | :arrow_down: |
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7381/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.36% <0.00%> (+0.16%)` | :arrow_up: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7381/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `83.37% <0.00%> (+0.24%)` | :arrow_up: |
| ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/7381/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7381?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7381?src=pr&el=footer). Last update [0ccb6f5...3c0b8e3](https://codecov.io/gh/huggingface/transformers/pull/7381?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 7,380 | closed | Incorrect output fields names in docs | ## Environment info
- `transformers` version: 3.2.0
- Platform: Linux-5.4.0-7642-generic-x86_64-with-glibc2.29
- Python version: 3.8.2
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@sgugger
## Information
The model I am using (Bert, XLNet ...): Bart
The problem arises when using the official example scripts.
```Python
from transformers import BartModel, BartTokenizer
tokenizer = BartTokenizer.from_pretrained('facebook/bart-base')
model = BartModel.from_pretrained('facebook/bart-base', return_dict=True,
output_hidden_states=True, output_attentions=True)
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
print(outputs.hidden_states)
```
This script results in the error `'Seq2SeqModelOutput' object has no attribute 'hidden_states'`.
## To reproduce
Steps to reproduce the behavior:
1. Run the script above.
## Expected behavior
My expectation was to get a set of hidden states for the model. But in fact, the model returns two sets of hidden states - one for the decoder and another one for the encoder. It can be observed by looking at the keys of the `outputs`:
```Python
>>> print(outputs.keys())
odict_keys(['last_hidden_state', 'decoder_hidden_states', 'encoder_last_hidden_state', 'encoder_hidden_states'])
```
The same is valid for attentions if I specify `output_attentions=True`:
```Python
>>> print(outputs.keys())
odict_keys(['last_hidden_state', 'decoder_hidden_states', 'decoder_attentions', 'encoder_last_hidden_state', 'encoder_hidden_states', 'encoder_attentions'])
```
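In other words, with this return type the states have to be read from the decoder/encoder-specific attributes, e.g.:
```python
decoder_hidden_states = outputs.decoder_hidden_states  # tuple of decoder-side hidden states
encoder_hidden_states = outputs.encoder_hidden_states  # tuple of encoder-side hidden states
```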
My conclusion is that the documentation gives an incorrect description of the output fields. | 09-24-2020 22:46:51 | 09-24-2020 22:46:51 | Actually, the root of the problem might be related to the fact that the documentation states that the forward pass returns `BaseModelOutputWithPast` but in fact it returns `Seq2SeqModelOutput` ([source code](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bart.py#L947-L955)).<|||||>Thanks for flagging! This should be fixed once the PR above is merged.<|||||>Solved by #7390 |
transformers | 7,379 | closed | Movement Pruning for GPT2 | # ❓ Questions & Help
Is it possible to make movement pruning work for the GPT-2 model?
In principle it should work as-is; did anyone try it, and can we have it in the examples?
Thanks
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on the forum/Stack Overflow**: | 09-24-2020 22:16:38 | 09-24-2020 22:16:38 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,378 | closed | how to customize the position encoding | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
I want to add a non-sequential position encoding to pre-train a model. Could anyone please point me to where I should look?
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on the forum/Stack Overflow**: | 09-24-2020 21:42:16 | 09-24-2020 21:42:16 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,377 | closed | Document RAG again | Do not merge before Monday
| 09-24-2020 21:11:31 | 09-24-2020 21:11:31 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7377?src=pr&el=h1) Report
> Merging [#7377](https://codecov.io/gh/huggingface/transformers/pull/7377?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/eadd870b2f503047dd81b8dcd9d115dc1b4a9196?el=desc) will **increase** coverage by `0.75%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/7377?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #7377 +/- ##
==========================================
+ Coverage 77.99% 78.75% +0.75%
==========================================
Files 181 181
Lines 35759 35759
==========================================
+ Hits 27891 28161 +270
+ Misses 7868 7598 -270
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7377?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/7377/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `22.08% <0.00%> (-75.26%)` | :arrow_down: |
| [src/transformers/modeling\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7377/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sYXlvdXRsbS5weQ==) | `25.06% <0.00%> (-69.40%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/7377/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `21.53% <0.00%> (-68.15%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7377/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `18.73% <0.00%> (-39.79%)` | :arrow_down: |
| [src/transformers/configuration\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7377/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xheW91dGxtLnB5) | `80.00% <0.00%> (-20.00%)` | :arrow_down: |
| [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7377/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7377/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+0.25%)` | :arrow_up: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7377/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.91% <0.00%> (+0.27%)` | :arrow_up: |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7377/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.27% <0.00%> (+0.33%)` | :arrow_up: |
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7377/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.92% <0.00%> (+0.55%)` | :arrow_up: |
| ... and [14 more](https://codecov.io/gh/huggingface/transformers/pull/7377/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7377?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7377?src=pr&el=footer). Last update [a8e7982...52fbf34](https://codecov.io/gh/huggingface/transformers/pull/7377?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 7,376 | closed | Remove mentions of RAG from the docs | You haven't seen anything. Those are not the droids you are looking for. | 09-24-2020 20:51:24 | 09-24-2020 20:51:24 | |
transformers | 7,375 | closed | CUDA out of memory error for Bert Model | Hi there,
I am building a BERT binary classifier on SageMaker using PyTorch. Previously when I ran the model, I set the batch size to 16 and the model was able to run successfully. However, after I stopped SageMaker yesterday and restarted it this morning, I can't run the model with batch size 16 any more. I am able to run the model with batch size 8. However, the model is not producing the same result (of course). I didn't change anything else in between. All other settings are the same. (Except that I changed the SageMaker volume from 30GB to 200GB.)
Does anyone know what may cause this problem? I really want to reproduce the result with batch size 16.
Any answers will help and thank you in advance! | 09-24-2020 20:43:21 | 09-24-2020 20:43:21 | I agree, I had a stable training pipeline for training on TPU and suddenly it broke because it ran out of memory when using the newer versions of Huggingface. I am using the Trainer class. For me the crash happens either during the first evaluation step or right after it.<|||||>Also because the Trainer is such a critical code that will be used in production systems in companies and various research labs, it is very important that the Trainer code is stable and is well tested for correctness, performance (iterations/sec) and memory use for training and evaluation. The tests should also cover the various devices it supports, i.e. CPU, GPU and TPU.
It would be great if these tests could run every time a change is made in the trainer code, so that we have confidence that the Trainer is stable. Over the last 3 months I have seen a lot of bugs popping into huggingface master and trying to debug Trainer bugs is very unproductive for Huggingface's users.<|||||>The commit id where I do not see an increase in device memory for Trainer 8fcbe486e1592321e868f872545c8fd9d359a515 . I have reverted back to this commit id and my training pipeline works again.<|||||>I think whats happening is something changed in the Trainer code and now it suddenly takes a bit more memory. Because most of the people select the training batch size = 1 less than when they start seeing memory failures, the setup becomes extra sensitive to any increase in memory used by the Trainer.
With the current master, I tried training with a lower batch size and it trained properly. However, I lose convergence speed because I process fewer examples, and the iterations/second remain almost the same as with the larger batch size.
I would rather revert to the older commit than train with smaller batch sizes.<|||||>Hi @Backpackerice
Would you mind sharing your code? It's hard to investigate a leak with just a general statement.<|||||>> Hi @Backpackerice
> Would you mind sharing your code? It's hard to investigate a leak with just a general statement.
Please find below my code:
To explain a little bit: this is trying to run a dual BERT - two different inputs (combined with an attention or concat method). But when I ran into this CUDA issue, I was only using the review text input (not the agent text).
```python
class ReviewClassification(BertPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.num_labels = 2
        self.bert = BertModel(config)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        embedding_size = config.hidden_size
        self.classifier = nn.Linear(embedding_size, len(LABEL_NAME))
        self.init_weights()

    def forward(
        self,
        review_input_ids=None,
        review_attention_mask=None,
        review_token_type_ids=None,
        agent_input_ids=None,
        agent_attention_mask=None,
        agent_token_type_ids=None,
        labels=None,
    ):
        review_outputs = self.bert(
            review_input_ids,
            attention_mask=review_attention_mask,
            token_type_ids=review_token_type_ids,
            position_ids=None,
            head_mask=None,
            inputs_embeds=None,
        )
        feature = review_outputs[1]
        logits = self.classifier(feature)
        outputs = (logits,)  # + outputs[2:]  # add hidden states and attention if they are here

        if labels is not None:
            pos_weight = torch.tensor(8.85)  # N_negative/N_positive from entire training set
            loss_fct = nn.BCEWithLogitsLoss(pos_weight=pos_weight).cuda()
            loss = loss_fct(logits, labels)
            outputs = (loss,) + outputs

        return outputs  # (loss, logits, hidden_states, attentions)
```
<|||||>> I think whats happening is something changed in the Trainer code and now it suddenly takes a bit more memory. Because most of the people select the training batch size = 1 less than when they start seeing memory failures, the setup becomes extra sensitive to any increase in memory used by the Trainer.
> With the current master, I tried training with a lower batch size and it trained properly. Although, I lose convergence speed because I process less examples and the iterations/seconds remain almost the same as the larger batch size.
> I would rather revert to the older commit than train with smaller batch sizes.
Currently I have temporarily solved the issue by creating a new SageMaker instance. It seems that on the old SageMaker some phantom Python processes were hogging the GPU cards. It was also acting weird: even with the same settings, it would produce significantly different results. <|||||>I encountered similar problems. What I did was to uninstall the latest version of transformers (v3.4.0) and install v3.1.0 instead. My code works fine with the old version.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,374 | closed | Fix FP16 and attention masks in FunnelTransformer | This `.float()` should have been removed, it was necessary before I converted the attention masks to floating types at the beginning of the forward of the Encoder, but it's now useless (and bad for mixed precision as shown in #7371).
Also, the attention masks were used the wrong way around (0 for non-masked tokens, 1 for masked), which was incompatible with the way transformers tokenizers work.
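For reference, the tokenizer convention this now matches (a minimal sketch):
```python
enc = tokenizer(["short", "a longer example sentence"], padding=True, return_tensors="pt")
print(enc["attention_mask"])  # 1 = real token (kept), 0 = padding (masked out)
```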
Fixes #7371 | 09-24-2020 19:37:35 | 09-24-2020 19:37:35 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7374?src=pr&el=h1) Report
> Merging [#7374](https://codecov.io/gh/huggingface/transformers/pull/7374?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0ccb6f5c6da9e703766e8053581fddfc6dcc71a9?el=desc) will **decrease** coverage by `1.43%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/7374?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #7374 +/- ##
==========================================
- Coverage 78.20% 76.76% -1.44%
==========================================
Files 181 181
Lines 35751 35750 -1
==========================================
- Hits 27959 27444 -515
- Misses 7792 8306 +514
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7374?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7374/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mdW5uZWwucHk=) | `86.72% <100.00%> (-0.02%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7374/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mdW5uZWwucHk=) | `94.04% <100.00%> (ø)` | |
| [src/transformers/configuration\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7374/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |
| [src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7374/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.71% <0.00%> (-77.89%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/7374/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `21.53% <0.00%> (-68.15%)` | :arrow_down: |
| [src/transformers/tokenization\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/7374/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcGVnYXN1cy5weQ==) | `46.03% <0.00%> (-49.21%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7374/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `18.73% <0.00%> (-39.79%)` | :arrow_down: |
| [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7374/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `74.14% <0.00%> (-18.70%)` | :arrow_down: |
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7374/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.36% <0.00%> (+0.16%)` | :arrow_up: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7374/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `83.37% <0.00%> (+0.24%)` | :arrow_up: |
| ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/7374/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7374?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7374?src=pr&el=footer). Last update [0ccb6f5...705ee7a](https://codecov.io/gh/huggingface/transformers/pull/7374?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Yes let's keep it open until the problem is fully solved.<|||||>@LysandreJik this is ready for review and to be merged. Confirmed I can overfit the training set on a sequence classification task and train with the `fp16` flag so this should solve all problems with FunnelTransformer.<|||||>There's a failing Funnel integration that should be taken care of before merging. |
transformers | 7,373 | closed | [RAG] Add `attention_mask` to RAG generate | Previously the attention mask was not passed to the generate function, so the encoder_outputs were possibly incorrect whenever a batch contained input_ids of different lengths.
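A rough sketch of the intended call after this change (illustrative only, not the exact test code):
```python
inputs = tokenizer(questions, padding=True, return_tensors="pt")
generated = model.generate(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"])
```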
@ola13 Also fixed in eval script | 09-24-2020 17:59:19 | 09-24-2020 17:59:19 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7373?src=pr&el=h1) Report
> Merging [#7373](https://codecov.io/gh/huggingface/transformers/pull/7373?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8d3bb781ee2643ad1076f4cbcc6f417245671e94?el=desc) will **increase** coverage by `1.44%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/7373?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #7373 +/- ##
==========================================
+ Coverage 76.61% 78.05% +1.44%
==========================================
Files 181 181
Lines 35759 35759
==========================================
+ Hits 27395 27911 +516
+ Misses 8364 7848 -516
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7373?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7373/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yYWcucHk=) | `76.98% <100.00%> (ø)` | |
| [src/transformers/modeling\_tf\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7373/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `17.16% <0.00%> (-81.42%)` | :arrow_down: |
| [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7373/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `18.69% <0.00%> (-74.15%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7373/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.23% <0.00%> (-72.70%)` | :arrow_down: |
| [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7373/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |
| [src/transformers/configuration\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7373/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `75.00% <0.00%> (-25.00%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7373/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `81.13% <0.00%> (-15.42%)` | :arrow_down: |
| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7373/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `79.16% <0.00%> (-4.17%)` | :arrow_down: |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7373/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.93% <0.00%> (-0.34%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7373/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.64% <0.00%> (-0.28%)` | :arrow_down: |
| ... and [12 more](https://codecov.io/gh/huggingface/transformers/pull/7373/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7373?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7373?src=pr&el=footer). Last update [8d3bb78...e4e1ea8](https://codecov.io/gh/huggingface/transformers/pull/7373?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 7,372 | closed | [RAG] Fix retrieval offset in RAG's HfIndex and better integration tests | Address @yjernite 's comment in https://github.com/huggingface/transformers/pull/7129#discussion_r488904472
Indeed the retriever was returning the indexes offset by one.
Cc @ola13 | 09-24-2020 16:19:27 | 09-24-2020 16:19:27 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7372?src=pr&el=h1) Report
> Merging [#7372](https://codecov.io/gh/huggingface/transformers/pull/7372?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/571c7a11c17bd00ba3e79f4d853cc51428a14e45?el=desc) will **decrease** coverage by `0.00%`.
> The diff coverage is `88.88%`.
[](https://codecov.io/gh/huggingface/transformers/pull/7372?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #7372 +/- ##
==========================================
- Coverage 77.64% 77.63% -0.01%
==========================================
Files 181 181
Lines 35722 35728 +6
==========================================
+ Hits 27736 27738 +2
- Misses 7986 7990 +4
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7372?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/retrieval\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7372/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9yZXRyaWV2YWxfcmFnLnB5) | `91.01% <88.88%> (-0.27%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7372/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.46% <0.00%> (-0.76%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7372?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7372?src=pr&el=footer). Last update [571c7a1...9ecf660](https://codecov.io/gh/huggingface/transformers/pull/7372?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>@lhoestq - can you check how the `RUN_SLOW` tests would change in this case? <|||||>> @lhoestq - can you check how the `RUN_SLOW` tests would change in this case?
They change indeed. I updated the expected values.<|||||>@yjernite - could you take a final look and approve if everything seems fine to you? <|||||>Okey great - this should be the last big fix for RAG. I'll rebase this PR and merge it after |
transformers | 7,371 | closed | FunnelTransformerForSequenceClassification crashes when fine tuning with mixed precision flag | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.2.0
- Platform: Linux-4.15.0-45-generic-x86_64-with-debian-buster-sid
- Python version: Python 3.7.7
- PyTorch version (GPU?): 1.5.1 (True)
- Tensorflow version (GPU?): 2.2.0 (True)
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: No
### Who can help
@sgugger As I saw you were the one who worked on the PR implementing Funnel Transformer
## Information
Model I am using: Funnel Transformer
The problem arises when using:
* [ o ] the official example scripts: (give details below)
* [ x ] my own modified scripts:
Only when enabling the mixed precision flag. I am now training the model without it, but I had to lower the batch size, thus increasing the training time.
I have to mention that I just fine-tuned a `roberta-base` model using `fp16=True` and `fp16_opt_level='O1'`, thus nvidia APEX is properly installed/configured.
The tasks I am working on is:
* [ o ] an official GLUE/SQUaD task: (give the name)
* [ x ] my own task or dataset:
Basically I am trying to fine tune `FunnelForSequenceClassification` using my own custom data-set:
```python
# some code to load data from CSV
# ...
# wrapper around PyTorch for holding datasets
class IMDbDataset(torch.utils.data.Dataset):
# same code as in the Huggingface docs
# ...
# load tokenizer
tokenizer = FunnelTokenizer.from_pretrained('funnel-transformer/large-base')
# tokenize texts
train_encodings = tokenizer(train_texts, truncation=True, padding=True)
val_encodings = tokenizer(val_texts, truncation=True, padding=True)
test_encodings = tokenizer(test_texts, truncation=True, padding=True)
train_dataset = IMDbDataset(train_encodings, train_labels)
val_dataset = IMDbDataset(val_encodings, val_labels)
test_dataset = IMDbDataset(test_encodings, test_labels)
# training args used
training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=3, # total number of training epochs
per_device_train_batch_size=16, # batch size per device during training
per_device_eval_batch_size=64, # batch size for evaluation
#learning_rate=35e-6,
weight_decay=0.01, # strength of weight decay
warmup_steps=500, # number of warmup steps for learning rate scheduler
logging_dir='./logs', # directory for storing logs
logging_steps=10,
fp16=True,
fp16_opt_level='O1' # here I tried both O1 and O2 with the same result
)
model = FunnelForSequenceClassification.from_pretrained('funnel-transformer/large-base',
return_dict=True,
num_labels=max(train_labels)+1)
trainer = Trainer(
model=model, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
eval_dataset=val_dataset # evaluation dataset
)
trainer.train()
trainer.save_model('funnel')
```
## To reproduce
Steps to reproduce the behavior:
1. Run script
2. Wait for script to reach the training part
Stacktrace:
```
File "funnel.py", line 89, in <module>
trainer.train()
File "/root/anaconda/envs/ai/lib/python3.7/site-packages/transformers/trainer.py", line 741, in train
tr_loss += self.training_step(model, inputs)
File "/root/anaconda/envs/ai/lib/python3.7/site-packages/transformers/trainer.py", line 1046, in training_step
loss = self.compute_loss(model, inputs)
File "/root/anaconda/envs/ai/lib/python3.7/site-packages/transformers/trainer.py", line 1070, in compute_loss
outputs = model(**inputs)
File "/root/anaconda/envs/ai/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/root/anaconda/envs/ai/lib/python3.7/site-packages/transformers/modeling_funnel.py", line 1263, in forward
return_dict=return_dict,
File "/root/anaconda/envs/ai/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/root/anaconda/envs/ai/lib/python3.7/site-packages/transformers/modeling_funnel.py", line 950, in forward
return_dict=return_dict,
File "/root/anaconda/envs/ai/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/root/anaconda/envs/ai/lib/python3.7/site-packages/transformers/modeling_funnel.py", line 655, in forward
layer_output = layer(query, key, value, attention_inputs, output_attentions=output_attentions)
File "/root/anaconda/envs/ai/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/root/anaconda/envs/ai/lib/python3.7/site-packages/transformers/modeling_funnel.py", line 602, in forward
attn = self.attention(query, key, value, attention_inputs, output_attentions=output_attentions)
File "/root/anaconda/envs/ai/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/root/anaconda/envs/ai/lib/python3.7/site-packages/transformers/modeling_funnel.py", line 548, in forward
content_score = torch.einsum("bind,bjnd->bnij", q_head + r_w_bias, k_head)
File "/root/anaconda/envs/ai/lib/python3.7/site-packages/torch/functional.py", line 292, in einsum
return _VF.einsum(equation, operands)
RuntimeError: Expected object of scalar type Float but got scalar type Half for argument #2 'mat2' in call to _th_bmm
```
[This](https://github.com/NVIDIA/apex/issues/302#issuecomment-552198322) seems like a very similar issue.
## Expected behavior
We should be able to train the model with mixed precision to use VRAM more efficiently. | 09-24-2020 16:19:01 | 09-24-2020 16:19:01 | Thanks for flagging!
I think I have found the cause for this. Model runs fine on my end in half precision when it's applied.<|||||>Thanks for the quick fix, but unfortunately I checked out that branch (and installed from source) and I still get the issue at this line: https://github.com/huggingface/transformers/blob/624cb37b38574566522072c19659b4cff60b98f9/src/transformers/modeling_funnel.py#L544
Edit (attached new stacktrace):
```python
File "funnel.py", line 90, in <module>
trainer.train()
File "/root/transformers/src/transformers/trainer.py", line 743, in train
tr_loss += self.training_step(model, inputs)
File "/root/transformers/src/transformers/trainer.py", line 1050, in training_step
loss = self.compute_loss(model, inputs)
File "/root/transformers/src/transformers/trainer.py", line 1074, in compute_loss
outputs = model(**inputs)
File "/root/anaconda/envs/ai/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/root/transformers/src/transformers/modeling_funnel.py", line 1269, in forward
return_dict=return_dict,
File "/root/anaconda/envs/ai/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/root/transformers/src/transformers/modeling_funnel.py", line 955, in forward
return_dict=return_dict,
File "/root/anaconda/envs/ai/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/root/transformers/src/transformers/modeling_funnel.py", line 651, in forward
layer_output = layer(query, key, value, attention_inputs, output_attentions=output_attentions)
File "/root/anaconda/envs/ai/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/root/transformers/src/transformers/modeling_funnel.py", line 598, in forward
attn = self.attention(query, key, value, attention_inputs, output_attentions=output_attentions)
File "/root/anaconda/envs/ai/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/root/transformers/src/transformers/modeling_funnel.py", line 544, in forward
content_score = torch.einsum("bind,bjnd->bnij", q_head + r_w_bias, k_head)
File "/root/anaconda/envs/ai/lib/python3.7/site-packages/torch/functional.py", line 292, in einsum
return _VF.einsum(equation, operands)
RuntimeError: Expected object of scalar type Float but got scalar type Half for argument #2 'mat2' in call to _th_bmm
```<|||||>What got me past this error was casting `.float()` on all tensor arguments to `torch.einsum()`, but then I ran into this issue:
```python
File "funnel.py", line 90, in <module>
trainer.train()
File "/root/transformers/src/transformers/trainer.py", line 743, in train
tr_loss += self.training_step(model, inputs)
File "/root/transformers/src/transformers/trainer.py", line 1062, in training_step
scaled_loss.backward()
File "/root/anaconda/envs/ai/lib/python3.7/site-packages/torch/tensor.py", line 198, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/root/anaconda/envs/ai/lib/python3.7/site-packages/torch/autograd/__init__.py", line 100, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: expected dtype Float but got dtype Long (validate_dtype at /opt/conda/conda-bld/pytorch_1591914880026/work/aten/src/ATen/native/TensorIterator.cpp:143)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x4e (0x7f49c1e64b5e in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libc10.so)
frame #1: at::TensorIterator::compute_types() + 0xce3 (0x7f49ea00c113 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
frame #2: at::TensorIterator::build() + 0x44 (0x7f49ea00eaf4 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
frame #3: at::native::mse_loss_backward_out(at::Tensor&, at::Tensor const&, at::Tensor const&, at::Tensor const&, long) + 0x193 (0x7f49e9e5c043 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
frame #4: <unknown function> + 0xdfc047 (0x7f49c30ba047 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cuda.so)
frame #5: at::native::mse_loss_backward(at::Tensor const&, at::Tensor const&, at::Tensor const&, long) + 0x172 (0x7f49e9e64782 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
frame #6: <unknown function> + 0xdfc2ff (0x7f49c30ba2ff in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cuda.so)
frame #7: <unknown function> + 0xe20c26 (0x7f49ea294c26 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
frame #8: <unknown function> + 0x27fd3cb (0x7f49ebc713cb in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
frame #9: <unknown function> + 0xe20c26 (0x7f49ea294c26 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
frame #10: torch::autograd::generated::MseLossBackward::apply(std::vector<at::Tensor, std::allocator<at::Tensor> >&&) + 0x1f7 (0x7f49eba78e67 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
frame #11: <unknown function> + 0x2ae7df5 (0x7f49ebf5bdf5 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
frame #12: torch::autograd::Engine::evaluate_function(std::shared_ptr<torch::autograd::GraphTask>&, torch::autograd::Node*, torch::autograd::InputBuffer&) + 0x16f3 (0x7f49ebf590f3 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
frame #13: torch::autograd::Engine::thread_main(std::shared_ptr<torch::autograd::GraphTask> const&, bool) + 0x3d2 (0x7f49ebf59ed2 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
frame #14: torch::autograd::Engine::thread_init(int) + 0x39 (0x7f49ebf52549 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
frame #15: torch::autograd::python::PythonEngine::thread_init(int) + 0x38 (0x7f49ef4a2638 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #16: <unknown function> + 0xc819d (0x7f49f1cfd19d in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/../../../.././libstdc++.so.6)
frame #17: <unknown function> + 0x76db (0x7f4a0a4186db in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #18: clone + 0x3f (0x7f4a0a141a3f in /lib/x86_64-linux-gnu/libc.so.6)
```<|||||>Okay, it turns out the first issue with `torch.einsum` was PyTorch's fault as the function did not accept mixed precision types. After updating it to `1.6.0` and recompiling nvidia APEX, I'm stuck with:
```python
File "funnel.py", line 90, in <module>
trainer.train()
File "/root/transformers/src/transformers/trainer.py", line 743, in train
tr_loss += self.training_step(model, inputs)
File "/root/transformers/src/transformers/trainer.py", line 1059, in training_step
self.scaler.scale(loss).backward()
File "/root/anaconda/envs/ai/lib/python3.7/site-packages/torch/tensor.py", line 185, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/root/anaconda/envs/ai/lib/python3.7/site-packages/torch/autograd/__init__.py", line 127, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: Found dtype Long but expected Float
Exception raised from compute_types at /opt/conda/conda-bld/pytorch_1595629403081/work/aten/src/ATen/native/TensorIterator.cpp:183 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x4d (0x7f6b6fede77d in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libc10.so)
frame #1: at::TensorIterator::compute_types(at::TensorIteratorConfig const&) + 0x259 (0x7f6ba2f35ca9 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
frame #2: at::TensorIterator::build(at::TensorIteratorConfig&) + 0x6b (0x7f6ba2f3944b in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
frame #3: at::TensorIterator::TensorIterator(at::TensorIteratorConfig&) + 0xdd (0x7f6ba2f39abd in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
frame #4: at::native::mse_loss_backward_out(at::Tensor&, at::Tensor const&, at::Tensor const&, at::Tensor const&, long) + 0x18a (0x7f6ba2d9e71a in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
frame #5: <unknown function> + 0xd1d610 (0x7f6b71061610 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cuda.so)
frame #6: at::native::mse_loss_backward(at::Tensor const&, at::Tensor const&, at::Tensor const&, long) + 0x90 (0x7f6ba2d9b140 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
frame #7: <unknown function> + 0xd1d6b0 (0x7f6b710616b0 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cuda.so)
frame #8: <unknown function> + 0xd3f936 (0x7f6b71083936 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cuda.so)
frame #9: at::mse_loss_backward(at::Tensor const&, at::Tensor const&, at::Tensor const&, long) + 0x119 (0x7f6ba325dda9 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
frame #10: <unknown function> + 0x2b5e8c9 (0x7f6ba4eb68c9 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
frame #11: <unknown function> + 0x7f60d6 (0x7f6ba2b4e0d6 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
frame #12: at::mse_loss_backward(at::Tensor const&, at::Tensor const&, at::Tensor const&, long) + 0x119 (0x7f6ba325dda9 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
frame #13: torch::autograd::generated::MseLossBackward::apply(std::vector<at::Tensor, std::allocator<at::Tensor> >&&) + 0x1af (0x7f6ba4df252f in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
frame #14: <unknown function> + 0x30d1017 (0x7f6ba5429017 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
frame #15: torch::autograd::Engine::evaluate_function(std::shared_ptr<torch::autograd::GraphTask>&, torch::autograd::Node*, torch::autograd::InputBuffer&, std::shared_ptr<torch::autograd::ReadyQueue> const&) + 0x1400 (0x7f6ba5424860 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
frame #16: torch::autograd::Engine::thread_main(std::shared_ptr<torch::autograd::GraphTask> const&) + 0x451 (0x7f6ba5425401 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
frame #17: torch::autograd::Engine::thread_init(int, std::shared_ptr<torch::autograd::ReadyQueue> const&, bool) + 0x89 (0x7f6ba541d579 in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
frame #18: torch::autograd::python::PythonEngine::thread_init(int, std::shared_ptr<torch::autograd::ReadyQueue> const&, bool) + 0x4a (0x7f6ba974c99a in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #19: <unknown function> + 0xc819d (0x7f6bac27e19d in /root/anaconda/envs/ai/lib/python3.7/site-packages/torch/lib/../../../.././libstdc++.so.6)
frame #20: <unknown function> + 0x76db (0x7f6bc49996db in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #21: clone + 0x3f (0x7f6bc46c2a3f in /lib/x86_64-linux-gnu/libc.so.6)
```<|||||>Due to reducing my data-set to be able to load it faster and check various fixes, I was accidentally passing only one training label to my classifier. After fixing this the model started training; however, `loss` is always reported as `nan`.
Is this an issue? I double checked and running without mixed precision mode correctly reports the loss and I can see it decreasing between log statements.<|||||>I can reproduce the losses being at `nan` and will try to investigate the source of this bug. Note that starting in PyTorch 1.6, apex is not used anymore for mixed precision training since PyTorch has native support for it.<|||||>I have found the reason (and why I wasn't managing to fine-tune a model on some GLUE task yesterday). Turns out I was matching exactly the implementation of the authors **but** in transformers, we put 1 in attentions masks for tokens not masked... stupid me.<|||||>Good thing to know I don't have to build APEX next time ;)
I just pulled the latest commit from your branch and can confirm loss is no longer `nan`.
Great job and thanks for assistance! |
transformers | 7,370 | open | Add new PET Model | # 🌟 New model addition
## Model description
A new article just landed on ArXiv: https://arxiv.org/pdf/2009.07118.pdf
An implementation will eventually be available at https://github.com/timoschick/pet
Authors are @timoschick and Hinrich Schutze.
I didn't see any pre-trained models linked on the GitHub README, but the model is pretty small and easy to train.
Update: the code is now available as open source, and it can presumably use pretrained BERT models (I do not know how this works, but the GitHub page states that the roberta-large pretrained model can be used). The model also works unsupervised.
## Open source status
* [x] the model implementation is available: (give details)
* [x] the model weights are available: (give details)
* [x] who are the authors: (mention them, if possible by @gh-username)
| 09-24-2020 15:42:28 | 09-24-2020 15:42:28 | The readme in the repo still says this:
> :rotating_light: This repository does not yet contain the modifications to PET introduced in "[It's Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners](https://arxiv.org/abs/2009.07118)" but will be updated soon.<|||||>Looks like the authors updated the repo and added the necessary model.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>While I don't have the time to add PET to this repository myself, I'm always happy to help if someone wants to take it on :) |
transformers | 7,369 | closed | The absence of source/target language parameters when using MBart in Summarization example | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.2.0
- Platform: Ubuntu 16.04
- Python version: 3.7
- PyTorch version (GPU?): 1.6.0+cu101
- Tensorflow version (GPU?): 1.15
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
Summarization: @sshleifer
## Information
Model I am using (Bert, XLNet ...): MBart
The problem arises when using:
* [x] the official example scripts: (give details below)
I'm following the example for finetuning a summarization model.
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
xsum
## To reproduce
Steps to reproduce the behavior:
1. Using the script [finetune.sh](https://github.com/huggingface/transformers/blob/78387cc63e/examples/seq2seq/finetune.sh)
2. Keep all the default parameters
3. Add --model_name_or_path facebook/mbart-large-cc25 --data_dir datasets/xsum --src_lang en_XX --tgt_lang en_XX
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```
Validation sanity check: 0it [00:00, ?it/s]Traceback (most recent call last):
File "/home/shining/code/seq2seq/finetune.py", line 440, in <module>
main(args)
File "/home/shining/code/seq2seq/finetune.py", line 415, in main
logger=logger,
File "/home/shining/code/seq2seq/lightning_base.py", line 385, in generic_train
trainer.fit(model)
File "/home/shining/miniconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/states.py", line 48, in wrapped_fn
result = fn(self, *args, **kwargs)
File "/home/shining/miniconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1073, in fit
results = self.accelerator_backend.train(model)
File "/home/shining/miniconda3/lib/python3.7/site-packages/pytorch_lightning/accelerators/gpu_backend.py", line 51, in train
results = self.trainer.run_pretrain_routine(model)
File "/home/shining/miniconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1224, in run_pretrain_routine
self._run_sanity_check(ref_model, model)
File "/home/shining/miniconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1257, in _run_sanity_check
eval_results = self._evaluate(model, self.val_dataloaders, max_batches, False)
File "/home/shining/miniconda3/lib/python3.7/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 305, in _evaluate
for batch_idx, batch in enumerate(dataloader):
File "/home/shining/miniconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 363, in __next__
data = self._next_data()
File "/home/shining/miniconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 989, in _next_data
return self._process_data(data)
File "/home/shining/miniconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1014, in _process_data
data.reraise()
File "/home/shining/miniconda3/lib/python3.7/site-packages/torch/_utils.py", line 395, in reraise
raise self.exc_type(msg)
KeyError: Caught KeyError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/shining/miniconda3/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 185, in _worker_loop
data = fetcher.fetch(index)
File "/home/shining/miniconda3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
return self.collate_fn(data)
File "/home/shining/code/seq2seq/utils.py", line 232, in collate_fn
add_prefix_space=self.add_prefix_space,
File "/home/shining/miniconda3/lib/python3.7/site-packages/transformers/tokenization_mbart.py", line 236, in prepare_seq2seq_batch
self.set_src_lang_special_tokens(src_lang)
File "/home/shining/miniconda3/lib/python3.7/site-packages/transformers/tokenization_mbart.py", line 268, in set_src_lang_special_tokens
self.cur_lang_code = self.lang_code_to_id[src_lang]
KeyError: None
```
## Expected behavior
Because the summarization example uses the pytorch-lightning backend, I could only track the bug to the collate_fn function in Seq2SeqDataset. I noticed that the parameter self.src_lang=None.
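For reference, the language codes presumably need to reach the tokenizer call, roughly along these lines (a sketch, not the actual fix):
```python
batch = tokenizer.prepare_seq2seq_batch(
    src_texts,
    tgt_texts=tgt_texts,
    src_lang="en_XX",
    tgt_lang="en_XX",
)
```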
<!-- A clear and concise description of what you would expect to happen. -->
| 09-24-2020 15:26:52 | 09-24-2020 15:26:52 | add
```
self.dataset_kwargs["src_lang"] = hparams.src_lang
self.dataset_kwargs["tgt_lang"] = hparams.tgt_lang
```
here https://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune.py#L70 |
transformers | 7,368 | closed | Formatter | Add two new methods to the logging utility to automatically set the format like it is done in the `examples/` folder. | 09-24-2020 14:30:05 | 09-24-2020 14:30:05 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7368?src=pr&el=h1) Report
> Merging [#7368](https://codecov.io/gh/huggingface/transformers/pull/7368?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0cffa424f855cbbd657c4f1b57f94a51b7aa8d6d?el=desc) will **decrease** coverage by `1.34%`.
> The diff coverage is `22.22%`.
[](https://codecov.io/gh/huggingface/transformers/pull/7368?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #7368 +/- ##
==========================================
- Coverage 76.51% 75.17% -1.35%
==========================================
Files 181 181
Lines 34851 34860 +9
==========================================
- Hits 26666 26205 -461
- Misses 8185 8655 +470
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7368?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/utils/logging.py](https://codecov.io/gh/huggingface/transformers/pull/7368/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy91dGlscy9sb2dnaW5nLnB5) | `79.31% <22.22%> (-6.59%)` | :arrow_down: |
| [src/transformers/tokenization\_fsmt.py](https://codecov.io/gh/huggingface/transformers/pull/7368/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZnNtdC5weQ==) | `20.34% <0.00%> (-74.90%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/7368/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.32% <0.00%> (-73.63%)` | :arrow_down: |
| [src/transformers/modeling\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7368/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sYXlvdXRsbS5weQ==) | `25.06% <0.00%> (-69.40%)` | :arrow_down: |
| [src/transformers/tokenization\_phobert.py](https://codecov.io/gh/huggingface/transformers/pull/7368/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcGhvYmVydC5weQ==) | `21.80% <0.00%> (-61.66%)` | :arrow_down: |
| [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7368/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `42.10% <0.00%> (-57.90%)` | :arrow_down: |
| [src/transformers/tokenization\_bert\_generation.py](https://codecov.io/gh/huggingface/transformers/pull/7368/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydF9nZW5lcmF0aW9uLnB5) | `39.28% <0.00%> (-55.36%)` | :arrow_down: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/7368/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `13.34% <0.00%> (-42.28%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7368/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.38% <0.00%> (-29.59%)` | :arrow_down: |
| [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/7368/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `62.69% <0.00%> (-28.58%)` | :arrow_down: |
| ... and [18 more](https://codecov.io/gh/huggingface/transformers/pull/7368/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7368?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7368?src=pr&el=footer). Last update [0cffa42...ee18eb2](https://codecov.io/gh/huggingface/transformers/pull/7368?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 7,367 | closed | Finetuning Pegasus for summarization task | I have been trying to fine-tune Pegasus for a summarization task, and the training worked fine without any errors.
But when I tried to generate summaries I was only getting an empty list as output.
I am not able to figure it out; is anything wrong with my fine-tuning script?
```py
def label_smoothed_nll_loss(lprobs, target, epsilon, ignore_index=-100):
"""From fairseq"""
if target.dim() == lprobs.dim() - 1:
target = target.unsqueeze(-1)
nll_loss = -lprobs.gather(dim=-1, index=target)
smooth_loss = -lprobs.sum(dim=-1, keepdim=True)
if ignore_index is not None:
pad_mask = target.eq(ignore_index)
nll_loss.masked_fill_(pad_mask, 0.0)
smooth_loss.masked_fill_(pad_mask, 0.0)
else:
nll_loss = nll_loss.squeeze(-1)
smooth_loss = smooth_loss.squeeze(-1)
nll_loss = nll_loss.sum() # mean()? Scared to break other math.
smooth_loss = smooth_loss.sum()
eps_i = epsilon / lprobs.size(-1)
loss = (1.0 - epsilon) * nll_loss + eps_i * smooth_loss
return loss, nll_loss
no_decay = ["bias", "LayerNorm.weight"]
optimizer_grouped_parameters = [
{
"params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
"weight_decay": weight_decay,
},
{"params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)], "weight_decay": 0.0},
]
optimizer = AdamW(optimizer_grouped_parameters, lr=learning_rate, eps=adam_epsilon)
scheduler = get_linear_schedule_with_warmup(
optimizer, num_warmup_steps=warmup_steps, num_training_steps=t_total
)
pad_token_id = tokenizer.pad_token_id
epochs = 5
for epoc in range(epochs):
t0 = time.time()
print("")
print('======== Epoch {} ========'.format(epoc+1))
model.train()
total_train_loss = 0
for i,batch in enumerate(train_dataset):
title = []
body = []
for item in batch['title'].numpy():
title.append(item.decode('utf-8'))
for item in batch['body'].numpy():
body.append(item.decode('utf-8'))
batch_tokens = tokenizer.prepare_seq2seq_batch(body,title,max_length=320,max_target_length=60,truncation=True,padding='max_length').to(device)
decoder_input_ids = shift_tokens_right(batch_tokens['labels'], pad_token_id)
outputs = model(batch_tokens['input_ids'], attention_mask=batch_tokens['attention_mask'], decoder_input_ids=decoder_input_ids, use_cache=False)
lm_logits = outputs[0]
lprobs = torch.nn.functional.log_softmax(lm_logits, dim=-1)
loss, nll_loss = label_smoothed_nll_loss(
lprobs, batch_tokens['labels'],0.1, ignore_index=pad_token_id
)
total_train_loss += loss.item()
optimizer.zero_grad()
loss.backward()
#torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
optimizer.step()
scheduler.step()
```
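For reference, a minimal, untested sketch of the generation step; the sample text, beam size, and `max_length` below are placeholders rather than part of the script above.
```py
# Hypothetical generation step (illustrative only); reuses `model`, `tokenizer`, `device` from above.
model.eval()
sample = tokenizer.prepare_seq2seq_batch(
    ["some article text ..."], max_length=320, truncation=True, padding='max_length'
).to(device)
generated_ids = model.generate(
    sample['input_ids'], attention_mask=sample['attention_mask'], num_beams=4, max_length=60
)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
```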
| 09-24-2020 14:21:02 | 09-24-2020 14:21:02 | notebook link :
https://colab.research.google.com/drive/1c7G1WXE6mgl2rwA-VR7q8DAqmbVqB62m?usp=sharing#scrollTo=VRzl54I-5isw <|||||>
<|||||>Facing the same issue. A reply on this will be highly appreciated.<|||||>This might help! Though implementation documentation is in tensorflow
https://github.com/google-research/pegasus#finetuning-on-downstream-datasets<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 7,366 | closed | test_rag_sequence_generate_batch failing on CUDA | https://github.com/huggingface/transformers/runs/1157849932?check_suite_focus=true
```
> self.assertEqual(output_text_1, EXPECTED_OUTPUT_TEXT_1)
E AssertionError: 'The song peaked at number 17 in the' != '"I Know Him So Well"'
``` | 09-24-2020 13:50:35 | 09-24-2020 13:50:35 | Yeah, If you run on CPU the test passes - I added a comment that the test fails on GPU: https://github.com/huggingface/transformers/blob/9e68d075a4100906509170498480823e7e61874a/tests/test_modeling_rag.py#L659
Beam search seems very sensible to small changes.<|||||>you mean sensitive, but OK. Maybe we should skip the test on CUDA so that slow CI isn't broken?<|||||>Actually, I fixed all `RagSequence` related bugs today and added better integration tests that should all pass on both CPU and GPU => so I think it's fine now.
See https://github.com/huggingface/transformers/blob/cf1c88e0921243e760d306e63a5938e1bac880f3/tests/test_modeling_rag.py#L664 |
transformers | 7,365 | closed | Fixing case in which `Trainer` hung while saving model in distributed training | As found thanks to the great @mfuntowicz , the call to `store_flos` in `Trainer` can hang indefinitely, as it was only executed in the main thread and in some cases the other threads were already past this point. This PR moves this call in order to avoid this behaviour. | 09-24-2020 13:45:28 | 09-24-2020 13:45:28 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7365?src=pr&el=h1) Report
> Merging [#7365](https://codecov.io/gh/huggingface/transformers/pull/7365?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/28cf873036d078b47fb9dd38ac3421a7c874da44?el=desc) will **increase** coverage by `2.52%`.
> The diff coverage is `50.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/7365?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #7365 +/- ##
==========================================
+ Coverage 76.58% 79.11% +2.52%
==========================================
Files 181 181
Lines 34828 34827 -1
==========================================
+ Hits 26674 27552 +878
+ Misses 8154 7275 -879
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7365?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/7365/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `55.70% <50.00%> (+0.08%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7365/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.14% <0.00%> (-72.41%)` | :arrow_down: |
| [src/transformers/modeling\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7365/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yYWcucHk=) | `25.39% <0.00%> (-51.59%)` | :arrow_down: |
| [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7365/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7365/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `69.10% <0.00%> (-29.80%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7365/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `88.37% <0.00%> (-4.87%)` | :arrow_down: |
| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7365/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `81.81% <0.00%> (-4.55%)` | :arrow_down: |
| [src/transformers/configuration\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7365/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JhZy5weQ==) | `97.77% <0.00%> (-2.23%)` | :arrow_down: |
| [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7365/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `82.62% <0.00%> (-1.41%)` | :arrow_down: |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7365/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.93% <0.00%> (-0.34%)` | :arrow_down: |
| ... and [15 more](https://codecov.io/gh/huggingface/transformers/pull/7365/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7365?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7365?src=pr&el=footer). Last update [28cf873...3213d34](https://codecov.io/gh/huggingface/transformers/pull/7365?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Reminder that we need to add some CI infra (and tests) for multi-gpu and/or multi-node setups |
transformers | 7,364 | closed | Getting "TypeError: forward() got multiple values for argument 'attention_mask'" when replacing pytorch_transformers with transformers | # 📚 Migration
## Information
<!-- Important information -->
Model I am using (Bert):
Language I am using the model on (Japanese):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [* ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [* ] my own task or dataset: (give details below)
## Details
<!-- A clear and concise description of the migration issue.
If you have code snippets, please provide it here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.
-->
This is the complaint from python:
/content/train_extractive.py in train_ext(args, device_id)
225 train_multi_ext(args)
226 else:
--> 227 train_single_ext(args, device_id)
228
229
/content/train_extractive.py in train_single_ext(args, device_id)
267
268 trainer = build_trainer(args, device_id, model, optim)
--> 269 trainer.train(train_iter_fct, args.train_steps)
/content/trainer_ext.py in train(self, train_iter_fct, train_steps, valid_iter_fct, valid_steps)
150 self._gradient_accumulation(
151 true_batchs, normalization, total_stats,
--> 152 report_stats)
153
154 report_stats = self._maybe_report_training(
/content/trainer_ext.py in _gradient_accumulation(self, true_batchs, normalization, total_stats, report_stats)
393 mask_cls = batch.mask_cls
394
--> 395 sent_scores, mask = self.model(src, segs, clss, mask, mask_cls)
396
397 loss = self.loss(sent_scores, labels.float())
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
720 result = self._slow_forward(*input, **kwargs)
721 else:
--> 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
/content/model_builder.py in forward(self, src, segs, clss, mask_src, mask_cls)
176 print (type(mask_src))
177 print (mask_src)
--> 178 top_vec = self.bert(src, segs, mask_src)
179 sents_vec = top_vec[torch.arange(top_vec.size(0)).unsqueeze(1), clss]
180 sents_vec = sents_vec * mask_cls[:, :, None].float()
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
720 result = self._slow_forward(*input, **kwargs)
721 else:
--> 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
/content/model_builder.py in forward(self, x, segs, mask)
126 def forward(self, x, segs, mask):
127 if(self.finetune):
--> 128 top_vec, _ = self.model(x, segs, attention_mask=mask)
129 else:
130 self.eval()
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
720 result = self._slow_forward(*input, **kwargs)
721 else:
--> 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
TypeError: forward() got multiple values for argument 'attention_mask'
***
I get the above complaint after replacing pytorch-transformers with transformers.
from pytorch_transformers import BertModel
->
from transformers import BertForMaskedLM
I have to make this change because I am importing the Japanese model, while the original code calling BertModel only caters to the English model.
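For reference, a minimal sketch (using the variable names from the trace above) of the keyword-argument style of calling that `transformers` expects; this is only an illustration, not the original code:
```py
# Sketch: passing segs and mask as keywords avoids clashing with the changed positional order,
# and the first element of the returned tuple is the sequence output.
top_vec = self.model(input_ids=x, token_type_ids=segs, attention_mask=mask)[0]
```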
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
python: can't open file 'transformers-cli': [Errno 2] No such file or directory
- `transformers` version:
- Platform: ubuntu colab
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0+cu101
- Tensorflow version (GPU?): 2.3
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?:
<!-- IMPORTANT: which version of the former library do you use? -->
* `pytorch-transformers` or `pytorch-pretrained-bert` version (or branch):
pytorch-transformers
## Checklist
- [ *] I have read the migration guide in the readme.
([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers);
[pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers))
- [ *] I checked if a related official extension example runs on my machine.
| 09-24-2020 12:09:02 | 09-24-2020 12:09:02 | Hi, I believe this is the cause of your issue: https://huggingface.co/transformers/migration.html#positional-order-of-some-models-keywords-inputs-attention-mask-token-type-ids-changed<|||||>Thanks. I agree. Can you suggest where I should fix in the codes given in the error log?<|||||>I can't really see where's your code, do you mind pasting a snippet that reproduces the error? (using backticks \`\`\` to format it)<|||||>Thanks. I highlight the codes given in the error log. In case, it is of no use. Please let me know how I should dig up the relevant portion.
/content/train_extractive.py in train_ext(args, device_id)
```
225 train_multi_ext(args)
226 else:
--> 227 train_single_ext(args, device_id)
228
229
```
/content/train_extractive.py in train_single_ext(args, device_id)
```
267
268 trainer = build_trainer(args, device_id, model, optim)
--> 269 trainer.train(train_iter_fct, args.train_steps)
```
/content/trainer_ext.py in train(self, train_iter_fct, train_steps, valid_iter_fct, valid_steps)
```
150 self._gradient_accumulation(
151 true_batchs, normalization, total_stats,
--> 152 report_stats)
153
154 report_stats = self._maybe_report_training(
```
/content/trainer_ext.py in _gradient_accumulation(self, true_batchs, normalization, total_stats, report_stats)
```
393 mask_cls = batch.mask_cls
394
--> 395 sent_scores, mask = self.model(src, segs, clss, mask, mask_cls)
396
397 loss = self.loss(sent_scores, labels.float())
```
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
```
720 result = self._slow_forward(*input, **kwargs)
721 else:
--> 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
```
/content/model_builder.py in forward(self, src, segs, clss, mask_src, mask_cls)
```
176 print (type(mask_src))
177 print (mask_src)
--> 178 top_vec = self.bert(src, segs, mask_src)
179 sents_vec = top_vec[torch.arange(top_vec.size(0)).unsqueeze(1), clss]
180 sents_vec = sents_vec * mask_cls[:, :, None].float()
```
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
```
720 result = self._slow_forward(*input, **kwargs)
721 else:
--> 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
```
/content/model_builder.py in forward(self, x, segs, mask)
```
126 def forward(self, x, segs, mask):
127 if(self.finetune):
--> 128 top_vec, _ = self.model(x, segs, attention_mask=mask)
129 else:
130 self.eval()
```
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
```
720 result = self._slow_forward(*input, **kwargs)
721 else:
--> 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
```
<|||||>I tried replacing all the parameters in
```
--> 128 top_vec, _ = self.model(x, segs, attention_mask=mask)
```
of /content/model_builder.py in forward(self, x, segs, mask)
to
```
--> 128 top_vec, _ = self.model(input_ids=x, token_type_ids=segs, attention_mask=mask)
```
Now the changed line gives this error:
ValueError: not enough values to unpack (expected 2, got 1)
Now I am left with no clue.
<|||||>I removed the "_" from the returned value of the model in
```
--> 128 top_vec, _ = self.model(input_ids=x, token_type_ids=segs, attention_mask=mask)
```
That got rid of the error: ValueError: not enough values to unpack (expected 2, got 1)
But now I get the following complaint from this line:
```
178 print (mask_src)
179 top_vec = self.bert(src, segs, mask_src)
--> 180 sents_vec = top_vec[torch.arange(top_vec.size(0)).unsqueeze(1), clss]
181 sents_vec = sents_vec * mask_cls[:, :, None].float()
182 sent_scores = self.ext_layer(sents_vec, mask_cls).squeeze(-1)
```
of /content/model_builder.py in forward(self, src, segs, clss, mask_src, mask_cls):
AttributeError: 'tuple' object has no attribute 'size'
So the returned value of bert has changed. bert is an instance of BertForMaskedLM.from_pretrained('cl-tohoku/bert-base-japanese-whole-word-masking'). How do I get back the size of the first returned value of a pretrained model, like in the old pytorch_transformers?
<|||||>Instead of simply removing the `_` value, which will not unpack the tuple anymore, you can get the first value of the tuple (which has a single value in your case):
```py
top_vec = self.model(x, token_type_ids=segs, attention_mask=mask)[0]
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,363 | closed | Check config type using `type` instead of `isinstance` | This seems like the textbook case where using `type` should be preferred over using `isinstance`.
Thanks to @hjptriplebee for showing the way in https://github.com/huggingface/transformers/pull/6870, this PR does the same for all remaining cases. | 09-24-2020 10:52:48 | 09-24-2020 10:52:48 | In that case we can even remove the for loops entirely, no?<|||||>I agree with @julien-c, we can directly check if `type(config)` is in the dict.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7363?src=pr&el=h1) Report
> Merging [#7363](https://codecov.io/gh/huggingface/transformers/pull/7363?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0cffa424f855cbbd657c4f1b57f94a51b7aa8d6d?el=desc) will **increase** coverage by `0.22%`.
> The diff coverage is `43.52%`.
[](https://codecov.io/gh/huggingface/transformers/pull/7363?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #7363 +/- ##
==========================================
+ Coverage 76.51% 76.73% +0.22%
==========================================
Files 181 181
Lines 34851 34811 -40
==========================================
+ Hits 26666 26713 +47
+ Misses 8185 8098 -87
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7363?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7363/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `74.30% <25.00%> (+4.95%)` | :arrow_up: |
| [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7363/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `87.04% <55.00%> (+3.00%)` | :arrow_up: |
| [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7363/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `92.53% <100.00%> (ø)` | |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7363/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `83.37% <0.00%> (+0.24%)` | :arrow_up: |
| [src/transformers/tokenization\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7363/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `88.28% <0.00%> (+55.85%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7363?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7363?src=pr&el=footer). Last update [0cffa42...1833b45](https://codecov.io/gh/huggingface/transformers/pull/7363?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Done! @julien-c @sgugger <|||||>This is faster than catching a KeyError, that's what you do it that way? |
transformers | 7,362 | closed | Difference between tokenize chinese char | The `BertTokenizer` class has a parameter `tokenize_chinese_chars`, which defaults to True.
When I set it to False, I got a different result, as follows:
```
1. tokenize chinese char: ['任', '务']
2. not tokenize chinese char: ['任', '##务']
```
The code is as follows (任务 means "task" in English):
```
vocab_file = './resources/robert/vocab.txt'
bert_tokenizer1 = BertTokenizer(vocab_file, tokenize_chinese_chars=True)
bert_tokenizer2 = BertTokenizer(vocab_file, tokenize_chinese_chars=False)
text = '任务'
res1 = bert_tokenizer1.tokenize(text)
res2 = bert_tokenizer2.tokenize(text)
print('tokenize chinese char:', res1)
print('not tokenize chinese char:' ,res2)
```
If I use the default setting, I will get the first result. In that way, **nearly half of the vocab words will never be used** (like `'##务'`)!
Because we split all Chinese characters, `WordpieceTokenizer` never gets `'任务'` as input. It's weird.
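(Illustrative aside: one way to see where the split happens is to look at the basic tokenizer's output before WordPiece runs; the expected outputs below are an assumption based on the results above.)
```py
# Sketch only: BasicTokenizer runs before WordPiece, so with tokenize_chinese_chars=True
# every CJK character is already separated before WordPiece ever sees the text.
print(bert_tokenizer1.basic_tokenizer.tokenize(text))  # expected: ['任', '务']
print(bert_tokenizer2.basic_tokenizer.tokenize(text))  # expected: ['任务']
```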
Can somebody explain this setting for me ? | 09-24-2020 10:38:41 | 09-24-2020 10:38:41 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,361 | closed | ImportError: cannot import name 'AutoModelForTokenClassification' | # I was trying to use the model below but got an import error for AutoModelForTokenClassification
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("dbmdz/electra-large-discriminator-finetuned-conll03-english")
model = AutoModelForTokenClassification.from_pretrained("dbmdz/electra-large-discriminator-finetuned-conll03-english") | 09-24-2020 10:27:11 | 09-24-2020 10:27:11 | Hi, what is your transformers version? Can you run `transformers-cli env` and paste the result here?<|||||>Here is the output for transformers-cli env
2020-09-24 16:56:51.133770: W
tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not
load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1:
cannot open shared object file: No such file or directory
2020-09-24 16:56:51.133813: I
tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart
dlerror if you do not have a GPU set up on your machine.
WARNING:tensorflow:From
/home/gnaeshkharad/allenv/t5env/lib/python3.6/site-packages/transformers/commands/env.py:36:
is_gpu_available (from tensorflow.python.framework.test_util) is deprecated
and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
2020-09-24 16:56:53.097782: I
tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary
is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the
following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate
compiler flags.
2020-09-24 16:56:53.134585: I
tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency:
2099940000 Hz
2020-09-24 16:56:53.135040: I
tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5ffa320
initialized for platform Host (this does not guarantee that XLA will be
used). Devices:
2020-09-24 16:56:53.135065: I
tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device
(0): Host, Default Version
2020-09-24 16:56:53.137882: W
tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not
load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open
shared object file: No such file or directory
2020-09-24 16:56:53.137898: W
tensorflow/stream_executor/cuda/cuda_driver.cc:312] failed call to cuInit:
UNKNOWN ERROR (303)
2020-09-24 16:56:53.137915: I
tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does
not appear to be running on this host (4F4W0X2):
/proc/driver/nvidia/version does not exist
Copy-and-paste the text below in your GitHub issue and FILL OUT the two
last points.
- `transformers` version: 3.2.0
- Platform: Linux-5.4.0-47-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
On Thu, Sep 24, 2020 at 4:12 PM Lysandre Debut <[email protected]>
wrote:
> Hi, what is your transformers version? Can you run transformers-cli env
> and paste the result here?
<|||||>Could you run:
```
import transformers
transformers.is_torch_available()
```
Does this command return `True` 🤔<|||||>Got False
I have found the issue, it's in my environment...
My bad..Thanks for your time!!<|||||>Great, thanks @stefan-it!<|||||>> from transformers import AutoTokenizer, AutoModelForTokenClassification
Hi, I also met this issue; transformers.is_torch_available() gives me True, but I still can't import AutoModelForTokenClassification<|||||>> > from transformers import AutoTokenizer, AutoModelForTokenClassification
>
> Hi, I also met this issue; transformers.is_torch_available() gives me True, but I still can't import AutoModelForTokenClassification
updating transformers will fix it
<|||||>> Could you run:
>
> ```
> import transformers
> transformers.is_torch_available()
> ```
>
> Does this command return `True` 🤔
Hi @stefan-it, the following code snippet returns True but shows the same import error. |
transformers | 7,360 | closed | How to add some parameters in T5 (in T5Block layer) and initialize the original T5 parameters with pre-trained model and the new introduced parameters randomly? | Hi,
I want to add a new layer in T5Block.
At the same time, I want to initialize all the original parameters from the pre-trained T5 and the newly added ones randomly.
Can someone guide me on how that's possible or point me in the right direction?
Thanks
| 09-24-2020 08:05:53 | 09-24-2020 08:05:53 | Hi! The simplest way to do this would be simply to update the `modeling_t5.py` file. You should first clone the repo and install that version in your virtual environment:
```bash
git clone https://github.com/huggingface/transformers
cd transformers
pip install -e ".[dev]"
```
Right now if you load a T5 model it should tell you what layers it's ignoring:
```py
from transformers import T5Model
model = T5Model.from_pretrained("t5-small")
```
Results in:
```
Some weights of T5Model were not initialized from the model checkpoint at t5-small and are newly initialized: ['encoder.embed_tokens.weight', 'decoder.embed_tokens.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
Now if you edit the `modeling_t5.py` file, especially the `T5Block` as you mentioned:
```py
class T5Block(nn.Module):
def __init__(self, config, has_relative_attention_bias=False):
super().__init__()
self.is_decoder = config.is_decoder
self.layer = nn.ModuleList()
self.layer.append(T5LayerSelfAttention(config, has_relative_attention_bias=has_relative_attention_bias))
if self.is_decoder:
self.layer.append(T5LayerCrossAttention(config, has_relative_attention_bias=has_relative_attention_bias))
self.layer.append(T5LayerFF(config))
# ADDED LAYER BELOW
self.extra_layer = nn.Linear(200, 200)
```
I've simply added an extra layer here called "extra_layer". I haven't done anything with it in the forward, it's up to you to decide how to use it. If you now re-run the code, you should see the following:
```
Some weights of T5Model were not initialized from the model checkpoint at t5-small and are newly initialized: ['encoder.embed_tokens.weight', 'encoder.block.0.extra_layer.weight', 'encoder.block.0.extra_layer.bias', 'encoder.block.1.extra_layer.weight', 'encoder.block.1.extra_layer.bias', 'encoder.block.2.extra_layer.weight', 'encoder.block.2.extra_layer.bias', 'encoder.block.3.extra_layer.weight', 'encoder.block.3.extra_layer.bias', 'encoder.block.4.extra_layer.weight', 'encoder.block.4.extra_layer.bias', 'encoder.block.5.extra_layer.weight', 'encoder.block.5.extra_layer.bias', 'decoder.embed_tokens.weight', 'decoder.block.0.extra_layer.weight', 'decoder.block.0.extra_layer.bias', 'decoder.block.1.extra_layer.weight', 'decoder.block.1.extra_layer.bias', 'decoder.block.2.extra_layer.weight', 'decoder.block.2.extra_layer.bias', 'decoder.block.3.extra_layer.weight', 'decoder.block.3.extra_layer.bias', 'decoder.block.4.extra_layer.weight', 'decoder.block.4.extra_layer.bias', 'decoder.block.5.extra_layer.weight', 'decoder.block.5.extra_layer.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
Which means that all these new layers (all the extra_layer layers in T5Block) have been initialized randomly. The rest has been initialized from the checkpoint.
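As a purely illustrative, untested sketch, one way the new layer could then be used inside `T5Block.forward`; the dimensions would need to match whatever it is applied to, e.g. `nn.Linear(config.d_model, config.d_model)` if it acts on the hidden states:
```py
# Hypothetical forward-pass usage, assuming extra_layer maps d_model -> d_model.
hidden_states = hidden_states + self.extra_layer(hidden_states)  # e.g. a residual projection
```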
Hope this helps!<|||||>@LysandreJik Thanks, it helps a lot!<|||||>@LysandreJik @SuHe36 I want to change some model parameters in the T5 model.
So Is it possible to edit the model class in **modeling_t5.py** if I already installed the transformer library using pip in my machine? (**Without cloning from the repo in a virtual environment as you mentioned in the above comment**)<|||||>If you want to edit the model file then I heavily recommend you clone the repo and install it in an editable way `pip install -e <path_to_clone>`<|||||>@LysandreJik Thanks.
Actually, I tried to edit the **configuration_t5.py** file.
This is the code I want to run for model creation
```
import torch
from transformers_master.src.transformers import T5ForConditionalGeneration
model_name = "t5-small"
torch_device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = T5ForConditionalGeneration.from_pretrained(model_name).to(torch_device)
```
The default initialization parameters of the **T5Config** class are as follows
```
self,
vocab_size=32128,
d_model=512,
d_kv=64,
d_ff=2048,
num_layers=6,
num_decoder_layers=None,
num_heads=8,
relative_attention_num_buckets=32,
dropout_rate=0.1,
layer_norm_epsilon=1e-6,
initializer_factor=1.0,
feed_forward_proj="relu",
is_encoder_decoder=True,
use_cache=True,
pad_token_id=0,
eos_token_id=1,
**kwargs
```
I changed the **d_model, d_kv,d_ff**, and **num_heads** from this configuration_t5.py file as follows.
```
d_model=256,
d_kv=32,
d_ff=1024,
num_heads=6,
```
But after changing the above parameters, It showing the error given below
```
RuntimeError: Error(s) in loading state_dict for T5ForConditionalGeneration:
size mismatch for shared.weight: copying a param with shape torch.Size([32128, 512]) from checkpoint, the shape in current model is torch.Size([32128, 256]).
size mismatch for encoder.block.0.layer.0.SelfAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for encoder.block.0.layer.0.SelfAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for encoder.block.0.layer.0.SelfAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for encoder.block.0.layer.0.SelfAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).
size mismatch for encoder.block.0.layer.0.SelfAttention.relative_attention_bias.weight: copying a param with shape torch.Size([32, 8]) from checkpoint, the shape in current model is torch.Size([32, 6]).
size mismatch for encoder.block.0.layer.0.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for encoder.block.0.layer.1.DenseReluDense.wi.weight: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for encoder.block.0.layer.1.DenseReluDense.wo.weight: copying a param with shape torch.Size([512, 2048]) from checkpoint, the shape in current model is torch.Size([256, 1024]).
size mismatch for encoder.block.0.layer.1.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for encoder.block.1.layer.0.SelfAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for encoder.block.1.layer.0.SelfAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for encoder.block.1.layer.0.SelfAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for encoder.block.1.layer.0.SelfAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).
size mismatch for encoder.block.1.layer.0.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for encoder.block.1.layer.1.DenseReluDense.wi.weight: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for encoder.block.1.layer.1.DenseReluDense.wo.weight: copying a param with shape torch.Size([512, 2048]) from checkpoint, the shape in current model is torch.Size([256, 1024]).
size mismatch for encoder.block.1.layer.1.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for encoder.block.2.layer.0.SelfAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for encoder.block.2.layer.0.SelfAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for encoder.block.2.layer.0.SelfAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for encoder.block.2.layer.0.SelfAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).
size mismatch for encoder.block.2.layer.0.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for encoder.block.2.layer.1.DenseReluDense.wi.weight: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for encoder.block.2.layer.1.DenseReluDense.wo.weight: copying a param with shape torch.Size([512, 2048]) from checkpoint, the shape in current model is torch.Size([256, 1024]).
size mismatch for encoder.block.2.layer.1.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for encoder.block.3.layer.0.SelfAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for encoder.block.3.layer.0.SelfAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for encoder.block.3.layer.0.SelfAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for encoder.block.3.layer.0.SelfAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).
size mismatch for encoder.block.3.layer.0.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for encoder.block.3.layer.1.DenseReluDense.wi.weight: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for encoder.block.3.layer.1.DenseReluDense.wo.weight: copying a param with shape torch.Size([512, 2048]) from checkpoint, the shape in current model is torch.Size([256, 1024]).
size mismatch for encoder.block.3.layer.1.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for encoder.block.4.layer.0.SelfAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for encoder.block.4.layer.0.SelfAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for encoder.block.4.layer.0.SelfAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for encoder.block.4.layer.0.SelfAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).
size mismatch for encoder.block.4.layer.0.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for encoder.block.4.layer.1.DenseReluDense.wi.weight: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for encoder.block.4.layer.1.DenseReluDense.wo.weight: copying a param with shape torch.Size([512, 2048]) from checkpoint, the shape in current model is torch.Size([256, 1024]).
size mismatch for encoder.block.4.layer.1.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for encoder.block.5.layer.0.SelfAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for encoder.block.5.layer.0.SelfAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for encoder.block.5.layer.0.SelfAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for encoder.block.5.layer.0.SelfAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).
size mismatch for encoder.block.5.layer.0.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for encoder.block.5.layer.1.DenseReluDense.wi.weight: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for encoder.block.5.layer.1.DenseReluDense.wo.weight: copying a param with shape torch.Size([512, 2048]) from checkpoint, the shape in current model is torch.Size([256, 1024]).
size mismatch for encoder.block.5.layer.1.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for encoder.final_layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decoder.block.0.layer.0.SelfAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.0.layer.0.SelfAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.0.layer.0.SelfAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.0.layer.0.SelfAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).
size mismatch for decoder.block.0.layer.0.SelfAttention.relative_attention_bias.weight: copying a param with shape torch.Size([32, 8]) from checkpoint, the shape in current model is torch.Size([32, 6]).
size mismatch for decoder.block.0.layer.0.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decoder.block.0.layer.1.EncDecAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.0.layer.1.EncDecAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.0.layer.1.EncDecAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.0.layer.1.EncDecAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).
size mismatch for decoder.block.0.layer.1.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decoder.block.0.layer.2.DenseReluDense.wi.weight: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for decoder.block.0.layer.2.DenseReluDense.wo.weight: copying a param with shape torch.Size([512, 2048]) from checkpoint, the shape in current model is torch.Size([256, 1024]).
size mismatch for decoder.block.0.layer.2.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decoder.block.1.layer.0.SelfAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.1.layer.0.SelfAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.1.layer.0.SelfAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.1.layer.0.SelfAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).
size mismatch for decoder.block.1.layer.0.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decoder.block.1.layer.1.EncDecAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.1.layer.1.EncDecAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.1.layer.1.EncDecAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.1.layer.1.EncDecAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).
size mismatch for decoder.block.1.layer.1.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decoder.block.1.layer.2.DenseReluDense.wi.weight: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for decoder.block.1.layer.2.DenseReluDense.wo.weight: copying a param with shape torch.Size([512, 2048]) from checkpoint, the shape in current model is torch.Size([256, 1024]).
size mismatch for decoder.block.1.layer.2.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decoder.block.2.layer.0.SelfAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.2.layer.0.SelfAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.2.layer.0.SelfAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.2.layer.0.SelfAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).
size mismatch for decoder.block.2.layer.0.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decoder.block.2.layer.1.EncDecAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.2.layer.1.EncDecAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.2.layer.1.EncDecAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.2.layer.1.EncDecAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).
size mismatch for decoder.block.2.layer.1.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decoder.block.2.layer.2.DenseReluDense.wi.weight: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for decoder.block.2.layer.2.DenseReluDense.wo.weight: copying a param with shape torch.Size([512, 2048]) from checkpoint, the shape in current model is torch.Size([256, 1024]).
size mismatch for decoder.block.2.layer.2.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decoder.block.3.layer.0.SelfAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.3.layer.0.SelfAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.3.layer.0.SelfAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.3.layer.0.SelfAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).
size mismatch for decoder.block.3.layer.0.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decoder.block.3.layer.1.EncDecAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.3.layer.1.EncDecAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.3.layer.1.EncDecAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.3.layer.1.EncDecAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).
size mismatch for decoder.block.3.layer.1.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decoder.block.3.layer.2.DenseReluDense.wi.weight: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for decoder.block.3.layer.2.DenseReluDense.wo.weight: copying a param with shape torch.Size([512, 2048]) from checkpoint, the shape in current model is torch.Size([256, 1024]).
size mismatch for decoder.block.3.layer.2.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decoder.block.4.layer.0.SelfAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.4.layer.0.SelfAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.4.layer.0.SelfAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.4.layer.0.SelfAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).
size mismatch for decoder.block.4.layer.0.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decoder.block.4.layer.1.EncDecAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.4.layer.1.EncDecAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.4.layer.1.EncDecAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.4.layer.1.EncDecAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).
size mismatch for decoder.block.4.layer.1.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decoder.block.4.layer.2.DenseReluDense.wi.weight: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for decoder.block.4.layer.2.DenseReluDense.wo.weight: copying a param with shape torch.Size([512, 2048]) from checkpoint, the shape in current model is torch.Size([256, 1024]).
size mismatch for decoder.block.4.layer.2.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decoder.block.5.layer.0.SelfAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.5.layer.0.SelfAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.5.layer.0.SelfAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.5.layer.0.SelfAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).
size mismatch for decoder.block.5.layer.0.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decoder.block.5.layer.1.EncDecAttention.q.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.5.layer.1.EncDecAttention.k.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.5.layer.1.EncDecAttention.v.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([192, 256]).
size mismatch for decoder.block.5.layer.1.EncDecAttention.o.weight: copying a param with shape torch.Size([512, 512]) from checkpoint, the shape in current model is torch.Size([256, 192]).
size mismatch for decoder.block.5.layer.1.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decoder.block.5.layer.2.DenseReluDense.wi.weight: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for decoder.block.5.layer.2.DenseReluDense.wo.weight: copying a param with shape torch.Size([512, 2048]) from checkpoint, the shape in current model is torch.Size([256, 1024]).
size mismatch for decoder.block.5.layer.2.layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for decoder.final_layer_norm.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([256]).
```
So where did I go wrong? How can I change the model configuration parameters **d_model**, **d_kv**, **d_ff** and **num_heads**? |
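A minimal sketch of the two options (the smaller values below are illustrative assumptions, not the ones used in this run): the pretrained weights only fit a config whose dimensions match the checkpoint, so either keep the matching config, or build a freshly initialized model when changing those sizes and train it from scratch.
```python
from transformers import T5Config, T5ForConditionalGeneration

# Option 1: keep the checkpoint's own dimensions (t5-small uses d_model=512, d_kv=64, d_ff=2048, num_heads=8).
config = T5Config.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small", config=config)

# Option 2 (assumed sizes, shown only for illustration): different dimensions mean the
# pretrained weights cannot be loaded, so the model starts from random initialization.
small_config = T5Config(d_model=256, d_kv=32, d_ff=1024, num_heads=8)
model_from_scratch = T5ForConditionalGeneration(small_config)
```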
transformers | 7,359 | closed | Update modeling_tf_longformer.py | correct a very small mistake
Fixes #{issue number}
| 09-24-2020 06:57:52 | 09-24-2020 06:57:52 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7359?src=pr&el=h1) Report
> Merging [#7359](https://codecov.io/gh/huggingface/transformers/pull/7359?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/38f17037957d325b5540a8031f065e6f23c9e265?el=desc) will **increase** coverage by `2.00%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/7359?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #7359 +/- ##
==========================================
+ Coverage 77.54% 79.55% +2.00%
==========================================
Files 181 181
Lines 34851 34851
==========================================
+ Hits 27024 27724 +700
+ Misses 7827 7127 -700
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7359?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7359/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `98.67% <ø> (ø)` | |
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7359/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.03% <0.00%> (-73.03%)` | :arrow_down: |
| [src/transformers/retrieval\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7359/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9yZXRyaWV2YWxfcmFnLnB5) | `28.48% <0.00%> (-62.80%)` | :arrow_down: |
| [src/transformers/modeling\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7359/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yYWcucHk=) | `25.39% <0.00%> (-51.59%)` | :arrow_down: |
| [src/transformers/tokenization\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/7359/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcGVnYXN1cy5weQ==) | `46.03% <0.00%> (-49.21%)` | :arrow_down: |
| [src/transformers/tokenization\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7359/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZnVubmVsLnB5) | `62.79% <0.00%> (-34.89%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/7359/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `83.74% <0.00%> (-14.14%)` | :arrow_down: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/7359/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `95.16% <0.00%> (-2.42%)` | :arrow_down: |
| [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7359/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `82.62% <0.00%> (-1.41%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7359/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.96% <0.00%> (-1.30%)` | :arrow_down: |
| ... and [16 more](https://codecov.io/gh/huggingface/transformers/pull/7359/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7359?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7359?src=pr&el=footer). Last update [38f1703...90b186a](https://codecov.io/gh/huggingface/transformers/pull/7359?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 7,358 | closed | Example for T5 model from doc is not working. | ## Environment info
- `transformers` version: 2.3.0
- Platform: Linux
- Python version: 3.6
- PyTorch version (GPU?): No
- Tensorflow version (GPU?): No
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten
## Information
Model I am using T5 (TFT5) and following the [documentation](https://huggingface.co/transformers/model_doc/t5.html):
The problem arises when using:
* the official example scripts: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
import tensorflow as tf
from transformers import TFT5Model
model = TFT5Model.from_pretrained('t5-small')
inputs = tf.constant([[ 1, 30, 4, 19, 7, 41, 20, 4, 25, 40, 13, 46, 27, 54, 25, 2]])
print(model(inputs, labels=inputs))
```
The above snippet throws the following error:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-98-0a790979b424> in <module>()
----> 1 model(tf.constant([[ 1, 30, 4, 19, 7, 41, 20, 4, 25, 40, 13, 46, 27, 54, 25, 2]]))
3 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
983
984 with ops.enable_auto_cast_variables(self._compute_dtype_object):
--> 985 outputs = call_fn(inputs, *args, **kwargs)
986
987 if self._activity_regularizer:
/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_t5.py in call(self, inputs, attention_mask, encoder_outputs, inputs_embeds, head_mask, past_key_values, decoder_input_ids, decoder_attention_mask, decoder_inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict, training, **kwargs)
1104 output_hidden_states,
1105 ],
-> 1106 training=training,
1107 )
1108 past = (
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
983
984 with ops.enable_auto_cast_variables(self._compute_dtype_object):
--> 985 outputs = call_fn(inputs, *args, **kwargs)
986
987 if self._activity_regularizer:
/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_t5.py in call(self, inputs, attention_mask, encoder_hidden_states, encoder_attention_mask, inputs_embeds, head_mask, past_key_values, use_cache, output_attentions, output_hidden_states, training, **kwargs)
646 input_shape = shape_list(inputs_embeds)[:-1]
647 else:
--> 648 raise ValueError("You have to specify either inputs or inputs_embeds")
649
650 if inputs_embeds is None:
ValueError: You have to specify either inputs or inputs_embeds
```
## Expected behavior
This should instead return the instance of `TFSeq2SeqModelOutput` | 09-24-2020 04:47:34 | 09-24-2020 04:47:34 | Hi, you're using the wrong model. `labels` cannot work with the `TFT5Model` as that's just the base model. You're probably looking for `TFT5ForConditionalGeneration`, which is the T5 base model with a language modeling head:
```py
import tensorflow as tf
from transformers import TFT5ForConditionalGeneration
model = TFT5ForConditionalGeneration.from_pretrained('t5-small')
inputs = tf.constant([[ 1, 30, 4, 19, 7, 41, 20, 4, 25, 40, 13, 46, 27, 54, 25, 2]])
print(model(inputs, labels=inputs))
```
outputs:
```
(<tf.Tensor: shape=(16,), dtype=float32, numpy=
array([7.5659113 , 4.1611323 , 0.7870086 , 0.19761924, 0.10837179,
1.0610694 , 0.53702176, 0.01974043, 0.1657649 , 0.07946267,
0.19164713, 0.10359508, 0.844127 , 0.04230493, 0.0681754 ,
0.1965235 ], dtype=float32)>, <tf.Tensor: shape=(1, 16, 32128), dtype=float32, numpy=
array([[[-14.546758 , -7.1824822
[...]
```<|||||>I want to train a model for language translation; would TFT5ForConditionalGeneration work?<|||||>Even this case is also not working:
```
model = TFT5Model.from_pretrained('t5-small')
model(tf.constant([[ 1, 30, 4, 19, 7, 41, 20, 4, 25, 40, 13, 46, 27, 54, 25, 2]]))
```
which is throwing the same error as stated above.<|||||>Indeed, the error message is wrong here but as T5 is a seq2seq model it requires both `input_ids` and `decoder_input_ids`. We should update the docstrings/error message, but you can have more information [here, in the docs](https://huggingface.co/transformers/model_doc/t5.html#tft5model).
cc @patrickvonplaten <|||||>Gotcha! Thanks for clarifying that. |
transformers | 7,357 | closed | How can I convert a PyTorch BERT model to TF2? | How can I convert a PyTorch BERT model to TF2? | 09-24-2020 03:41:07 | 09-24-2020 03:41:07 | You can save it and reload it:
```py
pytorch_model.save_pretrained("here")
tf_model = TFBertModel.from_pretrained("here")
```<|||||>> You can save it and reload it:
>
> ```python
> pytorch_model.save_pretrained("here")
> tf_model = TFBertModel.from_pretrained("here")
> ```
I cannot convert a PyTorch BERT model to TensorFlow 2.3. Can you help me? @LysandreJik <|||||>Please provide more information, can you respect the issue template? What exactly are you trying to do? Do you have a PyTorch model? How did you get it, is it one of the checkpoints on the hub, is it fine-tuned?
You want to convert it to our TensorFlow API?
Please provide more details for us to help you better. |
transformers | 7,356 | closed | Fix eval to compute rouge correctly for rouge_score | Fixes #6808
| 09-24-2020 02:33:30 | 09-24-2020 02:33:30 | Hi, for the code quality test to pass you should run `make style` (to apply the style changes) and check with `make quality` (to make sure there is none left) at the root of your `transformers` directory.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7356?src=pr&el=h1) Report
> Merging [#7356](https://codecov.io/gh/huggingface/transformers/pull/7356?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/38f17037957d325b5540a8031f065e6f23c9e265?el=desc) will **decrease** coverage by `1.48%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/7356?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #7356 +/- ##
==========================================
- Coverage 77.54% 76.05% -1.49%
==========================================
Files 181 181
Lines 34851 34851
==========================================
- Hits 27024 26507 -517
- Misses 7827 8344 +517
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7356?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7356/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |
| [src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7356/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.87% <0.00%> (-77.64%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7356/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mdW5uZWwucHk=) | `18.53% <0.00%> (-75.51%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/7356/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.32% <0.00%> (-73.63%)` | :arrow_down: |
| [src/transformers/modeling\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7356/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sYXlvdXRsbS5weQ==) | `25.06% <0.00%> (-69.40%)` | :arrow_down: |
| [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7356/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `42.10% <0.00%> (-57.90%)` | :arrow_down: |
| [src/transformers/tokenization\_bert\_generation.py](https://codecov.io/gh/huggingface/transformers/pull/7356/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydF9nZW5lcmF0aW9uLnB5) | `39.28% <0.00%> (-55.36%)` | :arrow_down: |
| [src/transformers/tokenization\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/7356/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcGVnYXN1cy5weQ==) | `46.03% <0.00%> (-49.21%)` | :arrow_down: |
| [src/transformers/tokenization\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7356/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZnVubmVsLnB5) | `62.79% <0.00%> (-34.89%)` | :arrow_down: |
| [src/transformers/activations\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7356/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9uc190Zi5weQ==) | `54.16% <0.00%> (-20.84%)` | :arrow_down: |
| ... and [22 more](https://codecov.io/gh/huggingface/transformers/pull/7356/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7356?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7356?src=pr&el=footer). Last update [38f1703...c7e4959](https://codecov.io/gh/huggingface/transformers/pull/7356?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks for the contribution @swethmandava !
I'm going to run this over the weekend to see how metrics have changed.
Then if all goes well I'll merge this on Monday.
If you don't hear from me by Tuesday, please ping :)<|||||>I cleaned up and added tests in https://github.com/huggingface/transformers/pull/7410
Metrics look good, let me know what you think of the new code! I will add you as PR coauthor! |
transformers | 7,355 | closed | Add token_type_ids to prepare_inputs_for_generation for gpt/gpt2 |
Fixes #{issue number}
| 09-24-2020 00:06:25 | 09-24-2020 00:06:25 | Maybe related to https://github.com/huggingface/transformers/pull/6601 and https://github.com/huggingface/transformers/pull/7552<|||||>Yes seems to be related to both. https://github.com/huggingface/transformers/pull/7355 doesn't seem to have token_type_ids passed in though, but if those PRs get merged in I'll close mine<|||||>We have the same problem here as explained in https://github.com/huggingface/transformers/pull/6601#issuecomment-708029212. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 7,354 | closed | Faster Pegasus tokenizer tests | Current test_tokenization_pegasus.py takes more than a minute to run because it uses a full size tokenizer [here](https://github.com/huggingface/transformers/blob/master/tests/test_tokenization_pegasus.py#L19)
It should use "fixtures/test_sentencepiece.model" like `tests/test_tokenization_t5.py`. | 09-23-2020 22:46:18 | 09-23-2020 22:46:18 | Any interest @stas00 ?<|||||>Yes, please, you can assign this to me, but most likely will be able to start on it in a few weeks when I have free time.<|||||>We can start this, but if we do we should wait for @thomwolf 's fast tokenizer PR to merge before we merge the fix.<|||||>This is unblocked, thom merged! |
transformers | 7,353 | closed | enable add_tokens for mbart tokenizer |
Fixes #7222
| 09-23-2020 21:32:47 | 09-23-2020 21:32:47 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,352 | closed | Make PyTorch model files independent from each other | As discussed after the survey and expressed in the project README, our goal is to have independent model files even if it means some code is duplicated. This PR fixes this for all PyTorch models except:
- the full subclasses (CamemBERT, FlauBERT, XLM-RoBERTa),
- the BART-like models (BART, mBART, marian, Pegasus)
- the "composite" models (BertGeneration, DPR and RetriBERT).
The first ones should stay as is, as we discussed internally; the second ones will be dealt with in another PR, and I personally think the last ones (which directly import `BertModel`) should also stay as is.
This leverages the script introduced in #7219 to make sure the identical copies stay true to the original.
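Concretely, each duplicated block carries a marker that the consistency script re-checks on every run, failing CI if the copy drifts from the referenced class. A sketch of what such a copy looks like (the exact marker syntax is assumed here, not quoted from the diff):
```python
import torch.nn as nn

# Copied from transformers.modeling_bert.BertSelfOutput with Bert->Electra
class ElectraSelfOutput(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.dense = nn.Linear(config.hidden_size, config.hidden_size)
        self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)

    def forward(self, hidden_states, input_tensor):
        hidden_states = self.dense(hidden_states)
        hidden_states = self.dropout(hidden_states)
        hidden_states = self.LayerNorm(hidden_states + input_tensor)
        return hidden_states
```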
Also, as discussed with Lysandre, I removed the XxxLayerNorm when it was just `nn.LayerNorm`. | 09-23-2020 21:07:32 | 09-23-2020 21:07:32 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7352?src=pr&el=h1) Report
> Merging [#7352](https://codecov.io/gh/huggingface/transformers/pull/7352?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/129fdae04033fe4adfe013b734deaec6ec34ae2e?el=desc) will **increase** coverage by `1.25%`.
> The diff coverage is `71.95%`.
[](https://codecov.io/gh/huggingface/transformers/pull/7352?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #7352 +/- ##
==========================================
+ Coverage 76.68% 77.93% +1.25%
==========================================
Files 181 181
Lines 34851 35140 +289
==========================================
+ Hits 26724 27385 +661
+ Misses 8127 7755 -372
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7352?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `18.69% <22.72%> (-52.92%)` | :arrow_down: |
| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `79.16% <50.00%> (-2.66%)` | :arrow_down: |
| [src/transformers/modeling\_retribert.py](https://codecov.io/gh/huggingface/transformers/pull/7352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZXRyaWJlcnQucHk=) | `34.24% <50.00%> (ø)` | |
| [src/transformers/modeling\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `89.38% <90.00%> (-0.08%)` | :arrow_down: |
| [src/transformers/modeling\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/7352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `84.98% <90.95%> (+2.79%)` | :arrow_up: |
| [src/transformers/modeling\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/7352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `84.16% <97.14%> (+0.69%)` | :arrow_up: |
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.36% <100.00%> (+0.08%)` | :arrow_up: |
| [src/transformers/modeling\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mdW5uZWwucHk=) | `86.74% <100.00%> (-0.02%)` | :arrow_down: |
| [src/transformers/modeling\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sYXlvdXRsbS5weQ==) | `94.45% <100.00%> (ø)` | |
| [src/transformers/modeling\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `90.73% <100.00%> (-0.04%)` | :arrow_down: |
| ... and [15 more](https://codecov.io/gh/huggingface/transformers/pull/7352/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7352?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7352?src=pr&el=footer). Last update [129fdae...90a918f](https://codecov.io/gh/huggingface/transformers/pull/7352?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 7,351 | closed | generic text classification with TensorFlow error (AttributeError: 'TFTrainingArguments' object has no attribute 'args') | ## Environment info
- `transformers` version: 3.2.0
- Platform: Linux-4.15.0-1091-oem-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.3.0 (True)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
@jplu
## Information
Model I am using (Bert, XLNet ...): bert-base-multilingual-uncased
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
Running run_tf_text_classification.py with flags from the example in the "Run generic text classification script in TensorFlow" section of examples/text-classification
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
Text classification dataset for classifying answers to questions. Using 3 CSVs (train, dev, and test) that each have headers (class, text) and columns containing class labels (int) and questions (strings). There are no commas present in the questions, for reference.
## To reproduce
Steps to reproduce the behavior:
1. Call run_tf_text_classification.py with flags from the example in the "Run generic text classification script in TensorFlow" section of examples/text-classification:
```bash
python run_tf_text_classification.py \
--train_file train.csv \
--dev_file dev.csv \
--test_file test.csv \
--label_column_id 0 \
--model_name_or_path bert-base-multilingual-uncased \
--output_dir model \
--num_train_epochs 4 \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 32 \
--do_train \
--do_eval \
--do_predict \
--logging_steps 10 \
--evaluate_during_training \
--save_steps 10 \
--overwrite_output_dir \
--max_seq_length 128
```
2. Error is encountered:
```python3
Traceback (most recent call last):
File "run_tf_text_classification.py", line 283, in <module>
main()
File "run_tf_text_classification.py", line 199, in main
training_args.n_replicas,
File "/home/qd_team/qdmr_gpu/smart_env/lib/python3.6/site-packages/transformers/file_utils.py", line 936, in wrapper
return func(*args, **kwargs)
File "/home/qd_team/qdmr_gpu/smart_env/lib/python3.6/site-packages/transformers/training_args_tf.py", line 180, in n_replicas
return self._setup_strategy.num_replicas_in_sync
File "/home/qd_team/qdmr_gpu/smart_env/lib/python3.6/site-packages/transformers/file_utils.py", line 914, in __get__
cached = self.fget(obj)
File "/home/qd_team/qdmr_gpu/smart_env/lib/python3.6/site-packages/transformers/file_utils.py", line 936, in wrapper
return func(*args, **kwargs)
File "/home/qd_team/qdmr_gpu/smart_env/lib/python3.6/site-packages/transformers/training_args_tf.py", line 122, in _setup_strategy
if self.args.xla:
AttributeError: 'TFTrainingArguments' object has no attribute 'args'
```
3. If the logger.info call is commented out (lines 197-202), the above error is prevented but another error is encountered:
```python3
Traceback (most recent call last):
File "run_tf_text_classification.py", line 282, in <module>
main()
File "run_tf_text_classification.py", line 221, in main
max_seq_length=data_args.max_seq_length,
File "run_tf_text_classification.py", line 42, in get_tfds
ds = datasets.load_dataset("csv", data_files=files)
File "/home/qd_team/qdmr_gpu/smart_env/lib/python3.6/site-packages/datasets/load.py", line 604, in load_dataset
**config_kwargs,
File "/home/qd_team/qdmr_gpu/smart_env/lib/python3.6/site-packages/datasets/builder.py", line 158, in __init__
**config_kwargs,
File "/home/qd_team/qdmr_gpu/smart_env/lib/python3.6/site-packages/datasets/builder.py", line 269, in _create_builder_config
for key in sorted(data_files.keys()):
TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit'
```
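For reference, a minimal sketch of the failing call and a possible workaround (the string-keyed variant is an assumption, not something verified in this thread):
```python
import datasets

# Fails in datasets 1.0.2 when the dict is keyed by datasets.Split objects,
# because the builder sorts the keys and NamedSplit does not support "<".
files = {datasets.Split.TRAIN: "train.csv", datasets.Split.VALIDATION: "dev.csv", datasets.Split.TEST: "test.csv"}

# Possible workaround (assumed): use plain string keys instead.
string_files = {"train": "train.csv", "validation": "dev.csv", "test": "test.csv"}
ds = datasets.load_dataset("csv", data_files=string_files)
```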
Here is a pip freeze:
```python3
absl-py==0.10.0
astunparse==1.6.3
cachetools==4.1.1
certifi==2020.6.20
chardet==3.0.4
click==7.1.2
dataclasses==0.7
datasets==1.0.2
dill==0.3.2
filelock==3.0.12
gast==0.3.3
google-auth==1.21.3
google-auth-oauthlib==0.4.1
google-pasta==0.2.0
grpcio==1.32.0
h5py==2.10.0
idna==2.10
importlib-metadata==2.0.0
joblib==0.16.0
Keras-Preprocessing==1.1.2
Markdown==3.2.2
numpy==1.18.5
oauthlib==3.1.0
opt-einsum==3.3.0
packaging==20.4
pandas==1.1.2
protobuf==3.13.0
pyarrow==1.0.1
pyasn1==0.4.8
pyasn1-modules==0.2.8
pyparsing==2.4.7
python-dateutil==2.8.1
pytz==2020.1
regex==2020.7.14
requests==2.24.0
requests-oauthlib==1.3.0
rsa==4.6
sacremoses==0.0.43
scipy==1.4.1
sentencepiece==0.1.91
six==1.15.0
tensorboard==2.3.0
tensorboard-plugin-wit==1.7.0
tensorflow==2.3.0
tensorflow-estimator==2.3.0
termcolor==1.1.0
tokenizers==0.8.1rc2
tqdm==4.49.0
transformers==3.2.0
urllib3==1.25.10
Werkzeug==1.0.1
wrapt==1.12.1
xxhash==2.0.0
zipp==3.2.0
```
## Expected behavior
Model begins to train on custom dataset.
| 09-23-2020 20:04:40 | 09-23-2020 20:04:40 | Hello!
This is fixed in master.<|||||> @jplu Sorry, but I'm facing the same issue, and have version 3.2 installed. Can you please elaborate on how I might fix this? Thanks.<|||||>@sunnyville01 Just install the version on master with `pip install git+https://github.com/huggingface/transformers.git`<|||||>@jplu Thanks, that fixed it.<|||||>I am still facing this issue on colab with
!pip install git+https://github.com/huggingface/transformers.git
`---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-43-d201a6fb0a8d> in <module>()
17 learning_rate=LEARNING_RATE
18 )
---> 19 with training_argsTF.strategy.scope():
20 modelTF = TFAutoModelForSequenceClassification.from_pretrained(
21 model_args['model_name'],
4 frames
/usr/local/lib/python3.6/dist-packages/transformers/training_args_tf.py in _setup_strategy(self)
120 logger.info("Tensorflow: setting up strategy")
121
--> 122 if self.args.xla:
123 tf.config.optimizer.set_jit(True)
124
AttributeError: 'TFTrainingArguments' object has no attribute 'args'`<|||||>Something must be wrong with your install process, because this bug is fixed in master.<|||||>My bad, did not notice "requirements already met message", updated to
!pip install --upgrade git+https://github.com/huggingface/transformers.git
No more issue! Sorry .<|||||>> Something must be wrong with your install process, because this bug is fixed in master.
The error seems to persist with me. I installed using `!pip install git+https://github.com/huggingface/transformers.git` and got the same error `TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit'`
Here is a colab notebook; you can do Runtime -> Run all and see the output of the last cell.
https://colab.research.google.com/drive/1r3XCKYA8RBtfYmU2jqHVJT-uTt1ii04S?usp=sharing<|||||>@jplu I'm also getting the same error `TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit'`, and I also ran the colab from @Santosh-Gupta and the error happened too.
My local environment is also based on transformer's master branch. <|||||>@pvcastro Can you open a new issue please with all the details to be able for us to reproduce it. This thread is closed and about a different one. |
transformers | 7,350 | closed | Expand a bit the documentation doc | Add a few more instructions for people who do read the doc :-) | 09-23-2020 20:03:43 | 09-23-2020 20:03:43 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7350?src=pr&el=h1) Report
> Merging [#7350](https://codecov.io/gh/huggingface/transformers/pull/7350?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/129fdae04033fe4adfe013b734deaec6ec34ae2e?el=desc) will **increase** coverage by `1.40%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/7350?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #7350 +/- ##
==========================================
+ Coverage 76.68% 78.08% +1.40%
==========================================
Files 181 181
Lines 34851 34851
==========================================
+ Hits 26724 27214 +490
+ Misses 8127 7637 -490
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7350?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_fsmt.py](https://codecov.io/gh/huggingface/transformers/pull/7350/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZnNtdC5weQ==) | `20.34% <0.00%> (-74.90%)` | :arrow_down: |
| [src/transformers/modeling\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7350/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sYXlvdXRsbS5weQ==) | `25.06% <0.00%> (-69.40%)` | :arrow_down: |
| [src/transformers/tokenization\_phobert.py](https://codecov.io/gh/huggingface/transformers/pull/7350/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcGhvYmVydC5weQ==) | `21.80% <0.00%> (-61.66%)` | :arrow_down: |
| [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7350/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `42.10% <0.00%> (-57.90%)` | :arrow_down: |
| [src/transformers/tokenization\_bert\_generation.py](https://codecov.io/gh/huggingface/transformers/pull/7350/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydF9nZW5lcmF0aW9uLnB5) | `39.28% <0.00%> (-55.36%)` | :arrow_down: |
| [src/transformers/configuration\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7350/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xheW91dGxtLnB5) | `80.00% <0.00%> (-20.00%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7350/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `82.59% <0.00%> (-13.97%)` | :arrow_down: |
| [src/transformers/modeling\_transfo\_xl\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7350/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `52.98% <0.00%> (-13.44%)` | :arrow_down: |
| [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7350/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `71.21% <0.00%> (-12.88%)` | :arrow_down: |
| [src/transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7350/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `67.04% <0.00%> (-12.69%)` | :arrow_down: |
| ... and [12 more](https://codecov.io/gh/huggingface/transformers/pull/7350/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7350?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7350?src=pr&el=footer). Last update [129fdae...f940cdf](https://codecov.io/gh/huggingface/transformers/pull/7350?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 7,349 | closed | Create README.md | Model card for akhooli/personachat-arabic
| 09-23-2020 19:35:41 | 09-23-2020 19:35:41 | |
transformers | 7,348 | closed | Clean RAG docs and template docs | Followup from #7345, this cleans up the documentation for RAG (since it was merged while I was working) and update the templates to the new docstrings. | 09-23-2020 19:10:02 | 09-23-2020 19:10:02 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7348?src=pr&el=h1) Report
> Merging [#7348](https://codecov.io/gh/huggingface/transformers/pull/7348?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/129fdae04033fe4adfe013b734deaec6ec34ae2e?el=desc) will **decrease** coverage by `0.92%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/7348?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #7348 +/- ##
==========================================
- Coverage 76.68% 75.75% -0.93%
==========================================
Files 181 181
Lines 34851 34853 +2
==========================================
- Hits 26724 26402 -322
- Misses 8127 8451 +324
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7348?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7348/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JhZy5weQ==) | `100.00% <ø> (ø)` | |
| [src/transformers/modeling\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7348/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yYWcucHk=) | `76.98% <ø> (ø)` | |
| [src/transformers/retrieval\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7348/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9yZXRyaWV2YWxfcmFnLnB5) | `91.27% <ø> (ø)` | |
| [src/transformers/tokenization\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7348/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmFnLnB5) | `71.11% <100.00%> (+1.34%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7348/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mdW5uZWwucHk=) | `18.53% <0.00%> (-75.51%)` | :arrow_down: |
| [src/transformers/tokenization\_fsmt.py](https://codecov.io/gh/huggingface/transformers/pull/7348/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZnNtdC5weQ==) | `20.34% <0.00%> (-74.90%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/7348/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.32% <0.00%> (-73.63%)` | :arrow_down: |
| [src/transformers/modeling\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7348/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sYXlvdXRsbS5weQ==) | `25.06% <0.00%> (-69.40%)` | :arrow_down: |
| [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7348/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `42.10% <0.00%> (-57.90%)` | :arrow_down: |
| [src/transformers/activations\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7348/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9uc190Zi5weQ==) | `54.16% <0.00%> (-20.84%)` | :arrow_down: |
| ... and [14 more](https://codecov.io/gh/huggingface/transformers/pull/7348/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7348?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7348?src=pr&el=footer). Last update [129fdae...930dd4b](https://codecov.io/gh/huggingface/transformers/pull/7348?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 7,347 | closed | [s2s] can distributed eval initiate model download on each rank | + `from_pretrained` uses a FileLock to avoid this (see the sketch below), but I wonder if there is a race condition.
+ Verify, then fix. Fix non-trivial because have to block other processes.
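A minimal sketch of the FileLock pattern in question (illustrative only; this is not the actual `from_pretrained` code):
```python
import os
import urllib.request
from filelock import FileLock

def cached_download(url, cache_path):
    # Only one process/rank performs the download; the others block on the lock,
    # then find the file already present and skip the fetch.
    with FileLock(cache_path + ".lock"):
        if not os.path.exists(cache_path):
            urllib.request.urlretrieve(url, cache_path)
    return cache_path
```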
| 09-23-2020 17:44:53 | 09-23-2020 17:44:53 | No, it can't: `from_pretrained` uses `FileLock`. |
transformers | 7,346 | closed | Difference between bart-large and bart-large-cnn vocabulary | Trying to finetune from pretrained bart checkpoint as follows:
```
config = BartConfig(**json.load(open(args.config_path, "r"))) #pointing to bart-large-cnn/config.json
model = BartForConditionalGeneration.from_pretrained('facebook/bart-large', config=config) #use pretrained bart model's weights
tokenizer = BartTokenizer.from_pretrained('facebook/bart-large-cnn')
```
but since facebook/bart-large and facebook/bart-large-cnn have different vocab sizes, it fails. What's the reason behind the different vocab sizes? How can I use pretrained bart for finetuning - should I modify bart-large-cnn's config to use the same vocab size as bart-large?
@sshleifer | 09-23-2020 17:35:15 | 09-23-2020 17:35:15 | The reason is the mask token, see https://github.com/huggingface/transformers/issues/3108.
You could try to use the resize_token_embeddings method, but even easier would be to pass the config changes you want to init
```python
BartForConditionalGeneration.from_pretrained('bart-large', num_beams=4, min_length=56, max_length=142, length_penalty=2.0, ...)
```
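For completeness, a minimal sketch of the `resize_token_embeddings` route mentioned above (illustrative only):
```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")
# Grow/shrink the embedding matrix to match the tokenizer's vocabulary size.
model.resize_token_embeddings(len(tokenizer))
```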
|
transformers | 7,345 | closed | Models doc | Do not review this PR unless you're masochistic or @LysandreJik.
This PR does a big clean-up of all models/tokenizers/config docstrings. | 09-23-2020 16:31:17 | 09-23-2020 16:31:17 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7345?src=pr&el=h1) Report
> Merging [#7345](https://codecov.io/gh/huggingface/transformers/pull/7345?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/28cf873036d078b47fb9dd38ac3421a7c874da44?el=desc) will **increase** coverage by `2.47%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/7345?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #7345 +/- ##
==========================================
+ Coverage 76.58% 79.05% +2.47%
==========================================
Files 181 181
Lines 34828 34828
==========================================
+ Hits 26674 27535 +861
+ Misses 8154 7293 -861
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7345?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/7345/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2FsYmVydC5weQ==) | `100.00% <ø> (ø)` | |
| [src/transformers/configuration\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7345/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JlcnQucHk=) | `100.00% <ø> (ø)` | |
| [src/transformers/configuration\_bert\_generation.py](https://codecov.io/gh/huggingface/transformers/pull/7345/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JlcnRfZ2VuZXJhdGlvbi5weQ==) | `100.00% <ø> (ø)` | |
| [src/transformers/configuration\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/7345/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2N0cmwucHk=) | `97.05% <ø> (ø)` | |
| [src/transformers/configuration\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/7345/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2Rpc3RpbGJlcnQucHk=) | `100.00% <ø> (ø)` | |
| [src/transformers/configuration\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/7345/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2Rwci5weQ==) | `100.00% <ø> (ø)` | |
| [src/transformers/configuration\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/7345/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VsZWN0cmEucHk=) | `100.00% <ø> (ø)` | |
| [src/transformers/configuration\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/7345/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VuY29kZXJfZGVjb2Rlci5weQ==) | `100.00% <ø> (ø)` | |
| [src/transformers/configuration\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/7345/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2ZsYXViZXJ0LnB5) | `100.00% <ø> (ø)` | |
| [src/transformers/configuration\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7345/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2Z1bm5lbC5weQ==) | `100.00% <ø> (ø)` | |
| ... and [140 more](https://codecov.io/gh/huggingface/transformers/pull/7345/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7345?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7345?src=pr&el=footer). Last update [28cf873...fb4ea94](https://codecov.io/gh/huggingface/transformers/pull/7345?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 7,344 | closed | Remove reference to args in XLA check | Previously, the TFTrainingArguments object did a check to see if XLA was enabled, but did this by referencing `self.args.xla`, when it should be `self.xla`, because it is the args object. This can be verified a few lines above, where the XLA field is set.
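A quick sanity check one could run against the fix (a sketch; the argument values are arbitrary):
```python
from transformers import TFTrainingArguments

# With the fix, instantiating the arguments and touching the strategy no longer raises,
# because the property checks self.xla instead of self.args.xla.
args = TFTrainingArguments(output_dir="model", xla=False)
print(args.n_replicas)  # previously raised AttributeError: 'TFTrainingArguments' object has no attribute 'args'
```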
Fixes #7343 | 09-23-2020 16:28:00 | 09-23-2020 16:28:00 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7344?src=pr&el=h1) Report
> Merging [#7344](https://codecov.io/gh/huggingface/transformers/pull/7344?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/28cf873036d078b47fb9dd38ac3421a7c874da44?el=desc) will **increase** coverage by `0.23%`.
> The diff coverage is `0.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/7344?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #7344 +/- ##
==========================================
+ Coverage 76.58% 76.82% +0.23%
==========================================
Files 181 181
Lines 34828 34828
==========================================
+ Hits 26674 26757 +83
+ Misses 8154 8071 -83
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7344?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/training\_args\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzX3RmLnB5) | `42.64% <0.00%> (ø)` | |
| [src/transformers/modeling\_tf\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `16.37% <0.00%> (-82.31%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.14% <0.00%> (-72.41%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `20.85% <0.00%> (-71.41%)` | :arrow_down: |
| [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `19.71% <0.00%> (-51.90%)` | :arrow_down: |
| [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |
| [src/transformers/configuration\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `75.00% <0.00%> (-25.00%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `81.17% <0.00%> (-15.39%)` | :arrow_down: |
| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `81.81% <0.00%> (-4.55%)` | :arrow_down: |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.93% <0.00%> (-0.34%)` | :arrow_down: |
| ... and [13 more](https://codecov.io/gh/huggingface/transformers/pull/7344/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7344?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7344?src=pr&el=footer). Last update [28cf873...82d7dee](https://codecov.io/gh/huggingface/transformers/pull/7344?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks for the quick review @LysandreJik and all this excellent work! Didn't realize the Hugging Face team is based in NYC. If the offices ever actually open again and your team is interested, @mdvandergon and I would be stoked to host you for lunch at FRBNY. <|||||>@jplu No worries, happens to the best of us! Thanks for all your hard work!<|||||>That's good to know, thanks for the offer @ZeroCool2u, @mdvandergon! |
transformers | 7,343 | closed | AttributeError: 'TFTrainingArguments' object has no attribute 'args' | ## Environment info
- `transformers` version: 3.2.0
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0+cu101 (True)
- Tensorflow version (GPU?): 2.3.0 (True)
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: False
### Who can help
Trainer: @sgugger
tensorflow: @jplu
## Information
Model I am using (Bert, XLNet ...): `distilbert-base-uncased`
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
IMDB Sequence Classification
## To reproduce
Steps to reproduce the behavior:
1. Follow the [fine-tuning tutorial here and use TensorFlow](https://huggingface.co/transformers/master/custom_datasets.html#fine-tuning-with-trainer)
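For reference, a sketch of the tutorial snippet that triggers this, reconstructed from the traceback below (the argument values are assumptions):
```python
from transformers import TFDistilBertForSequenceClassification, TFTrainingArguments

training_args = TFTrainingArguments(
    output_dir="./results",          # assumed value
    num_train_epochs=2,              # assumed value
    per_device_train_batch_size=16,  # assumed value
)

with training_args.strategy.scope():
    model = TFDistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased")
```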
```python
AttributeError Traceback (most recent call last)
<ipython-input-10-c5306faf2c2f> in <module>()
12 )
13
---> 14 with training_args.strategy.scope():
15 model = TFDistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased")
16
4 frames
/usr/local/lib/python3.6/dist-packages/transformers/training_args_tf.py in _setup_strategy(self)
120 logger.info("Tensorflow: setting up strategy")
121
--> 122 if self.args.xla:
123 tf.config.optimizer.set_jit(True)
124
AttributeError: 'TFTrainingArguments' object has no attribute 'args'
```
## Expected behavior
| 09-23-2020 16:27:16 | 09-23-2020 16:27:16 | |
transformers | 7,342 | closed | CentOS Error installing Transformers | ## Environment info
- `transformers` version:
- Platform: CentOS
- Python version: 3.6.3
- PyTorch version (GPU?):1.6.0
- Tensorflow version (GPU?): tensorflow-gpu 2.3.0
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
@mfuntowicz
@jplu
## To reproduce
Steps to reproduce the behavior:
1. On a CentOS distribution with Python 3.6.3, run "pip install transformers":
```
pip install transformers
Looking in links: /cvmfs/soft.computecanada.ca/custom/python/wheelhouse/nix/avx2, /cvmfs/soft.computecanada.ca/custom/python/wheelhouse/nix/generic, /cvmfs/soft.computecanada.ca/custom/python/wheelhouse/generic
Ignoring pip: markers 'python_version < "3"' don't match your environment
Collecting transformers
Using cached transformers-3.2.0-py3-none-any.whl (1.0 MB)
Collecting tokenizers==0.8.1.rc2
Using cached tokenizers-0.8.1rc2.tar.gz (97 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing wheel metadata ... done
Processing /cvmfs/soft.computecanada.ca/custom/python/wheelhouse/generic/dataclasses-0.7-py3-none-any.whl
Processing /cvmfs/soft.computecanada.ca/custom/python/wheelhouse/generic/filelock-3.0.12-py3-none-any.whl
Requirement already satisfied: tqdm>=4.27 in /home/-/ENV/lib/python3.6/site-packages (from transformers) (4.49.0)
Processing /cvmfs/soft.computecanada.ca/custom/python/wheelhouse/generic/packaging-20.4-py2.py3-none-any.whl
Requirement already satisfied: numpy in /home/-/ENV/lib/python3.6/site-packages (from transformers) (1.19.1)
Processing /cvmfs/soft.computecanada.ca/custom/python/wheelhouse/generic/requests-2.24.0-py2.py3-none-any.whl
Processing /cvmfs/soft.computecanada.ca/custom/python/wheelhouse/generic/sacremoses-0.0.43-py3-none-any.whl
Processing /cvmfs/soft.computecanada.ca/custom/python/wheelhouse/nix/generic/sentencepiece-0.1.90-cp36-cp36m-linux_x86_64.whl
Requirement already satisfied: regex!=2019.12.17 in /home/-/ENV/lib/python3.6/site-packages (from transformers) (2019.11.1)
Requirement already satisfied: six in /home/-/ENV/lib/python3.6/site-packages (from packaging->transformers) (1.15.0)
Requirement already satisfied: pyparsing>=2.0.2 in /home/-/ENV/lib/python3.6/site-packages (from packaging->transformers) (2.4.7)
Processing /cvmfs/soft.computecanada.ca/custom/python/wheelhouse/generic/certifi-2020.6.20-py2.py3-none-any.whl
Processing /cvmfs/soft.computecanada.ca/custom/python/wheelhouse/generic/chardet-3.0.4-py2.py3-none-any.whl
Processing /cvmfs/soft.computecanada.ca/custom/python/wheelhouse/generic/idna-2.10-py2.py3-none-any.whl
Processing /cvmfs/soft.computecanada.ca/custom/python/wheelhouse/generic/urllib3-1.25.10-py2.py3-none-any.whl
Requirement already satisfied: joblib in /home/-/ENV/lib/python3.6/site-packages (from sacremoses->transformers) (0.16.0)
Requirement already satisfied: click in /home/-/ENV/lib/python3.6/site-packages (from sacremoses->transformers) (7.1.2)
Building wheels for collected packages: tokenizers
Building wheel for tokenizers (PEP 517) ... error
ERROR: Command errored out with exit status 1:
command: /home/--/ENV/bin/python /home/--/ENV/lib/python3.6/site-packages/pip/_vendor/pep517/_in_process.py build_wheel /tmp/tmp_ut164h5
cwd: /tmp/pip-install-7krg2wb2/tokenizers
Complete output (38 lines):
running bdist_wheel
running build
running build_py
creating build
creating build/lib
creating build/lib/tokenizers
copying tokenizers/__init__.py -> build/lib/tokenizers
creating build/lib/tokenizers/models
copying tokenizers/models/__init__.py -> build/lib/tokenizers/models
creating build/lib/tokenizers/decoders
copying tokenizers/decoders/__init__.py -> build/lib/tokenizers/decoders
creating build/lib/tokenizers/normalizers
copying tokenizers/normalizers/__init__.py -> build/lib/tokenizers/normalizers
creating build/lib/tokenizers/pre_tokenizers
copying tokenizers/pre_tokenizers/__init__.py -> build/lib/tokenizers/pre_tokenizers
creating build/lib/tokenizers/processors
copying tokenizers/processors/__init__.py -> build/lib/tokenizers/processors
creating build/lib/tokenizers/trainers
copying tokenizers/trainers/__init__.py -> build/lib/tokenizers/trainers
creating build/lib/tokenizers/implementations
copying tokenizers/implementations/__init__.py -> build/lib/tokenizers/implementations
copying tokenizers/implementations/base_tokenizer.py -> build/lib/tokenizers/implementations
copying tokenizers/implementations/bert_wordpiece.py -> build/lib/tokenizers/implementations
copying tokenizers/implementations/byte_level_bpe.py -> build/lib/tokenizers/implementations
copying tokenizers/implementations/char_level_bpe.py -> build/lib/tokenizers/implementations
copying tokenizers/implementations/sentencepiece_bpe.py -> build/lib/tokenizers/implementations
copying tokenizers/__init__.pyi -> build/lib/tokenizers
copying tokenizers/models/__init__.pyi -> build/lib/tokenizers/models
copying tokenizers/decoders/__init__.pyi -> build/lib/tokenizers/decoders
copying tokenizers/normalizers/__init__.pyi -> build/lib/tokenizers/normalizers
copying tokenizers/pre_tokenizers/__init__.pyi -> build/lib/tokenizers/pre_tokenizers
copying tokenizers/processors/__init__.pyi -> build/lib/tokenizers/processors
copying tokenizers/trainers/__init__.pyi -> build/lib/tokenizers/trainers
running build_ext
running build_rust
/tmp/pip-build-env-1p_8fw9e/overlay/lib/python3.6/site-packages/setuptools/dist.py:452: UserWarning: Normalizing '0.8.1.rc2' to '0.8.1rc2'
warnings.warn(tmpl.format(**locals()))
error: Can not find Rust compiler
----------------------------------------
ERROR: Failed building wheel for tokenizers
Failed to build tokenizers
ERROR: Could not build wheels for tokenizers which use PEP 517 and cannot be installed directly
```
| 09-23-2020 15:48:08 | 09-23-2020 15:48:08 | The error message indicated that you need to first install Rust compiler (https://www.rust-lang.org/tools/install).<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,341 | closed | data_collator.py - line 326, in mask tokens - xlnet finetuning error | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.2.0
- Platform: Linux-4.15.0-118-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.1+cu101 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
--> TransfoXL/XLNet: @TevenLeScao
## Information
Model I am using (XLNet):
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
Testing simple example in 'language-modeling/examples/README' using recommended wiki-2-raw dataset and xlnet-base-cased model
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
Same error occurs using simple one sentence per line text file (10 megs)
## To reproduce
Steps to reproduce the behavior:
1. Run all steps in 'language-modeling/examples/README' using xlnet-base-cased (cached or local)
2. Model loads with warnings and process begins before quickly exiting with the following error:
File "/home/pixelhead/Desktop/xlnet/transformers-master/transformers/data/data_collator.py", line 326, in mask_tokens
"This collator requires that sequence lengths be even to create a leakage-free perm_mask. Please see relevant comments in source code for details."
ValueError: This collator requires that sequence lengths be even to create a leakage-free perm_mask. Please see relevant comments in source code for details.
Epoch: 0%| | 0/3 [00:00<?, ?it/s]
Iteration: 0%|
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
Expect 'run_language_modeling.py' to work for xlnet as per 'language-modeling/examples/README'
Have tested addition of '--line_by_line' and 'block_size=128, 256, 512' etc. Same error.
Could be missing something here 'Please see relevant comments in source code for details.' but not clear.
Cheers, | 09-23-2020 14:32:35 | 09-23-2020 14:32:35 | @sgugger might be interested in this issue as well.<|||||>I have the same issue.
Does anybody know any "workarounds" to bypass this issue? <|||||>@GenTxt did you find any workaround for this error?<|||||>No, unfortunately. Was hoping others more familiar with the problem would
offer solutions.
On Wed, Oct 7, 2020 at 8:46 AM Mihai Dobri <[email protected]> wrote:
> @GenTxt <https://github.com/GenTxt> did you find any workaround for this
> error ?
>
> —
> You are receiving this because you were mentioned.
> Reply to this email directly, view it on GitHub
> <https://github.com/huggingface/transformers/issues/7341#issuecomment-704910726>,
> or unsubscribe
> <https://github.com/notifications/unsubscribe-auth/AFMAWPJSWOH7MWHGL52UXATSJRPJHANCNFSM4RXDWVGA>
> .
>
<|||||>@LysandreJik or @sgugger I am wondering if you could please let us know if is a workaround for this issue ? or if a code fix is planned in the near future?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Solved by using TPU instructions on GPU:
**Note:** On TPU, you should use the flag `--pad_to_max_length` in conjunction with the `--line_by_line` flag to make
sure all your batches have the same length.
Works now.
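For anyone applying the same idea in their own script, a minimal sketch is below; the 512 length and the tokenizer name are assumptions, not taken from this thread.
```python
# Hedged sketch: pad each line to a fixed, even max_length so the XLNet
# permutation collator never receives an odd-length sequence.
from transformers import XLNetTokenizer

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")

def encode_line(line, max_length=512):  # max_length must be even
    return tokenizer(
        line,
        truncation=True,
        max_length=max_length,
        padding="max_length",  # same effect as --pad_to_max_length
        return_tensors="pt",
    )["input_ids"][0]

ids = encode_line("A single training sentence.")
assert ids.size(0) % 2 == 0  # even length, as the collator requires
```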
Encountered similar issue with fine-tuning Bert.
Solved by using:
--max_seq_length=512 with --line_by_line<|||||>How to solve the problem?
File "/anaconda3/envs/pytorch-gpu/lib/python3.6/site-packages/transformers/data/data_collator.py", line 615, in mask_tokens
"This collator requires that sequence lengths be even to create a leakage-free perm_mask. Please see relevant comments in source code for details."
ValueError: This collator requires that sequence lengths be even to create a leakage-free perm_mask. Please see relevant comments in source code for details.
0%| <|||||>I ended up adding <pad> if token length is not even. Is this ok? |
transformers | 7,340 | closed | Fixed evaluation_strategy on epoch end bug | moved the evaluation script outside the iteration loop
<!-- This line specifies which issue to close after the pull request is merged. -->
Fixes #7339
| 09-23-2020 14:13:07 | 09-23-2020 14:13:07 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7340?src=pr&el=h1) Report
> Merging [#7340](https://codecov.io/gh/huggingface/transformers/pull/7340?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/28cf873036d078b47fb9dd38ac3421a7c874da44?el=desc) will **increase** coverage by `2.24%`.
> The diff coverage is `33.33%`.
[](https://codecov.io/gh/huggingface/transformers/pull/7340?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #7340 +/- ##
==========================================
+ Coverage 76.58% 78.83% +2.24%
==========================================
Files 181 181
Lines 34828 34828
==========================================
+ Hits 26674 27456 +782
+ Misses 8154 7372 -782
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7340?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/7340/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `55.62% <33.33%> (ø)` | |
| [src/transformers/modeling\_tf\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7340/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `16.37% <0.00%> (-82.31%)` | :arrow_down: |
| [src/transformers/modeling\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7340/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sYXlvdXRsbS5weQ==) | `25.06% <0.00%> (-69.40%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/7340/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `21.62% <0.00%> (-69.31%)` | :arrow_down: |
| [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7340/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |
| [src/transformers/configuration\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7340/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xheW91dGxtLnB5) | `80.00% <0.00%> (-20.00%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7340/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `88.37% <0.00%> (-4.87%)` | :arrow_down: |
| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7340/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `81.81% <0.00%> (-4.55%)` | :arrow_down: |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7340/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.93% <0.00%> (-0.34%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7340/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `96.76% <0.00%> (+0.20%)` | :arrow_up: |
| ... and [14 more](https://codecov.io/gh/huggingface/transformers/pull/7340/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7340?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7340?src=pr&el=footer). Last update [28cf873...48c72a9](https://codecov.io/gh/huggingface/transformers/pull/7340?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks a lot for the fix! |
transformers | 7,339 | closed | Trainer evaluates at each step (not at epoch end), indentation bug | ## Environment info
- `transformers` version: 3.2.0
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0+cu101 (True)
- Tensorflow version (GPU?): 2.3.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: NA
### Who can help
Trainer: @sgugger
## Information
Model I am using (Bert, XLNet ...): BERT
The problem arises when using:
- [ ] the official example scripts:
- [x] my own modified scripts:
The tasks I am working on is:
- [ ] an official GLUE/SQUaD task:
- [x] my own task or dataset:
Basic Single Sentence Classification Dataset loaded via a Dataset class
## To reproduce
Steps to reproduce the behavior:
1. Use the trainer training function with `training_args.evaluation_strategy = EvaluationStrategy.EPOCH`
## Expected behavior
Evaluation should happen after each epoch ends, but instead it happens after each step (batch). This is an indentation bug.
## Suggested Fix
Move the if condition on line https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L829 to before line https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L834 and remove 1 indentation block
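In simplified form (a toy stand-in, not the actual Trainer source), the intended control flow is:
```python
# Toy sketch of the fix: evaluation runs once per epoch, not once per step.
num_train_epochs = 2
train_batches = [[1, 2], [3, 4], [5, 6]]
evaluation_strategy = "epoch"

def training_step(batch):
    pass  # forward/backward/optimizer step would go here

def evaluate():
    print("running evaluation")

for epoch in range(num_train_epochs):
    for step, batch in enumerate(train_batches):
        training_step(batch)
    # this `if` used to sit one indent deeper, inside the step loop,
    # so evaluation fired after every batch instead of after every epoch
    if evaluation_strategy == "epoch":
        evaluate()
```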
| 09-23-2020 13:30:24 | 09-23-2020 13:30:24 | |
transformers | 7,338 | closed | BufferedWriter takes most of the time | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.1.0
- Platform: macOS-10.14.6-x86_64-i386-64bit
- Python version: 3.8.5
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
Speed and Memory Benchmarks: @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
english = pipeline(
"question-answering",
model="distilbert-base-uncased-distilled-squad",
tokenizer="distilbert-base-uncased-distilled-squad"
)
text1 = """It comes as pubs, bars, restaurants and other hospitality venues in England are told they must have a 22:00 closing time from Thursday.
Full details will be set out by the prime minister in Parliament later.
Boris Johnson is meeting the first ministers of Scotland, Wales and Northern Ireland and will address the nation in a live broadcast at 20:00 BST on Tuesday.
As well as the early closing time for hospitality venues, he is expected to announce they will be restricted by law to table service only.
"""
%%prun
english({'question': 'Which country is the news about?', 'context': text1})
```
The profiling result is
```
6256 function calls (6155 primitive calls) in 1.097 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.713 0.713 0.713 0.713 {method 'write' of '_io.BufferedWriter' objects}
37 0.229 0.006 0.229 0.006 {method 'matmul' of 'torch._C._TensorBase' objects}
12 0.030 0.002 0.030 0.002 {built-in method matmul}
5 0.020 0.004 0.020 0.004 {method 'dump' of '_pickle.Pickler' objects}
6 0.019 0.003 0.019 0.003 {method 'softmax' of 'torch._C._TensorBase' objects}
33 0.012 0.000 0.012 0.000 {method 'acquire' of '_thread.lock' objects}
3 0.009 0.003 0.009 0.003 {built-in method posix.waitpid}
6 0.009 0.002 0.009 0.002 {method 'masked_fill_' of 'torch._C._TensorBase' objects}
6 0.009 0.001 0.009 0.001 {built-in method torch._C._nn.gelu}
37 0.006 0.000 0.235 0.006 functional.py:1655(linear)
94/1 0.005 0.000 0.325 0.325 module.py:710(_call_impl)
...
```
## Expected behavior
Most time is spent on inference such as the method `matmul`.
| 09-23-2020 12:41:09 | 09-23-2020 12:41:09 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,337 | closed | Trainer.py module 'datasets' has no attribute 'Dataset' | I'm trying to use a Trainer, but I get this error:
```
c:\users\francois\appdata\local\programs\python\python37\lib\site-packages\transformers\trainer.py in __init__(self, model, args, data_collator, train_dataset, eval_dataset, tokenizer, model_init, compute_metrics, tb_writer, optimizers, **kwargs)
287
288 if is_datasets_available():
--> 289 if isinstance(train_dataset, datasets.Dataset):
290 self._remove_unused_columns(self.train_dataset, description="training")
291 if isinstance(eval_dataset, datasets.Dataset):
AttributeError: module 'datasets' has no attribute 'Dataset'
```
My guess is that `datasets.Dataset` should be replaced by `torch.utils.data.Dataset` but I haven't checked the source file. Maybe the person responsible for the `Trainer` development should look into that.
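(A quick check worth running, added as a hedged sketch for illustration: it confirms which `datasets` module Python actually imports and which torch version is installed.)
```python
# Hedged sketch: confirm that `datasets` resolves to the Hugging Face package
# (which has a Dataset attribute) and check the installed torch version.
import datasets
import torch

print(datasets.__file__, getattr(datasets, "__version__", "unknown"))
print("torch", torch.__version__)
assert hasattr(datasets, "Dataset"), "this is not the Hugging Face `datasets` package"
```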
I'm using transformers version 3.2.0 btw. | 09-23-2020 09:47:15 | 09-23-2020 09:47:15 | I'm not sure which `datasets` module is installed in your env, but the Hugging Face `datasets` definitely has a `Dataset` attribute. And no, this part is not using PyTorch `Dataset`.<|||||>Apparently, the `datasets` module wasn't even installed on my environment. But installing it just replaced the error by another one. It's upgrading PyTorch that fixed the issue, might be cool to be notified during the installation or the execution of transformers that we don't have the required PyTorch version for it to works. It feels awkward having to fixe dependencies myself, that's what a package manager like pip is usually used for. Maybe it's better handled by anaconda...<|||||>> Apparently, the `datasets` module wasn't even installed on my environment. But installing it just replaced the error by another one. It's upgrading PyTorch that fixed the issue, might be cool to be notified during the installation or the execution of transformers that we don't have the required PyTorch version for it to works. It feels awkward having to fixe dependencies myself, that's what a package manager like pip is usually used for. Maybe it's better handled by anaconda...
hi, could you please tell me which pytorch version you have been upgraded to solve this problem? I got the same problem.<|||||>@ericdoug-qi Can't remember, did you check you are using the latest version of PyTorch and Transformers? Otherwise, try anaconda.<|||||>may be you use the wrong moudule, try to run conmand "pip uninstall datasets" , then it can use Hugging Face datasets |
transformers | 7,336 | closed | Error when fine-tune RoBERTa on NSP using Trainer | ## Environment info
- `transformers` version: 3.1.0
- Platform: Linux-5.4.0-45-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.5.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: IDK.
- Using distributed or parallel set-up in script?: IDK.
### Who can help
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
nlp datasets: [different repo](https://github.com/huggingface/nlp)
## Information
Model I am using RoBERTa trained for Polish lanugage: [polish-roberta](https://github.com/sdadas/polish-roberta), version [robreta_base_transformers](https://github.com/sdadas/polish-roberta/releases/download/models/roberta_base_transformers.zip).
The problem arises when using:
* [ ] my own modified scripts:
```python
from transformers import (BertForNextSentencePrediction,
BertTokenizer,
RobertaModel, RobertaTokenizer, Trainer,
TrainingArguments)
from transformers.data.datasets.language_modeling import TextDatasetForNextSentencePrediction
from transformers.data.data_collator import DataCollatorForNextSentencePrediction
from argparse import ArgumentParser
def parse_args():
parser = ArgumentParser("Fine-tune RoBERTa in Next Sentence Prediction.")
parser.add_argument("-m", "--model_path", dest="model_path", required=True, help="Path to RoBERTa model.")
parser.add_argument("-o", "--output_path", dest="output_path", required=True, help="Path to directory of fine-tuned model.")
parser.add_argument("-d", "--dataset_path", dest="dataset_path", required=True, help="Path to dataset.")
args = parser.parse_args()
return args
if __name__ == "__main__":
args = parse_args()
tokenizer = RobertaTokenizer.from_pretrained(args.model_path)
finetune_model = BertForNextSentencePrediction.from_pretrained(args.model_path)
training_args = TrainingArguments(
output_dir=args.output_path,
num_train_epochs=3,
per_device_train_batch_size=1,
per_device_eval_batch_size=1,
warmup_steps=500,
weight_decay=0.01,
logging_dir='./logs',
)
data_collator = DataCollatorForNextSentencePrediction(
tokenizer=tokenizer,
mlm=False,
block_size=512,
nsp_probability=0.5,
)
train_dataset = TextDatasetForNextSentencePrediction(
tokenizer=tokenizer,
file_path=args.dataset_path,
block_size=512,
)
trainer = Trainer(
model=finetune_model,
args=training_args,
train_dataset=train_dataset,
data_collator=data_collator,
)
trainer.train()
trainer.save_model(args.output_path)
```
The tasks I am working on is:
* [ ] my own task or dataset based on TextDatasetForNextSentencePrediction input format:
```bash
<doc1_turn1>
<doc1_turn2>
<doc2_turn1>
<doc2_turn2>
...
```
## To reproduce
Steps to reproduce the behavior:
1. `python finetune_roberta.py -m <model_dir> -o output/ -d <dataset_path>`
```bash
Special tokens have been added in the vocabulary, make sure the associated word emebedding are fine-tuned or trained.
Some weights of the model checkpoint at roberta_base/ were not used when initializing BertForNextSentencePrediction: ['roberta.embeddings.word_embeddings.weight', 'roberta.embeddings.position_embeddings.weight', 'roberta.embeddings.token_type_embeddings.weight', 'roberta.embeddings.LayerNorm.weight', 'roberta.embeddings.LayerNorm.bias', 'roberta.encoder.layer.0.attention.self.query.weight', 'roberta.encoder.layer.0.attention.self.query.bias', 'roberta.encoder.layer.0.attention.self.key.weight', 'roberta.encoder.layer.0.attention.self.key.bias', 'roberta.encoder.layer.0.attention.self.value.weight', 'roberta.encoder.layer.0.attention.self.value.bias', 'roberta.encoder.layer.0.attention.output.dense.weight', 'roberta.encoder.layer.0.attention.output.dense.bias', 'roberta.encoder.layer.0.attention.output.LayerNorm.weight', 'roberta.encoder.layer.0.attention.output.LayerNorm.bias', 'roberta.encoder.layer.0.intermediate.dense.weight', 'roberta.encoder.layer.0.intermediate.dense.bias', 'roberta.encoder.layer.0.output.dense.weight', 'roberta.encoder.layer.0.output.dense.bias', 'roberta.encoder.layer.0.output.LayerNorm.weight', 'roberta.encoder.layer.0.output.LayerNorm.bias', 'roberta.encoder.layer.1.attention.self.query.weight', 'roberta.encoder.layer.1.attention.self.query.bias', 'roberta.encoder.layer.1.attention.self.key.weight', 'roberta.encoder.layer.1.attention.self.key.bias', 'roberta.encoder.layer.1.attention.self.value.weight', 'roberta.encoder.layer.1.attention.self.value.bias', 'roberta.encoder.layer.1.attention.output.dense.weight', 'roberta.encoder.layer.1.attention.output.dense.bias', 'roberta.encoder.layer.1.attention.output.LayerNorm.weight', 'roberta.encoder.layer.1.attention.output.LayerNorm.bias', 'roberta.encoder.layer.1.intermediate.dense.weight', 'roberta.encoder.layer.1.intermediate.dense.bias', 'roberta.encoder.layer.1.output.dense.weight', 'roberta.encoder.layer.1.output.dense.bias', 'roberta.encoder.layer.1.output.LayerNorm.weight', 'roberta.encoder.layer.1.output.LayerNorm.bias', 'roberta.encoder.layer.2.attention.self.query.weight', 'roberta.encoder.layer.2.attention.self.query.bias', 'roberta.encoder.layer.2.attention.self.key.weight', 'roberta.encoder.layer.2.attention.self.key.bias', 'roberta.encoder.layer.2.attention.self.value.weight', 'roberta.encoder.layer.2.attention.self.value.bias', 'roberta.encoder.layer.2.attention.output.dense.weight', 'roberta.encoder.layer.2.attention.output.dense.bias', 'roberta.encoder.layer.2.attention.output.LayerNorm.weight', 'roberta.encoder.layer.2.attention.output.LayerNorm.bias', 'roberta.encoder.layer.2.intermediate.dense.weight', 'roberta.encoder.layer.2.intermediate.dense.bias', 'roberta.encoder.layer.2.output.dense.weight', 'roberta.encoder.layer.2.output.dense.bias', 'roberta.encoder.layer.2.output.LayerNorm.weight', 'roberta.encoder.layer.2.output.LayerNorm.bias', 'roberta.encoder.layer.3.attention.self.query.weight', 'roberta.encoder.layer.3.attention.self.query.bias', 'roberta.encoder.layer.3.attention.self.key.weight', 'roberta.encoder.layer.3.attention.self.key.bias', 'roberta.encoder.layer.3.attention.self.value.weight', 'roberta.encoder.layer.3.attention.self.value.bias', 'roberta.encoder.layer.3.attention.output.dense.weight', 'roberta.encoder.layer.3.attention.output.dense.bias', 'roberta.encoder.layer.3.attention.output.LayerNorm.weight', 'roberta.encoder.layer.3.attention.output.LayerNorm.bias', 'roberta.encoder.layer.3.intermediate.dense.weight', 'roberta.encoder.layer.3.intermediate.dense.bias', 
'roberta.encoder.layer.3.output.dense.weight', 'roberta.encoder.layer.3.output.dense.bias', 'roberta.encoder.layer.3.output.LayerNorm.weight', 'roberta.encoder.layer.3.output.LayerNorm.bias', 'roberta.encoder.layer.4.attention.self.query.weight', 'roberta.encoder.layer.4.attention.self.query.bias', 'roberta.encoder.layer.4.attention.self.key.weight', 'roberta.encoder.layer.4.attention.self.key.bias', 'roberta.encoder.layer.4.attention.self.value.weight', 'roberta.encoder.layer.4.attention.self.value.bias', 'roberta.encoder.layer.4.attention.output.dense.weight', 'roberta.encoder.layer.4.attention.output.dense.bias', 'roberta.encoder.layer.4.attention.output.LayerNorm.weight', 'roberta.encoder.layer.4.attention.output.LayerNorm.bias', 'roberta.encoder.layer.4.intermediate.dense.weight', 'roberta.encoder.layer.4.intermediate.dense.bias', 'roberta.encoder.layer.4.output.dense.weight', 'roberta.encoder.layer.4.output.dense.bias', 'roberta.encoder.layer.4.output.LayerNorm.weight', 'roberta.encoder.layer.4.output.LayerNorm.bias', 'roberta.encoder.layer.5.attention.self.query.weight', 'roberta.encoder.layer.5.attention.self.query.bias', 'roberta.encoder.layer.5.attention.self.key.weight', 'roberta.encoder.layer.5.attention.self.key.bias', 'roberta.encoder.layer.5.attention.self.value.weight', 'roberta.encoder.layer.5.attention.self.value.bias', 'roberta.encoder.layer.5.attention.output.dense.weight', 'roberta.encoder.layer.5.attention.output.dense.bias', 'roberta.encoder.layer.5.attention.output.LayerNorm.weight', 'roberta.encoder.layer.5.attention.output.LayerNorm.bias', 'roberta.encoder.layer.5.intermediate.dense.weight', 'roberta.encoder.layer.5.intermediate.dense.bias', 'roberta.encoder.layer.5.output.dense.weight', 'roberta.encoder.layer.5.output.dense.bias', 'roberta.encoder.layer.5.output.LayerNorm.weight', 'roberta.encoder.layer.5.output.LayerNorm.bias', 'roberta.encoder.layer.6.attention.self.query.weight', 'roberta.encoder.layer.6.attention.self.query.bias', 'roberta.encoder.layer.6.attention.self.key.weight', 'roberta.encoder.layer.6.attention.self.key.bias', 'roberta.encoder.layer.6.attention.self.value.weight', 'roberta.encoder.layer.6.attention.self.value.bias', 'roberta.encoder.layer.6.attention.output.dense.weight', 'roberta.encoder.layer.6.attention.output.dense.bias', 'roberta.encoder.layer.6.attention.output.LayerNorm.weight', 'roberta.encoder.layer.6.attention.output.LayerNorm.bias', 'roberta.encoder.layer.6.intermediate.dense.weight', 'roberta.encoder.layer.6.intermediate.dense.bias', 'roberta.encoder.layer.6.output.dense.weight', 'roberta.encoder.layer.6.output.dense.bias', 'roberta.encoder.layer.6.output.LayerNorm.weight', 'roberta.encoder.layer.6.output.LayerNorm.bias', 'roberta.encoder.layer.7.attention.self.query.weight', 'roberta.encoder.layer.7.attention.self.query.bias', 'roberta.encoder.layer.7.attention.self.key.weight', 'roberta.encoder.layer.7.attention.self.key.bias', 'roberta.encoder.layer.7.attention.self.value.weight', 'roberta.encoder.layer.7.attention.self.value.bias', 'roberta.encoder.layer.7.attention.output.dense.weight', 'roberta.encoder.layer.7.attention.output.dense.bias', 'roberta.encoder.layer.7.attention.output.LayerNorm.weight', 'roberta.encoder.layer.7.attention.output.LayerNorm.bias', 'roberta.encoder.layer.7.intermediate.dense.weight', 'roberta.encoder.layer.7.intermediate.dense.bias', 'roberta.encoder.layer.7.output.dense.weight', 'roberta.encoder.layer.7.output.dense.bias', 'roberta.encoder.layer.7.output.LayerNorm.weight', 
'roberta.encoder.layer.7.output.LayerNorm.bias', 'roberta.encoder.layer.8.attention.self.query.weight', 'roberta.encoder.layer.8.attention.self.query.bias', 'roberta.encoder.layer.8.attention.self.key.weight', 'roberta.encoder.layer.8.attention.self.key.bias', 'roberta.encoder.layer.8.attention.self.value.weight', 'roberta.encoder.layer.8.attention.self.value.bias', 'roberta.encoder.layer.8.attention.output.dense.weight', 'roberta.encoder.layer.8.attention.output.dense.bias', 'roberta.encoder.layer.8.attention.output.LayerNorm.weight', 'roberta.encoder.layer.8.attention.output.LayerNorm.bias', 'roberta.encoder.layer.8.intermediate.dense.weight', 'roberta.encoder.layer.8.intermediate.dense.bias', 'roberta.encoder.layer.8.output.dense.weight', 'roberta.encoder.layer.8.output.dense.bias', 'roberta.encoder.layer.8.output.LayerNorm.weight', 'roberta.encoder.layer.8.output.LayerNorm.bias', 'roberta.encoder.layer.9.attention.self.query.weight', 'roberta.encoder.layer.9.attention.self.query.bias', 'roberta.encoder.layer.9.attention.self.key.weight', 'roberta.encoder.layer.9.attention.self.key.bias', 'roberta.encoder.layer.9.attention.self.value.weight', 'roberta.encoder.layer.9.attention.self.value.bias', 'roberta.encoder.layer.9.attention.output.dense.weight', 'roberta.encoder.layer.9.attention.output.dense.bias', 'roberta.encoder.layer.9.attention.output.LayerNorm.weight', 'roberta.encoder.layer.9.attention.output.LayerNorm.bias', 'roberta.encoder.layer.9.intermediate.dense.weight', 'roberta.encoder.layer.9.intermediate.dense.bias', 'roberta.encoder.layer.9.output.dense.weight', 'roberta.encoder.layer.9.output.dense.bias', 'roberta.encoder.layer.9.output.LayerNorm.weight', 'roberta.encoder.layer.9.output.LayerNorm.bias', 'roberta.encoder.layer.10.attention.self.query.weight', 'roberta.encoder.layer.10.attention.self.query.bias', 'roberta.encoder.layer.10.attention.self.key.weight', 'roberta.encoder.layer.10.attention.self.key.bias', 'roberta.encoder.layer.10.attention.self.value.weight', 'roberta.encoder.layer.10.attention.self.value.bias', 'roberta.encoder.layer.10.attention.output.dense.weight', 'roberta.encoder.layer.10.attention.output.dense.bias', 'roberta.encoder.layer.10.attention.output.LayerNorm.weight', 'roberta.encoder.layer.10.attention.output.LayerNorm.bias', 'roberta.encoder.layer.10.intermediate.dense.weight', 'roberta.encoder.layer.10.intermediate.dense.bias', 'roberta.encoder.layer.10.output.dense.weight', 'roberta.encoder.layer.10.output.dense.bias', 'roberta.encoder.layer.10.output.LayerNorm.weight', 'roberta.encoder.layer.10.output.LayerNorm.bias', 'roberta.encoder.layer.11.attention.self.query.weight', 'roberta.encoder.layer.11.attention.self.query.bias', 'roberta.encoder.layer.11.attention.self.key.weight', 'roberta.encoder.layer.11.attention.self.key.bias', 'roberta.encoder.layer.11.attention.self.value.weight', 'roberta.encoder.layer.11.attention.self.value.bias', 'roberta.encoder.layer.11.attention.output.dense.weight', 'roberta.encoder.layer.11.attention.output.dense.bias', 'roberta.encoder.layer.11.attention.output.LayerNorm.weight', 'roberta.encoder.layer.11.attention.output.LayerNorm.bias', 'roberta.encoder.layer.11.intermediate.dense.weight', 'roberta.encoder.layer.11.intermediate.dense.bias', 'roberta.encoder.layer.11.output.dense.weight', 'roberta.encoder.layer.11.output.dense.bias', 'roberta.encoder.layer.11.output.LayerNorm.weight', 'roberta.encoder.layer.11.output.LayerNorm.bias', 'roberta.pooler.dense.weight', 'roberta.pooler.dense.bias', 'lm_head.bias', 
'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight', 'lm_head.decoder.bias']
- This IS expected if you are initializing BertForNextSentencePrediction from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).
- This IS NOT expected if you are initializing BertForNextSentencePrediction from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BertForNextSentencePrediction were not initialized from the model checkpoint at roberta_base/ and are newly initialized: ['embeddings.word_embeddings.weight', 'embeddings.position_embeddings.weight', 'embeddings.token_type_embeddings.weight', 'embeddings.LayerNorm.weight', 'embeddings.LayerNorm.bias', 'encoder.layer.0.attention.self.query.weight', 'encoder.layer.0.attention.self.query.bias', 'encoder.layer.0.attention.self.key.weight', 'encoder.layer.0.attention.self.key.bias', 'encoder.layer.0.attention.self.value.weight', 'encoder.layer.0.attention.self.value.bias', 'encoder.layer.0.attention.output.dense.weight', 'encoder.layer.0.attention.output.dense.bias', 'encoder.layer.0.attention.output.LayerNorm.weight', 'encoder.layer.0.attention.output.LayerNorm.bias', 'encoder.layer.0.intermediate.dense.weight', 'encoder.layer.0.intermediate.dense.bias', 'encoder.layer.0.output.dense.weight', 'encoder.layer.0.output.dense.bias', 'encoder.layer.0.output.LayerNorm.weight', 'encoder.layer.0.output.LayerNorm.bias', 'encoder.layer.1.attention.self.query.weight', 'encoder.layer.1.attention.self.query.bias', 'encoder.layer.1.attention.self.key.weight', 'encoder.layer.1.attention.self.key.bias', 'encoder.layer.1.attention.self.value.weight', 'encoder.layer.1.attention.self.value.bias', 'encoder.layer.1.attention.output.dense.weight', 'encoder.layer.1.attention.output.dense.bias', 'encoder.layer.1.attention.output.LayerNorm.weight', 'encoder.layer.1.attention.output.LayerNorm.bias', 'encoder.layer.1.intermediate.dense.weight', 'encoder.layer.1.intermediate.dense.bias', 'encoder.layer.1.output.dense.weight', 'encoder.layer.1.output.dense.bias', 'encoder.layer.1.output.LayerNorm.weight', 'encoder.layer.1.output.LayerNorm.bias', 'encoder.layer.2.attention.self.query.weight', 'encoder.layer.2.attention.self.query.bias', 'encoder.layer.2.attention.self.key.weight', 'encoder.layer.2.attention.self.key.bias', 'encoder.layer.2.attention.self.value.weight', 'encoder.layer.2.attention.self.value.bias', 'encoder.layer.2.attention.output.dense.weight', 'encoder.layer.2.attention.output.dense.bias', 'encoder.layer.2.attention.output.LayerNorm.weight', 'encoder.layer.2.attention.output.LayerNorm.bias', 'encoder.layer.2.intermediate.dense.weight', 'encoder.layer.2.intermediate.dense.bias', 'encoder.layer.2.output.dense.weight', 'encoder.layer.2.output.dense.bias', 'encoder.layer.2.output.LayerNorm.weight', 'encoder.layer.2.output.LayerNorm.bias', 'encoder.layer.3.attention.self.query.weight', 'encoder.layer.3.attention.self.query.bias', 'encoder.layer.3.attention.self.key.weight', 'encoder.layer.3.attention.self.key.bias', 'encoder.layer.3.attention.self.value.weight', 'encoder.layer.3.attention.self.value.bias', 'encoder.layer.3.attention.output.dense.weight', 'encoder.layer.3.attention.output.dense.bias', 'encoder.layer.3.attention.output.LayerNorm.weight', 'encoder.layer.3.attention.output.LayerNorm.bias', 'encoder.layer.3.intermediate.dense.weight', 'encoder.layer.3.intermediate.dense.bias', 'encoder.layer.3.output.dense.weight', 'encoder.layer.3.output.dense.bias', 'encoder.layer.3.output.LayerNorm.weight', 'encoder.layer.3.output.LayerNorm.bias', 'encoder.layer.4.attention.self.query.weight', 'encoder.layer.4.attention.self.query.bias', 'encoder.layer.4.attention.self.key.weight', 'encoder.layer.4.attention.self.key.bias', 'encoder.layer.4.attention.self.value.weight', 'encoder.layer.4.attention.self.value.bias', 'encoder.layer.4.attention.output.dense.weight', 
'encoder.layer.4.attention.output.dense.bias', 'encoder.layer.4.attention.output.LayerNorm.weight', 'encoder.layer.4.attention.output.LayerNorm.bias', 'encoder.layer.4.intermediate.dense.weight', 'encoder.layer.4.intermediate.dense.bias', 'encoder.layer.4.output.dense.weight', 'encoder.layer.4.output.dense.bias', 'encoder.layer.4.output.LayerNorm.weight', 'encoder.layer.4.output.LayerNorm.bias', 'encoder.layer.5.attention.self.query.weight', 'encoder.layer.5.attention.self.query.bias', 'encoder.layer.5.attention.self.key.weight', 'encoder.layer.5.attention.self.key.bias', 'encoder.layer.5.attention.self.value.weight', 'encoder.layer.5.attention.self.value.bias', 'encoder.layer.5.attention.output.dense.weight', 'encoder.layer.5.attention.output.dense.bias', 'encoder.layer.5.attention.output.LayerNorm.weight', 'encoder.layer.5.attention.output.LayerNorm.bias', 'encoder.layer.5.intermediate.dense.weight', 'encoder.layer.5.intermediate.dense.bias', 'encoder.layer.5.output.dense.weight', 'encoder.layer.5.output.dense.bias', 'encoder.layer.5.output.LayerNorm.weight', 'encoder.layer.5.output.LayerNorm.bias', 'encoder.layer.6.attention.self.query.weight', 'encoder.layer.6.attention.self.query.bias', 'encoder.layer.6.attention.self.key.weight', 'encoder.layer.6.attention.self.key.bias', 'encoder.layer.6.attention.self.value.weight', 'encoder.layer.6.attention.self.value.bias', 'encoder.layer.6.attention.output.dense.weight', 'encoder.layer.6.attention.output.dense.bias', 'encoder.layer.6.attention.output.LayerNorm.weight', 'encoder.layer.6.attention.output.LayerNorm.bias', 'encoder.layer.6.intermediate.dense.weight', 'encoder.layer.6.intermediate.dense.bias', 'encoder.layer.6.output.dense.weight', 'encoder.layer.6.output.dense.bias', 'encoder.layer.6.output.LayerNorm.weight', 'encoder.layer.6.output.LayerNorm.bias', 'encoder.layer.7.attention.self.query.weight', 'encoder.layer.7.attention.self.query.bias', 'encoder.layer.7.attention.self.key.weight', 'encoder.layer.7.attention.self.key.bias', 'encoder.layer.7.attention.self.value.weight', 'encoder.layer.7.attention.self.value.bias', 'encoder.layer.7.attention.output.dense.weight', 'encoder.layer.7.attention.output.dense.bias', 'encoder.layer.7.attention.output.LayerNorm.weight', 'encoder.layer.7.attention.output.LayerNorm.bias', 'encoder.layer.7.intermediate.dense.weight', 'encoder.layer.7.intermediate.dense.bias', 'encoder.layer.7.output.dense.weight', 'encoder.layer.7.output.dense.bias', 'encoder.layer.7.output.LayerNorm.weight', 'encoder.layer.7.output.LayerNorm.bias', 'encoder.layer.8.attention.self.query.weight', 'encoder.layer.8.attention.self.query.bias', 'encoder.layer.8.attention.self.key.weight', 'encoder.layer.8.attention.self.key.bias', 'encoder.layer.8.attention.self.value.weight', 'encoder.layer.8.attention.self.value.bias', 'encoder.layer.8.attention.output.dense.weight', 'encoder.layer.8.attention.output.dense.bias', 'encoder.layer.8.attention.output.LayerNorm.weight', 'encoder.layer.8.attention.output.LayerNorm.bias', 'encoder.layer.8.intermediate.dense.weight', 'encoder.layer.8.intermediate.dense.bias', 'encoder.layer.8.output.dense.weight', 'encoder.layer.8.output.dense.bias', 'encoder.layer.8.output.LayerNorm.weight', 'encoder.layer.8.output.LayerNorm.bias', 'encoder.layer.9.attention.self.query.weight', 'encoder.layer.9.attention.self.query.bias', 'encoder.layer.9.attention.self.key.weight', 'encoder.layer.9.attention.self.key.bias', 'encoder.layer.9.attention.self.value.weight', 'encoder.layer.9.attention.self.value.bias', 
'encoder.layer.9.attention.output.dense.weight', 'encoder.layer.9.attention.output.dense.bias', 'encoder.layer.9.attention.output.LayerNorm.weight', 'encoder.layer.9.attention.output.LayerNorm.bias', 'encoder.layer.9.intermediate.dense.weight', 'encoder.layer.9.intermediate.dense.bias', 'encoder.layer.9.output.dense.weight', 'encoder.layer.9.output.dense.bias', 'encoder.layer.9.output.LayerNorm.weight', 'encoder.layer.9.output.LayerNorm.bias', 'encoder.layer.10.attention.self.query.weight', 'encoder.layer.10.attention.self.query.bias', 'encoder.layer.10.attention.self.key.weight', 'encoder.layer.10.attention.self.key.bias', 'encoder.layer.10.attention.self.value.weight', 'encoder.layer.10.attention.self.value.bias', 'encoder.layer.10.attention.output.dense.weight', 'encoder.layer.10.attention.output.dense.bias', 'encoder.layer.10.attention.output.LayerNorm.weight', 'encoder.layer.10.attention.output.LayerNorm.bias', 'encoder.layer.10.intermediate.dense.weight', 'encoder.layer.10.intermediate.dense.bias', 'encoder.layer.10.output.dense.weight', 'encoder.layer.10.output.dense.bias', 'encoder.layer.10.output.LayerNorm.weight', 'encoder.layer.10.output.LayerNorm.bias', 'encoder.layer.11.attention.self.query.weight', 'encoder.layer.11.attention.self.query.bias', 'encoder.layer.11.attention.self.key.weight', 'encoder.layer.11.attention.self.key.bias', 'encoder.layer.11.attention.self.value.weight', 'encoder.layer.11.attention.self.value.bias', 'encoder.layer.11.attention.output.dense.weight', 'encoder.layer.11.attention.output.dense.bias', 'encoder.layer.11.attention.output.LayerNorm.weight', 'encoder.layer.11.attention.output.LayerNorm.bias', 'encoder.layer.11.intermediate.dense.weight', 'encoder.layer.11.intermediate.dense.bias', 'encoder.layer.11.output.dense.weight', 'encoder.layer.11.output.dense.bias', 'encoder.layer.11.output.LayerNorm.weight', 'encoder.layer.11.output.LayerNorm.bias', 'pooler.dense.weight', 'pooler.dense.bias', 'cls.seq_relationship.bias', 'cls.seq_relationship.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Epoch: 0%| | 0/3 [00:00<?, ?it/s/home/awawrzynski/miniconda3/envs/polish_roberta/lib/python3.8/site-packages/torch/nn/parallel/_functions.py:61: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.
warnings.warn('Was asked to gather along dimension 0, but all '
{'loss': 0.676176025390625, 'learning_rate': 5e-05, 'epoch': 0.3427004797806717, 'step': 500} | 499/1459 [04:30<08:09, 1.96it/s]
/home/awawrzynski/miniconda3/envs/polish_roberta/lib/python3.8/site-packages/torch/optim/lr_scheduler.py:200: UserWarning: Please also save or load the state of the optimzer when saving or loading the scheduler.
warnings.warn(SAVE_STATE_WARNING, UserWarning)
{'loss': 0.671025390625, 'learning_rate': 4.355171524374517e-05, 'epoch': 0.6854009595613434, 'step': 1000}███████████▎ | 999/1459 [08:47<03:53, 1.97it/s]
Traceback (most recent call last):███████████████████████████████████████████████████████████████████████████████████████ | 1033/1459 [09:06<03:38, 1.95it/s]
File "finetune_roberta.py", line 75, in <module>
trainer.train()
File "/home/awawrzynski/miniconda3/envs/polish_roberta/lib/python3.8/site-packages/transformers/trainer.py", line 699, in train
for step, inputs in enumerate(epoch_iterator):
File "/home/awawrzynski/miniconda3/envs/polish_roberta/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 345, in __next__
data = self._next_data()
File "/home/awawrzynski/miniconda3/envs/polish_roberta/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 385, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/home/awawrzynski/miniconda3/envs/polish_roberta/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
return self.collate_fn(data)
File "/home/awawrzynski/miniconda3/envs/polish_roberta/lib/python3.8/site-packages/transformers/data/data_collator.py", line 358, in __call__
input_id, segment_id, attention_mask, label = self.create_examples_from_document(doc, i, examples)
File "/home/awawrzynski/miniconda3/envs/polish_roberta/lib/python3.8/site-packages/transformers/data/data_collator.py", line 446, in create_examples_from_document
random_start = random.randint(0, len(random_document) - 1)
File "/home/awawrzynski/miniconda3/envs/polish_roberta/lib/python3.8/random.py", line 248, in randint
return self.randrange(a, b+1)
File "/home/awawrzynski/miniconda3/envs/polish_roberta/lib/python3.8/random.py", line 226, in randrange
raise ValueError("empty range for randrange() (%d, %d, %d)" % (istart, istop, width))
ValueError: empty range for randrange() (0, 0, 0)
Epoch: 0%| | 0/3 [09:09<?, ?it/s]
Iteration: 71%|████████████████████████████████████████████████████████████████████████████████████████████████████████ | 1033/1459 [09:09<03:46, 1.88it/s]
```
## Expected behavior
Model is fine-tuned on the NSP task on the given dataset, and after that the model is saved.
| 09-23-2020 08:16:30 | 09-23-2020 08:16:30 | Hey @adamwawrzynski,
Could you create a google-colab where we can reproduce the error? It is quite difficult to reproduce your error since it seems to be very specific to your usecase.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Hi @adamwawrzynski, @patrickvonplaten,
The issue is caused by `random_start = random.randint(0, len(random_document) - 1)` having a zero length `random_document`.
Just run the following and you will get the same error:
```
import random
random.randint(0, 0 - 1)
```
The zero length `random_document` can occur if for example the data file's last line is an empty line.
You can solve for this either by ensuring that there is no empty line at the end of the data file and/or by monkey patching the `TextDatasetForNextSentencePrediction.__init__()` method (https://github.com/huggingface/transformers/blob/master/src/transformers/data/datasets/language_modeling.py#L353), by adding a line like this:
```
self.documents = [d for d in self.documents if len(d)]
```
NOTE: Because this failure only pops up when an empty random document happens to be picked, the error will NOT occur on every run; if you want to reproduce it, you may need to run your example multiple times.
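For completeness, a small sketch of the first option; the file name is an example. It strips blank lines only at the very end of the file, since interior blank lines separate documents:
```python
# Hedged sketch: remove trailing blank lines from the NSP training file so
# TextDatasetForNextSentencePrediction never builds a zero-length document.
from pathlib import Path

path = Path("train.txt")  # example path, adjust to your dataset
text = path.read_text(encoding="utf-8")
path.write_text(text.rstrip() + "\n", encoding="utf-8")
```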
|
transformers | 7,335 | closed | Is there a tokenizer that only uses whitespace for splitting Chinese sentences? | I want to use the BERT masked language model to pre-train on Chinese sentences. I have already split the Chinese sentences into meaningful words, and the data file looks as follows:
我 是 一个 队员
他 不是 一个 合格 的 老师
......
I only want to split them on whitespace, but BertWordPieceTokenizer splits them to character level. The final vocabulary looks as follows:
{'[SEP]': 3,
'一': 7,
'是': 15,
'员': 12,
'[CLS]': 2,
'[UNK]': 1,
'</S>': 6,
'[MASK]': 4,
'<S>': 5,
'队': 19,
'的': 17,
'不': 8,
'我': 14,
'他': 10,
'老': 18,
'[PAD]': 0,
'格': 16,
'个': 9,
'师': 13,
'合': 11}
How can I correct this? | 09-23-2020 03:14:56 | 09-23-2020 03:14:56 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
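One possible direction, added here as a hedged sketch rather than an answer from the original thread: build a word-level vocab from the whitespace-split corpus yourself and tell BertTokenizer not to split Chinese characters. The file names below are assumptions.
```python
# Hedged sketch: build vocab.txt from whitespace-split words, then disable
# Chinese-character splitting so multi-character words stay intact.
from collections import Counter
from transformers import BertTokenizer

counter = Counter()
with open("corpus.txt", encoding="utf-8") as f:  # example corpus path
    for line in f:
        counter.update(line.split())

specials = ["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]"]
with open("vocab.txt", "w", encoding="utf-8") as f:
    for token in specials + [w for w, _ in counter.most_common()]:
        f.write(token + "\n")

tokenizer = BertTokenizer(
    vocab_file="vocab.txt",
    do_lower_case=False,
    tokenize_chinese_chars=False,  # keep whitespace-delimited words whole
)
print(tokenizer.tokenize("我 是 一个 队员"))  # e.g. ['我', '是', '一个', '队员'] when these words are in vocab.txt
```
With such a vocab, WordPiece only falls back to `[UNK]` for words it has never seen, so the whitespace units stay intact.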
|
transformers | 7,334 | closed | [testing] skip decorators: docs, tests, bugs | This PR:
* fixes a bug in `require_torch_and_cuda`
* makes all skip decorators consistent code-wise
* adds a test for testing combinations of skip decorators and other decorators
* clarifies `testing.rst` notes
OK, so other than a small bug in `require_torch_and_cuda` our skip decorators can be used in any order.
The only problem I found so far is when they are used together with `@parameterized`, which has to come first and skip decorators last. It rewrites test names, to create a unique test name for each parameter group. and then it runs them - it has no idea it may have any skip decorators before it (The decorators get all stacked, and one below has no idea what the one above does).
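A hedged illustration of that ordering (the test, parameters, and skip decorator are just examples):
```python
# parameterized.expand must be the top decorator; skip decorators go below it.
from unittest import TestCase
from parameterized import parameterized
from transformers.testing_utils import require_torch

class ExampleTest(TestCase):
    @parameterized.expand([("relu",), ("gelu",)])
    @require_torch
    def test_activation_name(self, name):
        self.assertIsInstance(name, str)
```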
If you find other unusual decorators, please let me know and I will investigate.
<!-- This line specifies which issue to close after the pull request is merged. -->
Partially fixes #7326
@LysandreJik, @sgugger | 09-23-2020 01:04:57 | 09-23-2020 01:04:57 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7334?src=pr&el=h1) Report
> Merging [#7334](https://codecov.io/gh/huggingface/transformers/pull/7334?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/25b0463d0ba3fcbcf7fff8aa4027a2d8e959364b?el=desc) will **decrease** coverage by `3.75%`.
> The diff coverage is `43.75%`.
[](https://codecov.io/gh/huggingface/transformers/pull/7334?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #7334 +/- ##
==========================================
- Coverage 80.48% 76.73% -3.76%
==========================================
Files 181 181
Lines 34827 34827
==========================================
- Hits 28032 26724 -1308
- Misses 6795 8103 +1308
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7334?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/testing\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7334/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90ZXN0aW5nX3V0aWxzLnB5) | `67.28% <43.75%> (-1.24%)` | :arrow_down: |
| [src/transformers/configuration\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7334/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |
| [src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7334/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.87% <0.00%> (-77.64%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7334/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `19.02% <0.00%> (-74.21%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7334/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.14% <0.00%> (-72.41%)` | :arrow_down: |
| [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7334/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |
| [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7334/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `71.21% <0.00%> (-12.88%)` | :arrow_down: |
| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7334/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `81.81% <0.00%> (-4.55%)` | :arrow_down: |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7334/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.93% <0.00%> (-0.34%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7334/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.64% <0.00%> (-0.28%)` | :arrow_down: |
| ... and [11 more](https://codecov.io/gh/huggingface/transformers/pull/7334/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7334?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7334?src=pr&el=footer). Last update [25b0463...c17c310](https://codecov.io/gh/huggingface/transformers/pull/7334?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 7,333 | closed | Cannot import transformers with TF version 2.1.0 | The installation README says that transformers library requires Tensorflow version >2.0, but I can't seem to import the latest transformers 3.2 release even with Tensorflow v2.1.
```
>>> import transformers
wandb: WARNING W&B installed but not logged in. Run `wandb login` or set the WANDB_API_KEY env variable.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/amog/dev/ray/lib/python3.7/site-packages/transformers/__init__.py", line 121, in <module>
from .pipelines import (
File "/Users/amog/dev/ray/lib/python3.7/site-packages/transformers/pipelines.py", line 47, in <module>
from .modeling_tf_auto import (
File "/Users/amog/dev/ray/lib/python3.7/site-packages/transformers/modeling_tf_auto.py", line 45, in <module>
from .modeling_tf_albert import (
File "/Users/amog/dev/ray/lib/python3.7/site-packages/transformers/modeling_tf_albert.py", line 24, in <module>
from .activations_tf import get_tf_activation
File "/Users/amog/dev/ray/lib/python3.7/site-packages/transformers/activations_tf.py", line 53, in <module>
"swish": tf.keras.activations.swish,
AttributeError: module 'tensorflow_core.python.keras.api._v2.keras.activations' has no attribute 'swish'
```
Upgrading to TF 2.2 works fine, but I think this should be made more clear in the docs.
cc @jplu @sgugger
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.2.0
- Platform: Mac OS
- Python version: 3.7.7
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 2.1.0. On CPU only.
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
| 09-23-2020 00:01:24 | 09-23-2020 00:01:24 | Hello !
Indeed, the requirements have to be updated.<|||||>Has the problem been solved? I met the same issue when loading the transformers.<|||||>Hello, you have to have TF 2.3 at min. This will be fixed in the next release.<|||||>This breaks at least a couple of the tutorial notebooks. Even with TF 2.3.0 I get the same error.<|||||>If you get this message error:
```
AttributeError: module 'tensorflow_core.python.keras.api._v2.keras.activations' has no attribute 'swish'
```
It means you don't have at least TF 2.2 installed.<|||||>The problem is that Transformers uses the tf _swish activation function_ by default, which does not exist in tf 2.1: https://www.tensorflow.org/versions/r2.1/api_docs/python/tf/keras/activations.
A workaround, instead of upgrading tf to 2.2 (unavailable at this time with `conda`), is to downgrade Transformers to a version that was developed with tf 2.1.
For example, I had this warning with TF 2.1.0 and transformers 3.5.1 and it disappeared with transformers 3.0.2 [Warning: this version installs a specific version of pytorch, so it is better to use a virtual environment].
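If neither upgrading TensorFlow nor downgrading Transformers is convenient, a fail-fast check can at least replace the opaque AttributeError with a clear message (hedged sketch; the 2.2 floor follows the comments above):
```python
# Hedged sketch: fail early with a clear message instead of the opaque
# 'swish' AttributeError when TensorFlow is older than transformers expects.
from packaging import version
import tensorflow as tf

if version.parse(tf.__version__) < version.parse("2.2"):
    raise RuntimeError(
        f"transformers needs TensorFlow >= 2.2 here; found {tf.__version__}"
    )
import transformers  # noqa: E402
```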
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 7,332 | closed | data_collator error: AttributeError: 'dict' object has no attribute 'size' | # ❓ Questions & Help
## Details
I am trying to run a language model that is very similar to the [tutorial ](https://huggingface.co/blog/how-to-train). I have a custom dataset class that returns a dict with fields: dict_keys(['input_ids', 'token_type_ids', 'attention_mask']). When I run the training I get this error message:
``` File "prod2vec/train-from-scratch.py", line 289, in <module>
sys.exit(main())
File "prod2vec/train-from-scratch.py", line 265, in main
trainer.train(model_path=model_path)
File "/usr/local/lib/python3.6/site-packages/transformers/trainer.py", line 456, in train
for step, inputs in enumerate(epoch_iterator):
File "/usr/local/lib/python3.6/site-packages/tqdm/std.py", line 1127, in __iter__
for obj in iterable:
File "/usr/local/lib64/python3.6/site-packages/torch/utils/data/dataloader.py", line 345, in __next__
data = self._next_data()
File "/usr/local/lib64/python3.6/site-packages/torch/utils/data/dataloader.py", line 385, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/usr/local/lib64/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 35, in fetch
return self.collate_fn(data)
File "/usr/local/lib/python3.6/site-packages/transformers/data/data_collator.py", line 79, in __call__
batch = self._tensorize_batch(examples)
File "/usr/local/lib/python3.6/site-packages/transformers/data/data_collator.py", line 91, in _tensorize_batch
length_of_first = examples[0].size(0)
AttributeError: 'dict' object has no attribute 'size'
```
The error message is not surprising, as examples[0] is a dictionary with the three fields mentioned above. I am curious what mistake I am making, and where.
 | 09-22-2020 22:55:36 | 09-22-2020 22:55:36 | It looks like you're not using the latest version of transformers (from the stack trace). This bug has been fixed, so you shouldn't have this problem with transformers 3.1.1.
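For example, a quick way to confirm which version you are on (a trivial sketch):
```python
import transformers

# Dict-style batches from a custom dataset need the fix that landed around v3.1.1.
print(transformers.__version__)
```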
In general, when reporting a bug/asking a question, make sure you include your version of transformers so we can help more efficiently. You can get it by running the command `transformers-cli env` and pasting the results.<|||||>I will add the version from now on. Your suggestion worked, thanks a lot!<|||||>I use transformers 3.4.0 and met the same error.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,331 | closed | [s2s] only save metrics.json from rank zero | <!-- This line specifies which issue to close after the pull request is merged. -->
Fixes #{issue number}
| 09-22-2020 22:17:00 | 09-22-2020 22:17:00 | |
transformers | 7,330 | closed | Ensure that integrations are imported before transformers or ml libs | This PR fixes a problem with some 3rd-party integrations that need to be imported before any transformers or other machine learning framework Python modules.
This PR makes the following changes:
1. Moves `import .integrations` in `__init__.py` before any other transformers imports
2. Moves ML imports in .integrations below 3rd-party imports
3. Used math.ceil() rather than numpy.ceil() as that was overkill
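At the user level, the ordering constraint behind change 1 looks roughly like this (an illustrative sketch, not code from this PR; comet_ml is the tracker mentioned below):
```python
# Experiment trackers such as comet_ml must be imported before any ML framework,
# which is why transformers now imports .integrations before everything else internally.
import comet_ml  # noqa: F401

import torch  # noqa: F401
from transformers import Trainer, TrainingArguments  # safe: integrations were loaded first
```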
Before PR:
* failed with comet_ml
After PR:
* works with comet_ml | 09-22-2020 22:03:01 | 09-22-2020 22:03:01 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7330?src=pr&el=h1) Report
> Merging [#7330](https://codecov.io/gh/huggingface/transformers/pull/7330?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f5518e56318a79056ba3c80cbece29d9fe98558c?el=desc) will **decrease** coverage by `0.47%`.
> The diff coverage is `88.88%`.
[](https://codecov.io/gh/huggingface/transformers/pull/7330?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #7330 +/- ##
==========================================
- Coverage 79.30% 78.83% -0.48%
==========================================
Files 181 181
Lines 34828 34828
==========================================
- Hits 27620 27456 -164
- Misses 7208 7372 +164
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7330?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/integrations.py](https://codecov.io/gh/huggingface/transformers/pull/7330/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9pbnRlZ3JhdGlvbnMucHk=) | `29.00% <87.50%> (ø)` | |
| [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/7330/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.38% <100.00%> (ø)` | |
| [src/transformers/modeling\_tf\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7330/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `16.37% <0.00%> (-82.31%)` | :arrow_down: |
| [src/transformers/modeling\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7330/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sYXlvdXRsbS5weQ==) | `25.06% <0.00%> (-69.40%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/7330/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `21.62% <0.00%> (-69.31%)` | :arrow_down: |
| [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7330/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |
| [src/transformers/configuration\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7330/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xheW91dGxtLnB5) | `80.00% <0.00%> (-20.00%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7330/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `88.37% <0.00%> (-4.87%)` | :arrow_down: |
| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7330/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `81.81% <0.00%> (-4.55%)` | :arrow_down: |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7330/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.93% <0.00%> (-0.34%)` | :arrow_down: |
| ... and [18 more](https://codecov.io/gh/huggingface/transformers/pull/7330/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7330?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7330?src=pr&el=footer). Last update [f5518e5...0554448](https://codecov.io/gh/huggingface/transformers/pull/7330?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 7,329 | closed | Problem loading a dynamic quantized distilbert model. | Hello and thanks for your awesome library,
## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-4.15.0-117-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@VictorSanh
@stefan-it
## Information
I'm trying to optimize a fine-tuned (for token classification, NER) distilBert model through Dynamic Quantization.
I use this line:
```python
quantized_model = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)
```
The model size goes from 540 MB to 411 MB.
The quantized model works fine when I use it straight away in the script to make predictions; however, I'm having trouble saving it and reloading it.
I tried a few things, first using save_pretrained:
```python
quantized_model.save_pretrained(quantized_output_dir)
```
And then loading it using:
```python
model = AutoModelForTokenClassification.from_pretrained(quantized_output_dir)
```
When I use it to make predictions, I get the warning:
``
Some weights of the model checkpoint at data/model3/quantized3/ were not used when initializing DistilBertForTokenClassification: ['distilbert.transformer.layer.0.attention.q_lin.scale', 'distilbert.transformer.layer.0.attention.q_lin.zero_point', 'distilbert.transformer.layer.0.attention.q_lin._packed_params.dtype', 'distilbert.transformer.layer.0.attention.q_lin._packed_params._packed_params', 'distilbert.transformer.layer.0.attention.k_lin.scale', 'distilbert.transformer.layer.0.attention.k_lin.zero_point', 'distilbert.transformer.layer.0.attention.k_lin._packed_params.dtype', 'distilbert.transformer.layer.0.attention.k_lin._packed_params._packed_params', 'distilbert.transformer.layer.0.attention.v_lin.scale', 'distilbert.transformer.layer.0.attention.v_lin.zero_point', 'distilbert.transformer.layer.0.attention.v_lin._packed_params.dtype', 'distilbert.transformer.layer.0.attention.v_lin._packed_params._packed_params', 'distilbert.transformer.layer.0.attention.out_lin.scale', 'distilbert.transformer.layer.0.attention.out_lin.zero_point', 'distilbert.transformer.layer.0.attention.out_lin._packed_params.dtype', 'distilbert.transformer.layer.0.attention.out_lin._packed_params._packed_params', 'distilbert.transformer.layer.0.ffn.lin1.scale', 'distilbert.transformer.layer.0.ffn.lin1.zero_point', 'distilbert.transformer.layer.0.ffn.lin1._packed_params.dtype', 'distilbert.transformer.layer.0.ffn.lin1._packed_params._packed_params', 'distilbert.transformer.layer.0.ffn.lin2.scale', 'distilbert.transformer.layer.0.ffn.lin2.zero_point', 'distilbert.transformer.layer.0.ffn.lin2._packed_params.dtype', 'distilbert.transformer.layer.0.ffn.lin2._packed_params._packed_params', 'distilbert.transformer.layer.1.attention.q_lin.scale',
``
For all the layers.
And of course I got wrong predictions because it's as if the model isn't fine-tuned.
I tried saving it using:
```python
torch.save(quantized_model.state_dict(), path)
```
loading it using:
```python
config = DistilBertConfig.from_pretrained("distilbert-base-multilingual-cased", num_labels=5)
model = DistilBertForTokenClassification.from_pretrained("distilbert-base-multilingual-cased", config=config)
model.load_state_dict(torch.load(path))
```
and I got this runtime error:
``
RuntimeError: Error(s) in loading state_dict for DistilBertForTokenClassification:
Missing key(s) in state_dict: "distilbert.transformer.layer.0.attention.q_lin.weight", "distilbert.transformer.layer.0.attention.q_lin.bias", "distilbert.transformer.layer.0.attention.k_lin.weight", "distilbert.transformer.layer.0.attention.k_lin.bias", "distilbert.transformer.layer.0.attention.v_lin.weight", "distilbert.transformer.layer.0.attention.v_lin.bias", "distilbert.transformer.layer.0.attention.out_lin.weight", "distilbert.transformer.layer.0.attention.out_lin.bias", "distilbert.transformer.layer.0.ffn.lin1.weight", "distilbert.transformer.layer.0.ffn.lin1.bias", "distilbert.transformer.layer.0.ffn.lin2.weight", "distilbert.transformer.layer.0.ffn.lin2.bias", "distilbert.transformer.layer.1.attention.q_lin.weight",
Unexpected key(s) in state_dict: "distilbert.transformer.layer.0.attention.q_lin.scale", "distilbert.transformer.layer.0.attention.q_lin.zero_point", "distilbert.transformer.layer.0.attention.q_lin._packed_params.dtype", "distilbert.transformer.layer.0.attention.q_lin._packed_params._packed_params", "distilbert.transformer.layer.0.attention.k_lin.scale", "distilbert.transformer.layer.0.attention.k_lin.zero_point", "distilbert.transformer.layer.0.attention.k_lin._packed_params.dtype", "distilbert.transformer.layer.0.attention.k_lin._packed_params._packed_params", "distilbert.transformer.layer.0.attention.v_lin.scale", "distilbert.transformer.layer.0.attention.v_lin.zero_point", "distilbert.transformer.layer.0.attention.v_lin._packed_params.dtype", "distilbert.transformer.layer.0.attention.v_lin._packed_params._packed_params", "distilbert.transformer.layer.0.attention.out_lin.scale", "distilbert.transformer.layer.0.attention.out_lin.zero_point", "distilbert.transformer.layer.0.attention.out_lin._packed_params.dtype", "distilbert.transformer.layer.0.attention.out_lin._packed_params._packed_params", "distilbert.transformer.layer.0.ffn.lin1.scale", "distilbert.transformer.layer.0.ffn.lin1.zero_point", "distilbert.transformer.layer.0.ffn.lin1._packed_params.dtype", "distilbert.transformer.layer.0.ffn.lin1._packed_params._packed_params", "distilbert.transformer.layer.0.ffn.lin2.scale", "distilbert.transformer.layer.0.ffn.lin2.zero_point", "distilbert.transformer.layer.0.ffn.lin2._packed_params.dtype", "distilbert.transformer.layer.0.ffn.lin2._packed_params._packed_params", "distilbert.transformer.layer.1.attention.q_lin.scale", "classifier._packed_params.dtype", "classifier._packed_params._packed_params".
``
For all the layers also (didn't put it all to shorten the text).
Here is the text when printing the quantized model:
```text
DistilBertForTokenClassification(
(distilbert): DistilBertModel(
(embeddings): Embeddings(
(word_embeddings): Embedding(119547, 768, padding_idx=0)
(position_embeddings): Embedding(512, 768)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(transformer): Transformer(
(layer): ModuleList(
(0): TransformerBlock(
(attention): MultiHeadSelfAttention(
(dropout): Dropout(p=0.1, inplace=False)
(q_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(k_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(v_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(out_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
)
(sa_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(ffn): FFN(
(dropout): Dropout(p=0.1, inplace=False)
(lin1): DynamicQuantizedLinear(in_features=768, out_features=3072, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(lin2): DynamicQuantizedLinear(in_features=3072, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
)
(output_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
)
(1): TransformerBlock(
(attention): MultiHeadSelfAttention(
(dropout): Dropout(p=0.1, inplace=False)
(q_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(k_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(v_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(out_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
)
(sa_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(ffn): FFN(
(dropout): Dropout(p=0.1, inplace=False)
(lin1): DynamicQuantizedLinear(in_features=768, out_features=3072, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(lin2): DynamicQuantizedLinear(in_features=3072, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
)
(output_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
)
(2): TransformerBlock(
(attention): MultiHeadSelfAttention(
(dropout): Dropout(p=0.1, inplace=False)
(q_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(k_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(v_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(out_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
)
(sa_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(ffn): FFN(
(dropout): Dropout(p=0.1, inplace=False)
(lin1): DynamicQuantizedLinear(in_features=768, out_features=3072, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(lin2): DynamicQuantizedLinear(in_features=3072, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
)
(output_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
)
(3): TransformerBlock(
(attention): MultiHeadSelfAttention(
(dropout): Dropout(p=0.1, inplace=False)
(q_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(k_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(v_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(out_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
)
(sa_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(ffn): FFN(
(dropout): Dropout(p=0.1, inplace=False)
(lin1): DynamicQuantizedLinear(in_features=768, out_features=3072, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(lin2): DynamicQuantizedLinear(in_features=3072, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
)
(output_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
)
(4): TransformerBlock(
(attention): MultiHeadSelfAttention(
(dropout): Dropout(p=0.1, inplace=False)
(q_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(k_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(v_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(out_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
)
(sa_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(ffn): FFN(
(dropout): Dropout(p=0.1, inplace=False)
(lin1): DynamicQuantizedLinear(in_features=768, out_features=3072, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(lin2): DynamicQuantizedLinear(in_features=3072, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
)
(output_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
)
(5): TransformerBlock(
(attention): MultiHeadSelfAttention(
(dropout): Dropout(p=0.1, inplace=False)
(q_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(k_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(v_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(out_lin): DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
)
(sa_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(ffn): FFN(
(dropout): Dropout(p=0.1, inplace=False)
(lin1): DynamicQuantizedLinear(in_features=768, out_features=3072, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
(lin2): DynamicQuantizedLinear(in_features=3072, out_features=768, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
)
(output_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
)
)
)
)
(dropout): Dropout(p=0.1, inplace=False)
(classifier): DynamicQuantizedLinear(in_features=768, out_features=5, dtype=torch.qint8, qscheme=torch.per_tensor_affine)
)
```
## Expected behavior
You can successfully load the quantized fine-tuned model to make predictions.
Can be the "DynamicQuantizedLinear" instead of "Linear" be causing this problem ?
Thanks in advance for your help. | 09-22-2020 21:19:29 | 09-22-2020 21:19:29 | I got the same issue here, it would be great to know why<|||||>You are trying to load quantized weights (`quantized_model = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)`) into a non-quantized module (ModelForTokenClassification).
You should first make sure that the instance you are loading into is actually a quantized model.<|||||>Thanks for your response. So if I understood correctly, I have to write the code to load the quantized model? Something similar to DistilBertForTokenClassification?<|||||>Any updates on this?<|||||>It is a matter of adding a few lines:
```python
# Transform your model into a quantized model
quantized_model = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)
# Load the quantized weights into the quantized model (module in torch)
quantized_model.load_state_dict(torch.load(YOUR_PATH_TO_THE_QUANTIZED_WEIGHTS))
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
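For anyone landing here later, the full round trip looks roughly like this (a sketch with placeholder paths; the quantized skeleton has to be rebuilt before loading the weights):
```python
import torch
from transformers import DistilBertForTokenClassification

model = DistilBertForTokenClassification.from_pretrained(
    "distilbert-base-multilingual-cased", num_labels=5
)
quantized = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)
torch.save(quantized.state_dict(), "quantized_state.pt")

# Later / elsewhere: rebuild the same quantized structure, then load the saved weights.
model = DistilBertForTokenClassification.from_pretrained(
    "distilbert-base-multilingual-cased", num_labels=5
)
quantized = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)
quantized.load_state_dict(torch.load("quantized_state.pt"))
```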
|
transformers | 7,328 | closed | Add PRADO model | # 🌟 New model addition
## Model description
PRADO is a model made by Google that performs comparably to BERT with 100x fewer parameters.
[link to the paper](https://www.aclweb.org/anthology/D19-1506.pdf)
[git to the model code](https://github.com/tensorflow/models/tree/master/research/sequence_projection)
## Open source status
* [X] the model implementation is available: (give details)
* [ ] the model weights are available: (give details)
* [X] who are the authors: (mention them, if possible by @gh-username)
Prabhu Kaliamoorthi / Sujith Ravi / Zornitsa Kozareva
 | 09-22-2020 20:33:36 | 09-22-2020 20:33:36 | Yeah, this is a good model to go with if the text sequence is too long.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,327 | closed | PegasusTokenizer: Newline symbol | Ported models generate the `<n>` token at the beginning of sentences, whereas ours do not. The pegasus [original code](https://github.com/google-research/pegasus/blob/master/pegasus/ops/public_parsing_ops.py#L40) replaces `\n` newline symbol with `<n>`. `PegasusTokenizer` should probably do this.
```python
_NEWLINE_SYMBOL = "<n>"
text = tf.strings.regex_replace(text, "\n", _NEWLINE_SYMBOL)
```
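A rough Python-level sketch of that preprocessing on the `PegasusTokenizer` side could be (an illustration only, not the actual tokenizer code):
```python
_NEWLINE_SYMBOL = "<n>"


def replace_newlines(text: str) -> str:
    # Mirror the original pegasus preprocessing: map newline characters to the <n> symbol.
    return text.replace("\n", _NEWLINE_SYMBOL)
```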
| 09-22-2020 18:03:06 | 09-22-2020 18:03:06 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,326 | closed | Check decorator order | As @stas00 pointed out, the slow decorator is ignored if it's not put last. To make sure we don't make the mistake unintentionally and to fix the places where this is not the case, I wrote a script to check the decorator order and fail on `make quality` if there is a wrong order somewhere.
| 09-22-2020 16:59:29 | 09-22-2020 16:59:29 | @sgugger, let me investigate this. @slow should be the same as any other skip decorators, so the order there shouldn't matter. They should be able to stack up. If they don't, it's probably a bug somewhere.
It's possible that some other decorators don't play well with our skip decorators, which would require all the skip decorators to be in the last group. But all the ones under our control should be interchangeable order-wise.
I initially discovered this issue having `@slow`, followed by `@parametrized` and had to swap the order for `@slow` to work.
I will look at it today.<|||||>Thanks @stas00!<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7326?src=pr&el=h1) Report
> Merging [#7326](https://codecov.io/gh/huggingface/transformers/pull/7326?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d6bc72c469c38a611fb99c3d61807f59b43fe2c9?el=desc) will **decrease** coverage by `0.37%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/7326?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #7326 +/- ##
==========================================
- Coverage 77.40% 77.03% -0.38%
==========================================
Files 181 181
Lines 34827 34827
==========================================
- Hits 26958 26828 -130
- Misses 7869 7999 +130
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7326?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7326/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |
| [src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7326/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.87% <0.00%> (-77.64%)` | :arrow_down: |
| [...c/transformers/modeling\_tf\_transfo\_xl\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7326/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `10.00% <0.00%> (-76.00%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7326/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `20.38% <0.00%> (-67.72%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7326/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.38% <0.00%> (-29.59%)` | :arrow_down: |
| [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7326/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `83.11% <0.00%> (-10.39%)` | :arrow_down: |
| [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7326/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `83.58% <0.00%> (-8.96%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7326/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.60% <0.00%> (-7.31%)` | :arrow_down: |
| [src/transformers/modeling\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7326/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `88.77% <0.00%> (-2.55%)` | :arrow_down: |
| [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7326/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |
| ... and [18 more](https://codecov.io/gh/huggingface/transformers/pull/7326/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7326?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7326?src=pr&el=footer). Last update [d6bc72c...7290951](https://codecov.io/gh/huggingface/transformers/pull/7326?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>@sgugger, please see https://github.com/huggingface/transformers/pull/7334
Implications for this PR: At the moment the check needs to do that only for `@parameterized.*` - it has to be first. All other skip decorators require no special order.
For `@parameterized` we have the following possible imported decorators (let's hope they all are consistently imported):
```
@parameterized
@parameterized.expand
@parameterized_class
```
The full doc is here: https://pypi.org/project/parameterized/
There is no problem whatsoever with `@pytest.mark.parametrize` (but it only works with non-unittests) - can use it in any order.
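So a correct ordering looks like this (a small sketch; the test body is just a placeholder):
```python
import unittest

from parameterized import parameterized
from transformers.testing_utils import slow


class ExampleTest(unittest.TestCase):
    # @parameterized.expand goes first (top-most); skip decorators such as @slow go below it.
    @parameterized.expand([("first",), ("second",)])
    @slow
    def test_something(self, name):
        self.assertIsInstance(name, str)
```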
That's an awesome validator! Thanks for adding this to our magic toolkit, @sgugger <|||||>Ok, I changed the script to detect this then.<|||||>the swapping of order in the first parts of the PR is not needed, but there is no harm in it either. You can just reset those or not - up to you.<|||||>Feeling lazy so since it doesn't matter, let's keep those.<|||||>LGTM |
transformers | 7,325 | closed | Mark big downloads slow | This PR adds the slow decorator for models we don't want to download at each CI run. | 09-22-2020 16:17:22 | 09-22-2020 16:17:22 | |
transformers | 7,324 | closed | [s2s] Marian beam search slow for en-de | Hey @sshleifer (tagging because of translation) - I'm not sure whether I am misunderstanding something or this is an actual issue so apologies, but it seems like validation/eval is significantly slower than training and is a serious bottleneck when fine-tuning translation models.
I am running on Colab with V100, and trying to finetune a MarianMT model on a data set of ~10k sentences of length of up to 300, training on 90% of the data takes about 2 minutes per epoch whereas validation/eval on the remaining 10% of the data takes about 6-7 minutes without any fancy eval flags, no beamsearch etc. This results in fine-tuning being ~5x slower if performing validation every epoch (and still significantly slower even if only performing partial validation or every other epoch etc).
I am using apex and pytorch 1.5.1 as instructed in the readme and in the issues regarding apex fp16 training and bs=16 for both train and validation, different batch sizes did not seem to help. Happy to post more info but the rest is pretty similar to the seq2seq examples. | 09-22-2020 16:09:18 | 09-22-2020 16:09:18 | Interesting, a few pointers:
+ Beam search happens by default -- the fancy eval flags (`--eval_beams=2 --eval_max_gen_length=128 --num_val_sanity_steps=0`) make beam search faster.
+ fp16 should work with or without apex.
+ Shorter sequences make val faster.
+ When the marian models are not well trained, they can get into infinite loops and generate forever. That's why `--eval_max_gen_length` is essential.
Still, 6-7 minutes to run beam search on 1,000 sentences is shockingly slow. Try running a marian checkpoint on your val set and seeing how long that takes.
If you share a colab that reproduces the slowdown, I can take a closer look.
<|||||>Hey, sorry for not clarifying some things:
- I'm aware of the eval flags and I did set them such as eval_beams=1, didn't seem to make a difference. I also explicitly specify `--check_val_every_n_epoch=1 --limit_val_batches=1.0 --val_check_interval=1.0` (and checked PL Trainer docs) to make sure that I am doing a single pass over my validation set once every epoch (or X epochs).
- Should I disable apex then? Could it be the culprit?
- Sequence length also did not seem to make much of a difference, something like linear improvement (so from length of 300 and 7 minutes to length of 200 and 5 minutes). The original models are trained with a maximum length of 500 so model should support 300 technically.
- The length flags are all set (for source, target and for train,eval). The fine-tuned model starts off at ~10 BLEU before fine-tuning and ends up at ~45 BLEU after fine-tuning, so clearly the training is working, but the validation still takes ~7 minutes throughout the process.
- Running eval.py using both a pre-trained Marian model and my own fine-tuned version still takes 7 minutes, so the same as the validation that happens during the fine-tuning process.
- Will it help if I share a Colab which reproduces this problem? Should I share it as an .ipynb file or as a link to the actual Colab URL?<|||||>
Can you send me a run_eval.py command that I can run on my machine that you expect to be faster? This is hopefully going to be the simplest manifestation of the problem. Clearly apex not the culprit if run_eval is slow.
Marian generation is a much smaller surface area than translation finetuning. Many fewer moving parts.<|||||>Thanks for the fast response and willingness to help! I made a short notebook that demonstrates the problem [here](https://colab.research.google.com/drive/11HNlWfFjzBJXDEadswwkeEhUzoh6tWFm?usp=sharing). The only meaningful difference from the actual env I run my code on is apex, which as you said should not affect eval speed.<|||||>I am busy today and tomorrow but will circle back to this if you are still stuck.
One admittedly unlikely possibility is that the numbers at the beginning of the source sentences throw the model off.
Another is that this is expected performance/speed for 1000 examples * 300 tokens * 5 beams. For example, running marian on 2000 shorter wmt examples with max_len=128, 1 GPU takes about 3 minutes. So if the seqlen cost is superlinear, as theory suggests, 6-7 minutes might not be unreasonable. For example, on [CPU](https://huggingface.co/Helsinki-NLP/opus-mt-en-de?text=230%29+There+are+two+corridors+in+the+right+hand+side+wing+of+Malchut%2C+which+divide+from+this+wing+into+two+other+nations%2C+which+are+close+to+Israel+in+the+unification%2C+to+bring+them+into+the+corridors.+Under+the+left+wing+are+two+other+corridors%2C+which+divide+into+two+other+nations%2C+Amon+and+Moav%2C+and+they+are+all+called+%E2%80%9Cliving+soul.%E2%80%9D+Previously+it+was+said+that+there+are+several+corridors%2C+and+here+it+is+said+that+there+are+only+two+on+the+right+and+two+on+the+left.+The+thing+is+that+here+it+is+only+about+the+inclusive%2C+meaning+that+there+are+two+inclusive+corridors+on+the+right%2C+for+the+nations+that+belong+to+the+right%2C+and+there+are+two+inclusive+corridors+on+the+left%2C+for+the+nations+that+belong+to+the+left.+The+two+nations+on+the+right+include+all+the+nations+on+the+right+that+relate+to+the+two+general+corridors+on+the+right+wing%2C+but+The+Zohar+does+not+explain+which+are+they.+The+two+nations+on+the+left+include+all+the+nations+on+the+left%2C+which+are+Amon+and+Moav%2C+and+relate+to+the+two+general+corridors+on+the+left+wing.+All+of+them+are+called+%E2%80%9CLiving+souls.%E2%80%9D+All+the+souls+of+proselytes+that+come+from+all+the+nations+are+called+%E2%80%9Cliving+souls.%E2%80%9D+This+is+because+they+can+receive+only+from+the+Zivug+of+Gadlut+de+ZON%2C+when+ZON+are+in+the+place+of+upper+AVI.+Then+Malchut+is+called+%E2%80%9Cliving+soul%E2%80%9D+because+there+is+the+light+of+AVI+in+her%2C+which+is+the+light+of+Haya.+And+since+the+souls+of+the+proselytes+receive+from+the+wings+of+the+living+soul%2C+they+are+called+%E2%80%9Cliving+souls%2C%E2%80%9D+as+well.) your first examples takes 13 seconds.
You could try sentence splitting:
### Sentence Splitting starter code
```python
SRC = """230) There are two corridors in the right hand side wing of Malchut, which divide from this wing into two other nations, which are close to Israel in the unification, to bring them into the corridors. Under the left wing are two other corridors, which divide into two other nations, Amon and Moav, and they are all called “living soul.” Previously it was said that there are several corridors, and here it is said that there are only two on the right and two on the left. The thing is that here it is only about the inclusive, meaning that there are two inclusive corridors on the right, for the nations that belong to the right, and there are two inclusive corridors on the left, for the nations that belong to the left. The two nations on the right include all the nations on the right that relate to the two general corridors on the right wing, but The Zohar does not explain which are they. The two nations on the left include all the nations on the left, which are Amon and Moav, and relate to the two general corridors on the left wing. All of them are called “Living souls.” All the souls of proselytes that come from all the nations are called “living souls.” This is because they can receive only from the Zivug of Gadlut de ZON, when ZON are in the place of upper AVI. Then Malchut is called “living soul” because there is the light of AVI in her, which is the light of Haya. And since the souls of the proselytes receive from the wings of the living soul, they are called “living souls,” as well.
"""
from transformers import MarianMTModel, MarianTokenizer

model = MarianMTModel.from_pretrained('Helsinki-NLP/opus-mt-en-de')
tokenizer = MarianTokenizer.from_pretrained('Helsinki-NLP/opus-mt-en-de')
splat = SRC.split('.')
batch = tokenizer.prepare_seq2seq_batch(splat)
batch1 = tokenizer.prepare_seq2seq_batch([SRC])
# time these
g0 = model.generate(**batch)
g1 = model.generate(**batch1)
```
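To make the `# time these` step concrete, a minimal timing wrapper (reusing `model`, `batch` and `batch1` from the snippet above) could be:
```python
import time

for name, inputs in [("sentence-split", batch), ("single long input", batch1)]:
    start = time.time()
    model.generate(**inputs)
    print(f"{name}: {time.time() - start:.1f}s")
```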
<|||||>Thanks for taking time to check it out. Wanted to add a few more things:
- It's not specific to en-de (I just put a random Marian model in the example).
- It's difficult to split longer sentences in training/validation phase, because often periods or line breaks are in different places and the chunks do not necessarily correspond, so generating such language-paired data from real text is more difficult.
- Longer sentences often contain references and topics which would be lost when breaking them down, and thus the quality of the translation would be degraded.
Regarding the experiments you've suggested:
- I tried running eval on 1k length 128 sentences as you suggested, and it took 5.5 minutes without changing the eval parameters and under 2 minutes (~3x faster) when forcing num_beams=1.
- However when I run fine-tuning, I see the dramatic slowdown I reported before whenever validation is included in the run, with the parameter eval_beams=1 vs eval_beams=5 helping but still not entirely accounting for the issue. I have added apex to the notebook (which seems to be required to run finetuning with fp16, otherwise I get the error `You set 'use_amp=True' but do not have apex installed`), and the full details can be seen [here](https://colab.research.google.com/drive/11HNlWfFjzBJXDEadswwkeEhUzoh6tWFm?usp=sharing).
I use the same 1k dataset for both train and val with MAX_LEN=128 for the model.
Time for 1 epoch without validation: 5 seconds.
Time for 1 epoch with validation with eval_beams=1: ~100 seconds.
Time for 1 epoch with validation with eval_beams=5: ~200 seconds.
This seems to indicate that validation/eval is ~20x slower than the actual training as a baseline, which also aligns with my experience from my actual fine-tuning experiments, so I'm wondering if this is expected behavior (for example the train_mbart_enro script performs validation 4 times per epoch, which therefore must be incredibly slow?). If this is the expected performance then feel free to close this issue and I'll just try to run less validation :)<|||||>+ For summarization, I set `--n_val 100` or so if I want faster validation.
+ the wmt_enro dataset has a very different train to validation ratio: 610,319 vs 2,000. So val is not a dominant fraction of the total cost.
+ In the new `Seq2SeqTrainer` from @patil-suraj, evaluation with beam search will be optional. Hopefully this will improve your experience.<|||||>I see, thanks again! |
transformers | 7,323 | closed | T5 Cross-attention Decoder - Possible bug with relative_bias | @patrickvonplaten - investigate whether this is a possible bug: https://github.com/google-research/text-to-text-transfer-transformer/issues/415
| 09-22-2020 15:48:11 | 09-22-2020 15:48:11 | |
transformers | 7,322 | closed | Add num workers cli arg | <!-- This line specifies which issue to close after the pull request is merged. -->
Fixes #6316
| 09-22-2020 15:31:29 | 09-22-2020 15:31:29 | I've addressed the comments regarding the docstring and help message.
I'm a little less familiar with TensorFlow 2.0, but it seems like any preprocessing is done by the user before passing `train_dataset` and `eval_dataset` to the `TFTrainer` so there isn't an opportunity to set `num_parallel_calls` (I wasn't able to find any calls to `map` or `interleave` save for some markdown examples).<|||||>Indeed there is no call directly in the trainer and these functions have to be run directly in the example script. Nevertheless, this parameter is still useful as it can be used directly in the example script, instead of updating it manually. |
transformers | 7,321 | closed | Example Format of Data for token classification | Hi!! I'd like to train the token classification model but I don't know what is the right format of data for token classification training. Thank you. | 09-22-2020 15:19:10 | 09-22-2020 15:19:10 | Hi @Michael95-m ,
the dataset format for e.g. the normal NER task is relatively simple: one token - label pair per line, and an empty line separates sentences.
So here's a good example from Spanish CoNLL dataset for NER:
```bash
Melbourne B-LOC
( O
Australia B-LOC
) O
, O
25 O
may O
( O
EFE B-ORG
) O
. O
- O
El O
Abogado B-PER
General I-PER
del I-PER
Estado I-PER
, O
Daryl B-PER
Williams I-PER
, O
subrayó O
hoy O
la O
necesidad O
de O
tomar O
medidas O
para O
proteger O
al O
sistema O
judicial O
australiano O
frente O
a O
una O
página O
de O
internet O
que O
imposibilita O
el O
cumplimiento O
de O
los O
principios O
básicos O
de O
la O
Ley B-MISC
. O
```
Each line consists of token/word and its corresponding label, delimited by a space. An empty line denotes a new sentence.
Technically, the parsing of your input file is done here:
https://github.com/huggingface/transformers/blob/f5518e56318a79056ba3c80cbece29d9fe98558c/examples/token-classification/tasks.py#L18-L44
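For reference, a simplified standalone sketch of that parsing logic (not the exact code from `tasks.py`) looks like this:
```python
def read_examples(path):
    """Parse a token-per-line file into (tokens, labels) pairs, one pair per sentence."""
    sentences, tokens, labels = [], [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:  # empty line = sentence boundary
                if tokens:
                    sentences.append((tokens, labels))
                    tokens, labels = [], []
            else:
                parts = line.split()
                tokens.append(parts[0])
                labels.append(parts[-1] if len(parts) > 1 else "O")
    if tokens:
        sentences.append((tokens, labels))
    return sentences
```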
I hope this helps :)
<|||||>Thanks for your kind answer !!! |
transformers | 7,320 | closed | Test CI with higher timeout | Test.
| 09-22-2020 15:03:40 | 09-22-2020 15:03:40 | |
transformers | 7,319 | closed | [Bug Fix] Fix run_squad.py evaluation code doesn't use probabilities | Modification of the run_squad.py fine-tuning example so that it uses the answer-correctness probabilities the models produce when evaluating the model and calculating the best thresholds.
Evaluation was previously done without the model's probabilities, using default zero values instead. This corrupted the evaluation results and the best thresholds (which always evaluated to 0.0).
**Notice: many SQuAD models were evaluated without the probabilities; therefore, the results published in their model cards are possibly wrong.**
Example: [ahotrod/electra_large_discriminator_squad2_512](https://huggingface.co/ahotrod/electra_large_discriminator_squad2_512)
The results with current evaluation script:
```
"exact": 87.09677419354838,
"f1": 89.98343832723452,
"total": 11873,
"HasAns_exact": 84.66599190283401,
"HasAns_f1": 90.44759839056285,
"HasAns_total": 5928,
"NoAns_exact": 89.52060555088309,
"NoAns_f1": 89.52060555088309,
"NoAns_total": 5945,
"best_exact": 87.09677419354838,
"best_exact_thresh": 0.0,
"best_f1": 89.98343832723432,
"best_f1_thresh": 0.0
```
The results after the fix:
```
'exact': 87.00412701086499,
'f1': 89.77725380276271,
'total': 11873,
'HasAns_exact': 83.80566801619433,
'HasAns_f1': 89.35987422405582,
'HasAns_total': 5928,
'NoAns_exact': 90.19343986543313,
'NoAns_f1': 90.19343986543313,
'NoAns_total': 5945,
'best_exact': 87.34102585698643,
'best_exact_thresh': 0.09882385462915344,
'best_f1': 90.07804792988485,
'best_f1_thresh': 0.09882385462915344
```
| 09-22-2020 14:34:33 | 09-22-2020 14:34:33 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7319?src=pr&el=h1) Report
> Merging [#7319](https://codecov.io/gh/huggingface/transformers/pull/7319?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e4b94d8e581e547eaf9e47b76fd1a6497e911905?el=desc) will **decrease** coverage by `2.73%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/7319?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #7319 +/- ##
==========================================
- Coverage 81.59% 78.85% -2.74%
==========================================
Files 174 174
Lines 33671 33671
==========================================
- Hits 27474 26552 -922
- Misses 6197 7119 +922
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7319?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7319/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `16.37% <0.00%> (-82.31%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7319/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mdW5uZWwucHk=) | `18.53% <0.00%> (-75.51%)` | :arrow_down: |
| [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7319/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `19.71% <0.00%> (-51.90%)` | :arrow_down: |
| [src/transformers/configuration\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7319/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `75.00% <0.00%> (-25.00%)` | :arrow_down: |
| [src/transformers/activations\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7319/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9uc190Zi5weQ==) | `54.16% <0.00%> (-20.84%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7319/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `81.17% <0.00%> (-15.39%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7319/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `72.25% <0.00%> (-10.00%)` | :arrow_down: |
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7319/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.92% <0.00%> (-0.28%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7319/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `83.08% <0.00%> (+0.24%)` | :arrow_up: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7319/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.88% <0.00%> (+0.38%)` | :arrow_up: |
| ... and [14 more](https://codecov.io/gh/huggingface/transformers/pull/7319/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7319?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7319?src=pr&el=footer). Last update [e4b94d8...be00f57](https://codecov.io/gh/huggingface/transformers/pull/7319?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Opened Issue: [[BUG] Wrong Scores for many SQUAD models ](https://github.com/huggingface/transformers/issues/8710) #8710<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 7,318 | closed | Fixes for LayoutLM | Adds the commands from the new script to check for model copies and cleans up the docstrings a bit.
| 09-22-2020 14:31:36 | 09-22-2020 14:31:36 | |
transformers | 7,317 | closed | Create README.md | <!-- add model card to blinoff/roberta-base-russian-v0 -->
| 09-22-2020 13:34:27 | 09-22-2020 13:34:27 | |
transformers | 7,316 | closed | Support for Windows in check_copies | This is (hopefully) all that is necessary to make the script `check_copies.py` work on Windows.
@jplu if you can check out this PR locally and confirm, that would be great!
| 09-22-2020 13:11:37 | 09-22-2020 13:11:37 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7316?src=pr&el=h1) Report
> Merging [#7316](https://codecov.io/gh/huggingface/transformers/pull/7316?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e46108817e13f5612cfe798570d38a44a9e65ba0?el=desc) will **decrease** coverage by `1.78%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/7316?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #7316 +/- ##
==========================================
- Coverage 81.46% 79.67% -1.79%
==========================================
Files 174 174
Lines 33670 33670
==========================================
- Hits 27428 26828 -600
- Misses 6242 6842 +600
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7316?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/7316/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `13.40% <0.00%> (-42.32%)` | :arrow_down: |
| [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7316/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `68.96% <0.00%> (-30.18%)` | :arrow_down: |
| [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/7316/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `66.97% <0.00%> (-24.78%)` | :arrow_down: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7316/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `60.81% <0.00%> (-22.62%)` | :arrow_down: |
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7316/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `20.53% <0.00%> (-21.21%)` | :arrow_down: |
| [src/transformers/trainer\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7316/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `40.00% <0.00%> (-18.89%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7316/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `81.17% <0.00%> (-15.39%)` | :arrow_down: |
| [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/7316/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `83.58% <0.00%> (-2.99%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7316/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `83.08% <0.00%> (+0.24%)` | :arrow_up: |
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7316/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.20% <0.00%> (+0.27%)` | :arrow_up: |
| ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/7316/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7316?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7316?src=pr&el=footer). Last update [e461088...5c2a962](https://codecov.io/gh/huggingface/transformers/pull/7316?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>It works! No issue at all. |
transformers | 7,315 | closed | Memory leak | ## Environment info
- `transformers` version: 3.1.0
- Platform: Linux-3.10.0-1127.el7.x86_64-x86_64-with-debian-buster-sid
- Python version: 3.7.0
- PyTorch version (GPU?): 1.5.1 (True)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: distributed
### Who can help
@LysandreJik, @sgugger, @patrickvonplaten
## Information
Model I am using (Bert, GPT2):
The problem arises when using:
* [ X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X ] my own task or dataset: (give details below)
## To reproduce
When I pretrain or fine-tune a model (in my case BERT and GPT2) using torch.distributed.launch, the CPU memory usage grows up to the memory limit (>500GB) until the first process is killed due to this issue. If I train bert-base, it takes around 30 epochs until the first process is killed, but when I train gpt-large, it needs just 3 epochs until it is killed. Following is the command line I run to train/fine-tune bert-base (similar for gpt2). The script run_language_modeling.py is a copy of transformers/examples/language-modeling/run_language_modeling.py (vers. 3.1.0):
```bash
python -m torch.distributed.launch --nproc_per_node=8 \
../run_language_modeling.py \
--output_dir $model_target \
--model_name_or_path $model_source \
--config_name $model_source \
--tokenizer_name $model_source \
--train_data_file $target_train \
--eval_data_file $target_test \
--save_total_limit 5 \
--block_size 128 \
--overwrite_output_dir \
--fp16 \
--num_train_epochs 50 \
--do_train --do_eval \
--per_device_train_batch_size 32 \
--per_device_eval_batch_size 4 \
--mlm
```
## Expected behavior
I would expect the distributed training to run until it is done, without any memory issue.
Thanks for checking it.
 | 09-22-2020 11:00:37 | 09-22-2020 11:00:37 | This looks to be a duplicate of #7169<|||||>But I think my problem is running out of CPU memory, not GPU memory<|||||>Ah my bad, I misread one letter ;-)
To fully understand your error, what's the dataset (particularly its size) you are training on?<|||||>The size of the dataset (Indonesian Wikipedia) is around 522MB.<|||||>Just additional info: running the script in a single process doesn't have this issue. In my case, the memory usage is stable and stays at 16GB after a few epochs.
But I want to run it on multiple GPUs; it is just too slow with only one :-)<|||||>#6999 <|||||>I tried the fix from #6999 manually (which is just a one-liner changing `return loss` to `return loss.detach()`), and it seems to solve my memory leak issue. The fix is actually available since version 3.2.0, but when I used version 3.2.0 with multi-GPU, the process just got stuck after 500 steps; maybe there is a deadlock among the processes? Maybe I will write another ticket regarding this issue. |
transformers | 7,314 | closed | Text generation with xlnet | How do I use XLNet for text generation?
| 09-22-2020 10:45:29 | 09-22-2020 10:45:29 | ```python
from transformers import pipeline
xlnet_generator = pipeline("text-generation", model="xlnet-base-cased", tokenizer="xlnet-base-cased")
print(xlnet_generator("Today is a nice day and"))
```
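Alternatively, without the pipeline wrapper (a sketch along the same lines):
```python
from transformers import XLNetLMHeadModel, XLNetTokenizer

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetLMHeadModel.from_pretrained("xlnet-base-cased")

input_ids = tokenizer.encode("Today is a nice day and", return_tensors="pt")
output_ids = model.generate(input_ids, max_length=50, do_sample=True)
print(tokenizer.decode(output_ids[0]))
```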
Also this should help: https://huggingface.co/transformers/task_summary.html#text-generation |
transformers | 7,313 | closed | Fixed results of SQuAD-FR evaluation | The score for the F1 metric was reported as the Exact Match and vice-versa.
<!-- This line specifies which issue to close after the pull request is merged. -->
Fixes #{issue number}
| 09-22-2020 10:35:45 | 09-22-2020 10:35:45 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7313?src=pr&el=h1) Report
> Merging [#7313](https://codecov.io/gh/huggingface/transformers/pull/7313?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e2964b8a190a8852e54ef07e03cc491cd570d0d1?el=desc) will **decrease** coverage by `1.16%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/7313?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #7313 +/- ##
==========================================
- Coverage 79.70% 78.54% -1.17%
==========================================
Files 174 174
Lines 33670 33670
==========================================
- Hits 26837 26445 -392
- Misses 6833 7225 +392
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7313?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7313/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |
| [src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7313/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.87% <0.00%> (-77.64%)` | :arrow_down: |
| [src/transformers/modeling\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7313/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | :arrow_down: |
| [src/transformers/modeling\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7313/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.21% <0.00%> (-10.25%)` | :arrow_down: |
| [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7313/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |
| [src/transformers/modeling\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7313/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `88.77% <0.00%> (-2.55%)` | :arrow_down: |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7313/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.27% <0.00%> (-0.17%)` | :arrow_down: |
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7313/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.20% <0.00%> (+0.27%)` | :arrow_up: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/7313/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `97.58% <0.00%> (+2.41%)` | :arrow_up: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7313/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.32% <0.00%> (+3.03%)` | :arrow_up: |
| ... and [6 more](https://codecov.io/gh/huggingface/transformers/pull/7313/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7313?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7313?src=pr&el=footer). Last update [e2964b8...5b6e2e6](https://codecov.io/gh/huggingface/transformers/pull/7313?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 7,312 | closed | Adds FSMT to LM head AutoModel | <!-- This line specifies which issue to close after the pull request is merged. -->
Fixes #{issue number}
| 09-22-2020 10:31:55 | 09-22-2020 10:31:55 | |
transformers | 7,311 | closed | Create an XLA parameter and fix the mixed precision | This PR adds a new `XLA` parameter to activate/deactivate XLA compilation and fixes a bug in the mixed-precision handling. Both have to be set before the strategy is created, and float16 is not supported on TPU, where only bfloat16 is available (see the configuration sketch after the coverage report below). | 09-22-2020 09:24:04 | 09-22-2020 09:24:04 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7311?src=pr&el=h1) Report
> Merging [#7311](https://codecov.io/gh/huggingface/transformers/pull/7311?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/656c27c3a3345d0d2cf31c16f780b573c3dea09a?el=desc) will **increase** coverage by `0.31%`.
> The diff coverage is `11.11%`.
[](https://codecov.io/gh/huggingface/transformers/pull/7311?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #7311 +/- ##
==========================================
+ Coverage 81.43% 81.75% +0.31%
==========================================
Files 174 174
Lines 33452 33458 +6
==========================================
+ Hits 27243 27353 +110
+ Misses 6209 6105 -104
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7311?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/trainer\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7311/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `12.12% <ø> (+0.10%)` | :arrow_up: |
| [src/transformers/training\_args\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7311/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzX3RmLnB5) | `42.64% <11.11%> (-4.82%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7311/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.59% <0.00%> (-23.38%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7311/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `76.00% <0.00%> (-21.10%)` | :arrow_down: |
| [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7311/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `87.67% <0.00%> (-10.96%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7311/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.87% <0.00%> (-7.18%)` | :arrow_down: |
| [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7311/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |
| [src/transformers/modeling\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7311/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `88.77% <0.00%> (-2.55%)` | :arrow_down: |
| [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7311/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/7311/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.85% <0.00%> (-1.43%)` | :arrow_down: |
| ... and [13 more](https://codecov.io/gh/huggingface/transformers/pull/7311/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7311?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7311?src=pr&el=footer). Last update [656c27c...f9c67b1](https://codecov.io/gh/huggingface/transformers/pull/7311?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
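As a concrete illustration of the ordering described in the #7311 description above, here is a minimal TensorFlow sketch: the XLA flag and the mixed-precision policy are configured before the distribution strategy is created, and bfloat16 is used on TPU. The plain TF calls below are an assumption for illustration; they are not the exact `TFTrainingArguments` fields added by the PR.
```python
# Sketch only (TF 2.x era of this thread): set XLA and the precision policy
# *before* building the tf.distribute strategy; TPUs only support bfloat16.
import tensorflow as tf

def build_strategy(use_xla: bool = False, fp16: bool = False, on_tpu: bool = False):
    tf.config.optimizer.set_jit(use_xla)  # enable/disable XLA compilation
    if fp16:
        policy = "mixed_bfloat16" if on_tpu else "mixed_float16"
        tf.keras.mixed_precision.experimental.set_policy(policy)
    # Only now create the strategy.
    if on_tpu:
        resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
        tf.config.experimental_connect_to_cluster(resolver)
        tf.tpu.experimental.initialize_tpu_system(resolver)
        return tf.distribute.TPUStrategy(resolver)
    return tf.distribute.MirroredStrategy()
```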
transformers | 7,310 | closed | [code quality] new make target that combines style and quality targets | **edit**: this post has been edited to reflect the outcome of the discussion.
Any reason why we don't run `flake8` in `make style`? I find myself needing to run `make style` followed by `make quality` all the time, but I need the latter just for the last two checks. Since we have no control over the source code, why bother with separating checking and fixing? Let's just have one target that fixes and then performs the remaining checks, since we know the first two have already been done.
This PR suggests creating a new target `fixup` that combines the two separate fix and check functions into one efficient target.
I will edit the docs if this change resonates with the team.
p.s. if it feels wrong to merge fixing and checking, can we add a 3rd target that is a merged one? `make best`
p.p.s. I know I can make my own alias, I love `make`! | 09-22-2020 05:11:35 | 09-22-2020 05:11:35 | I'm in favor. @sgugger @LysandreJik should this be a third target or should we just remove `make quality`?<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7310?src=pr&el=h1) Report
> Merging [#7310](https://codecov.io/gh/huggingface/transformers/pull/7310?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0804d077c634b2149b833ecc7897959cab8bf650?el=desc) will **decrease** coverage by `1.53%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/7310?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #7310 +/- ##
==========================================
- Coverage 78.14% 76.61% -1.54%
==========================================
Files 181 181
Lines 35759 35759
==========================================
- Hits 27945 27396 -549
- Misses 7814 8363 +549
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7310?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7310/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |
| [src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7310/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.71% <0.00%> (-77.89%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7310/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mdW5uZWwucHk=) | `18.53% <0.00%> (-75.51%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/7310/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `24.25% <0.00%> (-73.56%)` | :arrow_down: |
| [src/transformers/modeling\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7310/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sYXlvdXRsbS5weQ==) | `25.06% <0.00%> (-69.40%)` | :arrow_down: |
| [src/transformers/activations\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7310/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9uc190Zi5weQ==) | `54.16% <0.00%> (-20.84%)` | :arrow_down: |
| [src/transformers/configuration\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7310/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xheW91dGxtLnB5) | `80.00% <0.00%> (-20.00%)` | :arrow_down: |
| [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7310/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `83.58% <0.00%> (-8.96%)` | :arrow_down: |
| [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7310/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |
| [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7310/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.66% <0.00%> (-0.67%)` | :arrow_down: |
| ... and [16 more](https://codecov.io/gh/huggingface/transformers/pull/7310/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7310?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7310?src=pr&el=footer). Last update [0804d07...3c59813](https://codecov.io/gh/huggingface/transformers/pull/7310?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>I like having a command that does the same check as the CI without changing anything so I'd leave `make quality` (basically some docstring formatting is sometimes necessary to have beautiful docs but against what black wants so `make style` can be destructive, it's not jsut styling).
I have no objection with making `make style` do the checks too so that in most cases, we can just do `make style`.<|||||>OK, @sgugger - I changed it to `style` and re-used the same intermediary make target so it's one source to change.
BTW, the newly introduced `utils/check_copies.py` takes forever to run :( So perhaps I will still need some custom alias, as it is too slow for a quick check and push cycle.
It appears that the optimal target for quick check-n-push at the moment is:
```
black examples templates tests src utils
isort examples templates tests src utils
flake8 examples templates tests src utils
```
and then rely on CI to do the slow check.
<|||||>Oh? On my setup it's faster than flake8 so didn't try to optimize. But I can try to make some speedups to that script (basically one regex on the whole content to check whether the for loop is necessary and quickly dimiss files with no copies). It would still open all files if that's where the slowdown comes from though.<|||||>It's 2-3 times slower on my machine:
```
$ time python utils/check_copies.py
real 0m26.997s
user 0m24.928s
sys 0m2.052s
$ time flake8 examples templates tests src utils
real 0m11.735s
user 1m47.922s
sys 0m1.051s
```
flake is slow, and the new script is **very** slow<|||||>So, I'm not sure how welcome the change to `make style` will be if it's going to be 10 times slower.
Here an alt solution with a new 3rd target `fixup`:
```
quality_checks:
flake8 examples templates tests src utils
python utils/check_copies.py
python utils/check_repo.py
quality:
black --check examples templates tests src utils
isort --check-only examples templates tests src utils
${MAKE} quality_checks
# Format source code automatically and check is there are any problems left that need manual fixing
style:
black examples templates tests src utils
isort examples templates tests src utils
fixup: style quality_checks
```
I'm not attached to the name - just looking for something short and intuitive<|||||>I don't have a very strong opinion on either adding flake8 to style or having fixup, as long as we keep make quality as a script that does not make any change itself.<|||||>And I'm observing the same behaviour with the `utils/check_copies.py`. It takes a while now.<|||||>Will speed up the `utils/check_copies.py` today. The lag might be due to the fact we have more copies to check now.<|||||>OK, so I went with adding a new target `fixup` that performs automatic fixes and manual checks where automation is not possible. That will not take away the much quicker `make style` from those who don't make coding errors and want just the quick autoformatter.
Added documentation. |
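To recap the workflow that comes out of the discussion above as a quick usage sketch (target names as agreed in the thread; nothing else is assumed):
```bash
make style    # auto-format only (black + isort), fastest
make quality  # CI-equivalent read-only checks, modifies nothing
make fixup    # auto-format, then run the remaining checks that may need manual fixes
```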
transformers | 7,309 | closed | [code quality] fix confused flake8 | We run `black --target-version py35 ...` but flake8 doesn't know that we want this specific `target-version`, so currently with py38 flake8 fails suggesting that black should have reformatted 63 files. Indeed if I run:
```
black --line-length 119 --target-version py38 examples templates tests src utils
```
it indeed reformats 63 files.
The only solution I found is to create a black config file as explained at https://github.com/psf/black#configuration-format, which is what this PR adds.
Now flake8 knows that py35 is the standard and no longer gets confused regardless of the user's python version.
We can now edit out `--line-length 119 --target-version py35` from Makefile and the CI jobs, so that we have one config to rule them all. I pushed that change as well.
@sgugger, @LysandreJik
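For reference, a black config file of the kind linked above lives in `pyproject.toml` at the repository root. A minimal sketch is below; the two values mirror the flags quoted in this description, and the section and key names follow black's documented config format:
```toml
# Sketch of the shared black configuration (pyproject.toml at the repo root).
[tool.black]
line-length = 119
target-version = ['py35']
```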
| 09-22-2020 05:03:03 | 09-22-2020 05:03:03 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7309?src=pr&el=h1) Report
> Merging [#7309](https://codecov.io/gh/huggingface/transformers/pull/7309?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/656c27c3a3345d0d2cf31c16f780b573c3dea09a?el=desc) will **increase** coverage by `0.49%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/7309?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #7309 +/- ##
==========================================
+ Coverage 81.43% 81.93% +0.49%
==========================================
Files 174 174
Lines 33452 33452
==========================================
+ Hits 27243 27410 +167
+ Misses 6209 6042 -167
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7309?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7309/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.59% <0.00%> (-23.38%)` | :arrow_down: |
| [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/7309/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `70.19% <0.00%> (-23.08%)` | :arrow_down: |
| [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7309/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `72.31% <0.00%> (-14.52%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7309/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `72.25% <0.00%> (-10.00%)` | :arrow_down: |
| [src/transformers/configuration\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7309/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2dwdDIucHk=) | `91.89% <0.00%> (-5.41%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7309/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `88.37% <0.00%> (-4.87%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7309/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `84.28% <0.00%> (-2.50%)` | :arrow_down: |
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7309/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.92% <0.00%> (-0.28%)` | :arrow_down: |
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7309/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.44% <0.00%> (+0.16%)` | :arrow_up: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7309/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `83.08% <0.00%> (+0.24%)` | :arrow_up: |
| ... and [8 more](https://codecov.io/gh/huggingface/transformers/pull/7309/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7309?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7309?src=pr&el=footer). Last update [656c27c...e06bd68](https://codecov.io/gh/huggingface/transformers/pull/7309?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 7,308 | closed | [s2s] metrics.json is wrong on multigpu | The file is overwritten by whichever rank saves it last.
@nateraw is there a way to check if my module `is_rank_zero` or some such? | 09-22-2020 04:29:30 | 09-22-2020 04:29:30 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
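A rough sketch of the rank check asked about in #7308 above, using plain `torch.distributed` so it works regardless of the Lightning version (whether the Lightning module already exposes a convenience property for this is not assumed here):
```python
# Guard side effects (e.g. writing metrics.json) so only rank 0 performs them.
import torch.distributed as dist

def is_rank_zero() -> bool:
    # In a non-distributed run there is no process group, so default to True.
    if dist.is_available() and dist.is_initialized():
        return dist.get_rank() == 0
    return True

if is_rank_zero():
    pass  # e.g. write metrics.json here
```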
transformers | 7,307 | closed | Cuda OOM training gpt2-xl with Trainer in multi-GPUs | # ❓ Questions & Help
I am currently trying to fine-tune gpt2-xl. I have 2 Tesla T4 GPUs. However, I get the CUDA OOM error... when I look at the usage of the GPUs, I see that the first one is full, but the second one still has enough free memory. Here is my code:
```
from transformers import TextDataset,DataCollatorForLanguageModeling, AutoTokenizer
import torch
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
from transformers import GPT2LMHeadModel, Trainer, TrainingArguments
model = GPT2LMHeadModel.from_pretrained("gpt2-xl").to(device)
from transformers import TextDataset,DataCollatorForLanguageModeling, AutoTokenizer, TrainingArguments, Trainer
tokenizer = AutoTokenizer.from_pretrained("gpt2-xl")
train_dataset = TextDataset(
tokenizer=tokenizer,
file_path='dataset_training.txt',
block_size=128)
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer, mlm=False,
)
training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=2, # total # of training epochs
per_device_train_batch_size=1, # batch size per device during training
warmup_steps=500, # number of warmup steps for learning rate scheduler
weight_decay=0.01, # strength of weight decay
logging_dir='./logs',
)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=train_dataset,
prediction_loss_only=True,
)
trainer.train()
```
I get "CUDA out of memory. Tried to allocate 40.00 MiB (GPU 0; 14.73 GiB total capacity; 13.61 GiB already allocated; 31.88 MiB free; 13.98 GiB reserved in total by PyTorch)"
When I run nvidia-smi I see:
```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.87.01 Driver Version: 418.87.01 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla T4 Off | 00000000:00:04.0 Off | 0 |
| N/A 75C P0 34W / 70W | 15047MiB / 15079MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 1 Tesla T4 Off | 00000000:00:05.0 Off | 0 |
| N/A 56C P0 29W / 70W | 9479MiB / 15079MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 1756 C /opt/conda/bin/python 15037MiB |
| 1 1756 C /opt/conda/bin/python 9469MiB |
+-----------------------------------------------------------------------------+
```
**My question is:** Am I making a mistake, or how can a large model like this be trained with multiple GPUs?
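For context on the symptom above: with two visible GPUs and no distributed launcher, `Trainer` wraps the model in `torch.nn.DataParallel`, which keeps a full copy of the weights on each device, so the second GPU does not reduce per-GPU memory. A hedged sketch of the usual memory knobs follows; `fp16` and `gradient_accumulation_steps` are standard `TrainingArguments` fields, while the gradient-checkpointing flag is an assumption to verify against the installed transformers version.
```python
# Sketch only: reduce per-GPU memory pressure when fine-tuning gpt2-xl.
from transformers import GPT2LMHeadModel, TrainingArguments

model = GPT2LMHeadModel.from_pretrained("gpt2-xl")

# Gradient checkpointing trades compute for memory; the config attribute below
# is an assumption -- check that your transformers version supports it for GPT-2.
if hasattr(model.config, "gradient_checkpointing"):
    model.config.gradient_checkpointing = True

training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=2,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,  # keeps the effective batch size reasonable
    fp16=True,                      # half precision roughly halves activation memory
    logging_dir="./logs",
)
```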
| 09-22-2020 02:17:48 | 09-22-2020 02:17:48 | I want to make an update. I thought it might be that gpt2-xl specifically was impossible to fine-tune, so I tested it with gpt2-large, but I got the same result: "CUDA out of memory".<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>@fumpe Did you find a way around to this?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 7,306 | closed | BertModel for 2 category classification - How to evaluate the performance | Hello there, I am building a fine-tuned BERT model for classification (with a linear layer in the end). The prediction should just be 1/0 (Yes, No).
While writing the evaluation part, I saw that some people online apply F.log_softmax to the logits and then use np.argmax to get the predicted label. However, I also saw people apply np.argmax directly to the logits without any softmax activation. I am wondering which one I should follow and how to decide that.
Here's my model definition:
```python
class ReviewClassification(BertPreTrainedModel):
def __init__(self, config):
super().__init__(config)
self.num_labels = 2
self.bert = BertModel(config)
self.dropout = nn.Dropout(config.hidden_dropout_prob)
embedding_size = config.hidden_size
self.classifier = nn.Linear(embedding_size, len(LABEL_NAME))
self.init_weights()
def forward(
self,
review_input_ids=None,
review_attention_mask=None,
review_token_type_ids=None,
agent_input_ids=None,
agent_attention_mask=None,
agent_token_type_ids=None,
labels=None,
):
review_outputs = self.bert(
review_input_ids,
attention_mask=review_attention_mask,
token_type_ids=review_token_type_ids,
position_ids=None,
head_mask=None,
inputs_embeds=None,
)
feature = review_outputs[1] # (batch_size, seq_len) -? Should it be (batch_size, hidden_size)
# nn.CrossEntropyLoss applies F.log_softmax and nn.NLLLoss internally on your input,
# so you should pass the raw logits to it.
logits = self.classifier(feature)
outputs = (logits,) # + outputs[2:] # add hidden states and attention if they are here
if labels is not None:
loss_fct = nn.CrossEntropyLoss()
loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
outputs = (loss,) + outputs
return outputs # (loss, logits, hidden_states, attentions)
```
Then this is my validation code
```
def model_validate(model, data_loader):
# Put the model in evaluation mode--the dropout layers behave differently
# during evaluation.
model.eval()
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)
if torch.cuda.device_count() > 1:
model = nn.DataParallel(model)
label_prop = data_loader.dataset.dataset.label_prop()
total_valid_loss = 0
batch_size = data_loader.batch_size
num_batch = len(data_loader)
y_pred, y_true = [], []
# Evaluate data
for step, batch in tqdm(enumerate(data_loader), desc="Validation...", total=num_batch):
b_review_input_ids = batch["review_input_ids"].to(device)
b_review_attention_mask = batch["review_attention_mask"].to(device)
b_review_token_type_ids = batch["review_token_type_ids"].to(device)
b_binarized_label = batch["binarized_label"].to(device)
# Tell pytorch not to bother with constructing the compute graph during
# the forward pass, since this is only needed for backprop (training).
with torch.no_grad():
(loss, logits,) = model(review_input_ids=b_review_input_ids,
review_attention_mask=b_review_attention_mask,
review_token_type_ids=b_review_token_type_ids,
labels=b_binarized_label)
total_valid_loss += loss.item()
numpy_probas = logits.detach().cpu().numpy()
y_pred.extend(np.argmax(numpy_probas, axis=1).flatten())
y_true.extend(b_binarized_label.cpu().numpy())
# End of an epoch of validation
# put model to train mode again.
model.train()
ave_loss = total_valid_loss / (num_batch * batch_size)
# compute the various f1 score for each label
report = classification_report(y_true, y_pred, output_dict=True)
metrics_df = pd.DataFrame(report).transpose()
metrics_df = metrics_df.sort_index()
weighted_f1_score = metrics_df.loc['weighted avg', 'f1-score']
averaged_f1_score = metrics_df.loc['macro avg', 'f1-score']
return ave_loss, metrics_df, {
"weighted": weighted_f1_score,
"averaged": averaged_f1_score,
}
```
The other version I was trying is:
```
transfored_logits = F.log_softmax(logits,dim=1)
numpy_probas = transfored_logits.detach().cpu().numpy()
y_pred.extend(np.argmax(numpy_probas, axis=1).flatten())
y_true.extend(b_binarized_label.cpu().numpy())
```
The third version I was trying is:
```
transfored_logits = torch.sigmoid(logits)
numpy_probas = transfored_logits.detach().cpu().numpy()
y_pred.extend(np.argmax(numpy_probas, axis=1).flatten())
y_true.extend(b_binarized_label.cpu().numpy())
```
I also don't know how to interpret the result. From what I see online, people say that if I set dim=1 for log_softmax, the probabilities across all categories should sum to 1. However, here are some examples:
This is logits output: (for one batch - batch size = 16, num_labels = 2)
tensor([[ 1.1261, -1.8547],
[ 0.6066, -1.1498],
[ 1.3667, -2.0078],
[ 2.0652, -2.6669],
[ 1.0388, -1.7555],
[ 0.6801, -1.1652],
[ 0.8315, -1.3860],
[ 1.5685, -2.2362],
[ 0.1150, -0.3344],
[ 2.0751, -2.6166],
[ 1.5033, -2.1702],
[ 0.1115, -0.3096],
[ 0.8610, -1.4834],
[ 1.5544, -2.2773],
[ 2.1014, -2.6533],
[ 0.7789, -1.3748]], device='cuda:0')
If I apply softmax first, F.log_softmax(logits,dim=1), I get:
tensor([[-0.0495, -3.0302],
[-0.1593, -1.9157],
[-0.0337, -3.4082],
[-0.0088, -4.7409],
[-0.0594, -2.8537],
[-0.1467, -1.9920],
[-0.1033, -2.3209],
[-0.0220, -3.8267],
[-0.4935, -0.9429],
[-0.0091, -4.7008],
[-0.0251, -3.6985],
[-0.5046, -0.9257],
[-0.0916, -2.4360],
[-0.0214, -3.8531],
[-0.0086, -4.7632],
[-0.1098, -2.2635]], device='cuda:0')
The sum per row doesn't sum up to 1 and doesn't look like probability to me.
If I apply sigmoid, torch.sigmoid(logits)
tensor([[0.7551, 0.1353],
[0.6472, 0.2405],
[0.7969, 0.1184],
[0.8875, 0.0650],
[0.7386, 0.1474],
[0.6638, 0.2377],
[0.6967, 0.2000],
[0.8276, 0.0965],
[0.5287, 0.4172],
[0.8885, 0.0681],
[0.8181, 0.1025],
[0.5278, 0.4232],
[0.7029, 0.1849],
[0.8255, 0.0930],
[0.8910, 0.0658],
[0.6854, 0.2018]], device='cuda:0')
It does look more like probabilities, although the rows still don't sum up to 1.
No matter which version I use, the predicted result is always the same in this case (since my 1 (Yes) Label has a really low incidence rate)
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
| 09-22-2020 01:44:58 | 09-22-2020 01:44:58 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,305 | closed | Fix #7304 | <!-- This line specifies which issue to close after the pull request is merged. -->
Fixes #7304
Correct order is tensors, name
@LysandreJik you need to teach me how to check the CI on TPU so we catch those rookie mistakes before merging :-) | 09-22-2020 00:31:44 | 09-22-2020 00:31:44 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7305?src=pr&el=h1) Report
> Merging [#7305](https://codecov.io/gh/huggingface/transformers/pull/7305?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/656c27c3a3345d0d2cf31c16f780b573c3dea09a?el=desc) will **decrease** coverage by `3.11%`.
> The diff coverage is `0.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/7305?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #7305 +/- ##
==========================================
- Coverage 81.43% 78.32% -3.12%
==========================================
Files 174 174
Lines 33452 33452
==========================================
- Hits 27243 26200 -1043
- Misses 6209 7252 +1043
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7305?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/7305/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `55.72% <0.00%> (ø)` | |
| [src/transformers/configuration\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7305/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |
| [src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7305/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.87% <0.00%> (-77.64%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7305/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mdW5uZWwucHk=) | `18.53% <0.00%> (-75.51%)` | :arrow_down: |
| [src/transformers/activations\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7305/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9uc190Zi5weQ==) | `54.16% <0.00%> (-20.84%)` | :arrow_down: |
| [src/transformers/modeling\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7305/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | :arrow_down: |
| [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7305/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7305/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.21% <0.00%> (-1.26%)` | :arrow_down: |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7305/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.27% <0.00%> (-0.17%)` | :arrow_down: |
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7305/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.44% <0.00%> (+0.16%)` | :arrow_up: |
| ... and [8 more](https://codecov.io/gh/huggingface/transformers/pull/7305/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7305?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7305?src=pr&el=footer). Last update [656c27c...0aedf4d](https://codecov.io/gh/huggingface/transformers/pull/7305?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 7,304 | closed | Wrong arg order for `nested_xla_mesh_reduce` in trainer.py | ## Environment info
Python 3.7 on Google Cloud TPUs
### Who can help
@sgugger
## Information
When training examples from [run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_glue.py) using Cloud TPUs, we run into this error:
```
TypeError: _xla_rendezvous(): incompatible function arguments. The following argument types are supported:
1. (arg0: int, arg1: str, arg2: str, arg3: List[int]) -> List[bytes]
```
This issue is just due to the wrong arg order in [this line](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L1337) where the [args should be switched](https://github.com/huggingface/transformers/blob/656c27c3a3345d0d2cf31c16f780b573c3dea09a/src/transformers/trainer_utils.py#L162).
This was introduced in: https://github.com/huggingface/transformers/commit/492bb6aa486856f8243dfeb533ed1b23e996e403
## To reproduce
Steps to reproduce the behavior:
1. Running the provided example on Cloud TPU.
## Expected behavior
Should not fail with `TypeError`. | 09-21-2020 22:37:01 | 09-21-2020 22:37:01 | Indeed, I switched the args, sorry about that. Will make a PR to fix this tomorrow morning. |
transformers | 7,303 | closed | BART metrics.json and validation checkpoint metrics seem to disagree | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.1.0
- Platform: Linux 4.14
- Python version:3.7.9
- PyTorch version (GPU?):1.6.0 (True)
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: True
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
examples/seq2seq: @sshleifer
## Information
Model I am using (BART):
The problem arises when using:
* [*] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [*] my own task or dataset: (give details below)
I am using a small subset of https://github.com/pubmedqa/pubmedqa to generate questions from the given passage.
## To reproduce
Steps to reproduce the behavior:
1. Do distributed training
2. Run the finetune script with appropriate arguments.
The saved checkpoint says:
val_avg_rouge2=13.0975-step_count=4.ckpt
However the metrics.json file says that val_avg_rouge2=6.46015 and there are better step_counts in comparison.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
The best metric in metrics.json should be chosen for saving checkpoints.
| 09-21-2020 22:31:11 | 09-21-2020 22:31:11 | I can replicate in pl 0.8.5 and pl 0.9.0, great catch.
Smaller command to replicate:
```
export MAX_LEN=128
export m=export m=sshleifer/student_marian_en_ro_6_3
python finetune.py \
--learning_rate=3e-4 \
--do_train \
--do_predict \
--fp16 \
--val_check_interval 0.25 \
--data_dir $ENRO_DIR \
--max_source_length $MAX_LEN --max_target_length $MAX_LEN --val_max_target_length $MAX_LEN --test_max_target_length $MAX_LEN \
--freeze_encoder --freeze_embeds \
--train_batch_size=64 --eval_batch_size=64 \
--tokenizer_name $m --model_name_or_path $m \
--warmup_steps 500 --sortish_sampler --logger_name wandb \
--fp16_opt_level=O1 --task translation --num_sanity_val_steps=0 \
--model_name_or_path $m --gpus 8 --num_train_epochs=1 \
--data_dir wmt_en_ro --output_dir dmar_pl_only_v2 --save_top_k=10
```
You will only have 4-5 entries in metrics, but 10 checkpoints.
<|||||>Every single rank is saving checkpoints.<|||||>@sshleifer Wow, so is that like a race condition where the last RANK "wins"? What about the weights of the model? Would they be the same across all the ranks, or would they be a problem as well?<|||||>
Posted here https://github.com/PyTorchLightning/pytorch-lightning/issues/3597
Weights will be the same across all ranks. You just have suboptimal checkpoint saving logic. You can kind of workaround by passing --save_top_k=5 and then manually picking which one you like by looking at metrics.json.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
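As a rough illustration of the manual-picking workaround suggested in the last comment of #7303 above: the snippet below assumes metrics.json holds a "val" list of dicts carrying "val_avg_rouge2" and "step_count" (consistent with the values quoted in the issue, but still an assumption, so adjust the keys to whatever the file actually contains).
```python
# Sketch: pick the validation step with the best ROUGE-2 from metrics.json,
# then keep the matching *.ckpt among the --save_top_k checkpoints.
import json

with open("metrics.json") as f:
    metrics = json.load(f)

val_entries = metrics.get("val", [])  # assumed: one dict per validation run
best = max(val_entries, key=lambda entry: entry["val_avg_rouge2"])
print(f"best step_count={best.get('step_count')} val_avg_rouge2={best['val_avg_rouge2']}")
```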