repo | number | state | title | body | created_at | closed_at | comments |
---|---|---|---|---|---|---|---|
transformers | 6,000 | closed | Bert german dbmdz uncased sentence stsb | Model Card for bert-german-dbmdz-uncased-sentence-stsb | 07-23-2020 19:40:20 | 07-23-2020 19:40:20 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6000?src=pr&el=h1) Report
> Merging [#6000](https://codecov.io/gh/huggingface/transformers/pull/6000?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6e161955105f7e012dba5d51842923fc25fc5cdf&el=desc) will **increase** coverage by `1.18%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6000?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6000 +/- ##
==========================================
+ Coverage 77.32% 78.51% +1.18%
==========================================
Files 146 146
Lines 26242 26242
==========================================
+ Hits 20291 20603 +312
+ Misses 5951 5639 -312
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6000?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6000/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6000/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.19% <0.00%> (-0.30%)` | :arrow_down: |
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6000/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.82% <0.00%> (-0.29%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6000/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (ø)` | |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6000/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6000?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6000?src=pr&el=footer). Last update [6e16195...e043625](https://codecov.io/gh/huggingface/transformers/pull/6000?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Great work!
@stefan-it maybe we need to implement a "Fine-tune" button similar to GitHub's "Fork" button at some point 😉
@PhilipMay What's your feedback on Optuna?<|||||>> @PhilipMay What's your feedback on Optuna?
@julien-c Optuna is awesome. I love it. Very good documentation, clean code, nice integrations, and I like the pruning integration. |
transformers | 5,999 | closed | Fix #5974 | Small bug introduced by the model outputs PR. | 07-23-2020 16:04:55 | 07-23-2020 16:04:55 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5999?src=pr&el=h1) Report
> Merging [#5999](https://codecov.io/gh/huggingface/transformers/pull/5999?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/76f52324b1e2d2bb631c80895a5f16ddc303a099&el=desc) will **increase** coverage by `0.00%`.
> The diff coverage is `0.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5999?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5999 +/- ##
=======================================
Coverage 78.66% 78.66%
=======================================
Files 146 146
Lines 26230 26229 -1
=======================================
Hits 20633 20633
+ Misses 5597 5596 -1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5999?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5999/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `79.12% <0.00%> (-0.04%)` | :arrow_down: |
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5999/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.11% <0.00%> (+0.28%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5999?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5999?src=pr&el=footer). Last update [76f5232...c6a5a2a](https://codecov.io/gh/huggingface/transformers/pull/5999?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,998 | closed | MbartTokenizer: do not hardcode vocab size | language codes should start at the end of the standard vocab. If the standard vocab is smaller, this number is not 250,001. | 07-23-2020 15:51:08 | 07-23-2020 15:51:08 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5998?src=pr&el=h1) Report
> Merging [#5998](https://codecov.io/gh/huggingface/transformers/pull/5998?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/33d7506ea10ca92886fd1bb3b5306a1a720c58fe&el=desc) will **increase** coverage by `0.00%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5998?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5998 +/- ##
=======================================
Coverage 78.48% 78.49%
=======================================
Files 146 146
Lines 26230 26232 +2
=======================================
+ Hits 20587 20591 +4
+ Misses 5643 5641 -2
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5998?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5998/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.58% <100.00%> (+0.13%)` | :arrow_up: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5998/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+0.50%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5998?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5998?src=pr&el=footer). Last update [33d7506...39fdead](https://codecov.io/gh/huggingface/transformers/pull/5998?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,997 | closed | bug in trainer.py line297 | # 🐛 Bug
## Information
I found this bug while running example/token_classification/run_ner.py.
I fixed it by deleting the 'self.' prefix.
## Reproduce
just execute `run_ner.py`
| 07-23-2020 15:42:34 | 07-23-2020 15:42:34 | Will be fixed by #5982<|||||>Fixed by #5982 |
transformers | 5,996 | closed | T5 pre training on different languages from scratch | Hi Team,
I was exploring the pre-training procedure/documentation for T5 models (small, base, large) on different languages from scratch, but I did not come across anything that could help me. Please share if there is any resource for this; if not, could you please consider adding one?
Thank you. | 07-23-2020 13:00:19 | 07-23-2020 13:00:19 | Hi @ashispapu, T5 pre-training is not yet available in transformers, I'm working on it but might take some time. Feel free to take a stab.<|||||>Hi @patil-suraj Thanks for the response. Let me try it.<|||||>@patil-suraj, thanks for working on the different T5 ropes. Can you point to the branch you're working on for pre-training? I'd be interested to contribute.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>I'm interested in T5 pre-training from scratch. Any update on this project? |
transformers | 5,995 | closed | [WIP] Trainer supports all Datasets as train_dataset, with/without __len__ #5990 | A first PR.
Passed:
- `make test`
- `make style`
- `make quality`
Modifies:
- trainer.py: fixes issue
- test_trainer.py: calls `Trainer.train()`
It fixes only the case where the TRAINING dataset does not have the `__len__` method.
The distinction is not between `Dataset` and `IterableDataset`, but between objects that are instances of a class where `__len__` is implemented and those where it is not. This is pointed out in the [pytorch source](https://pytorch.org/docs/stable/_modules/torch/utils/data/dataset.html#Dataset): the implementation of `__len__` is up to the user.
The test is therefore: `isinstance(dataset, collections.Sized)`
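For illustration, the branching this test enables might look roughly like the following (a hypothetical sketch, not the actual `Trainer` code; the helper name and arguments are made up):

```python
import collections.abc  # collections.Sized is a deprecated alias of collections.abc.Sized

from torch.utils.data import DataLoader


def num_update_steps(train_dataloader: DataLoader, max_steps: int, num_train_epochs: int,
                     gradient_accumulation_steps: int) -> int:
    # Sized datasets expose __len__, so the number of steps can be derived from epochs.
    if isinstance(train_dataloader.dataset, collections.abc.Sized):
        steps_per_epoch = len(train_dataloader) // gradient_accumulation_steps
        return max_steps if max_steps > 0 else steps_per_epoch * num_train_epochs
    # Without __len__ (e.g. an IterableDataset), only an explicit step budget makes sense.
    if max_steps <= 0:
        raise ValueError("Datasets without __len__ require max_steps > 0.")
    return max_steps
```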
NB: fixing this for the EVAL and TEST datasets will require more code refactoring: both get funneled to `Trainer._prediction_loop()` without keeping track of whether it is EVAL or TEST, which makes it impossible to rely on `TrainingArguments.eval_steps` (not to mention that there is no `test_steps` field in `TrainingArguments`).
Not all datasets implement the `__len__` method, therefore the trainer should not assume it is available.
The use case is:
- Dataset with `__len__`: use `num_train_epochs` or `max_steps` to specify how long training should run
- Dataset without `__len__`: use only `max_steps`
The limitation is still valid for the EVAL / TEST dataset, which still has to implement `__len__` | 07-23-2020 12:34:15 | 07-23-2020 12:34:15 | > Note that there is some moving around of the code in Trainer coming in #5982 (I'll probably merge it today) so you may need to adapt a bit the code.
I'll wait for the merge of #5982 and introduce the fix for #5990 after.
> Note that there is no test_steps field since the evaluation is supposed to be complete on the test set (users can always pass along a shorter dataset if they want).
I see.
At the moment, the functionality is that it will refuse a dataset that does not implement `__len__`, whether it is the EVAL dataset or the TEST dataset.
<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5995?src=pr&el=h1) Report
> Merging [#5995](https://codecov.io/gh/huggingface/transformers/pull/5995?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c69ea5efc4eac65b183e8d07b1bf91d20bbe0c8c&el=desc) will **increase** coverage by `0.40%`.
> The diff coverage is `78.57%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5995?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5995 +/- ##
==========================================
+ Coverage 78.50% 78.90% +0.40%
==========================================
Files 146 146
Lines 26249 26264 +15
==========================================
+ Hits 20606 20723 +117
+ Misses 5643 5541 -102
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5995?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5995/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `60.63% <78.57%> (+19.75%)` | :arrow_up: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5995/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+0.75%)` | :arrow_up: |
| [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5995/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `92.23% <0.00%> (+0.91%)` | :arrow_up: |
| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/5995/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `97.36% <0.00%> (+1.31%)` | :arrow_up: |
| [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5995/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `77.90% <0.00%> (+3.48%)` | :arrow_up: |
| [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5995/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `86.73% <0.00%> (+6.12%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5995?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5995?src=pr&el=footer). Last update [c69ea5e...4066671](https://codecov.io/gh/huggingface/transformers/pull/5995?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Hi,
- Merged to include all changes from #5982
- Now accepts `IterableDataset` for any dataset (TRAIN, EVAL, TEST)
- Displays information (how many steps, etc...)
- `test_trainer.py` includes an end-to-end test of `train()` and `predict()`
It fixes the issue #5990 with my code.
Checklist:
- `make test` is positive (all passed, no failed)
- `make style` and `make quality`
Looking forward to review.
I'm still unsure about how to test what the dataset is:
- the type hinting says `train_dataset: Dataset`
- `pytorch` indicates it is good practice to implement `__len__` on Map-Style dataset, but in the code there is no way this could be enforced
- `pytorch` relies only on the logic: user will inherit `Dataset` and implement `__len__` or user will inherit `IterableDataset` and not implement `__len__`. In `Dataloader` every time there is a doubt it is checking if the object is an instance of `Dataset` or `IterableDataset`
- in my code, I followed my first rule: either the object has `__len__` or it does not.
Still wondering if I should code it `pytorch`-style and trust the given answer blindly, or keep it the way it is done, which is a bit more paranoid ?
Any comment appreciated.<|||||>- the test is whether a `Dataset` object has `__len__` or not
- iterable dataset is OK for training, __only__ if `max_steps` has a strictly positive value
- iterable dataset is not acceptable for evaluation or prediction
The confusion about `eval_steps` has been purged.
The test `test_trainer_iterable_dataset` in `test_trainer.py` has been extended to check for corner cases and associated exceptions. Only the exception type is checked, not exception message.
Checklist:
- `make test` passed (no fail)
- `make style`
- `make quality`
Looking forward to review.<|||||>Nitpicking is fine in my book.
- removed redundant test on dataset (eval / test)
Checklist:
- `make test` passed (no fail)
- `make style`
- `make quality`
Up for review again.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Hi, is this PR still being reviewed? I would like to use `Trainer` with an `IterableDataset` and this looks like exactly what's needed to make that happen. If you have time, I would greatly appreciate this PR to get into the next version :) thank you!<|||||>I realize a part has been merged, but not everything.<|||||>And I can't seem to find a way to re-open this PR. So I guess, I should open a new one, and link to this one...<|||||>@carson-sestili The PR #7858 has been merged to master and fixes the bug.
You can already use it by installing from source.
<|||||>@j-rossi-nl Thank you very much! |
transformers | 5,994 | closed | [examples (seq2seq)] fix preparing decoder_input_ids for T5 | possible fix for #5987
`</s>` should be added at the end of target text
@sshleifer | 07-23-2020 10:34:54 | 07-23-2020 10:34:54 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5994?src=pr&el=h1) Report
> Merging [#5994](https://codecov.io/gh/huggingface/transformers/pull/5994?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/33d7506ea10ca92886fd1bb3b5306a1a720c58fe&el=desc) will **decrease** coverage by `0.23%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5994?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5994 +/- ##
==========================================
- Coverage 78.48% 78.25% -0.24%
==========================================
Files 146 146
Lines 26230 26230
==========================================
- Hits 20587 20525 -62
- Misses 5643 5705 +62
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5994?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5994/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.27% <0.00%> (-74.92%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5994/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5994/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.71% <0.00%> (-1.51%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5994/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.02% <0.00%> (-1.29%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5994/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.49% <0.00%> (+0.29%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5994/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5994/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.53% <0.00%> (+69.51%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5994?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5994?src=pr&el=footer). Last update [33d7506...fbf4b03](https://codecov.io/gh/huggingface/transformers/pull/5994?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>I will let you know ASAP!<|||||>As far as I know, @mrm8488 hasn't used finetune.py for T5. He has linked his training colabs in model cards and they just pass `labels` directly and `decoder_input_ids` are created by the `T5ForConditionalGeneration`<|||||> I usually follow this one https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb<|||||>Also I think #5866 should be merged before this, because if `</s>` is not added then it might miss the last token <|||||>> can we unittest this behavior somehow?
if we move this logic to the dataset or collate function then we can unit test `__getitem__` or `collate_fn`. Would this be a good idea?<|||||>Yes, great idea. Whichever seems more natural.<|||||>Hi @sshleifer, does this PR need any other changes before merging?<|||||>sorry for being slow! |
transformers | 5,993 | closed | Store Predictions on CPU in Every Prediction Iteration (Trainer) | # 🚀 Feature request
Store Predictions on CPU in Every Prediction Iteration (Trainer)
## Motivation
Currently, in [`Trainer._prediction_loop`](https://github.com/huggingface/transformers/blob/33d7506ea10ca92886fd1bb3b5306a1a720c58fe/src/transformers/trainer.py#L785), the predictions (logits) of the model are stored on GPU/TPU in each iteration. After all iterations are finished, they are concatenated together and sent to CPU. In this way, GPU/TPU memory usage increases linearly during prediction. If the test set is very large, there may be insufficient GPU/TPU memory to finish the prediction phase. To save GPU/TPU memory and allow larger-scale inference, the trainer should instead send each batch of predictions to CPU after every iteration.
However, if we do inference on multiple devices, we would need a `distributed_concat` function to aggregate all predictions from all devices. If the predictions are already stored on CPU, we can no longer aggregate them using NCCL. Therefore, this issue is unsolvable unless we introduce another CPU-based distributed communication mechanism. Still, we can at least solve it in a single-GPU scenario.
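For illustration, the single-device version of the idea might look roughly like this (a hypothetical sketch, not the actual `_prediction_loop` code):

```python
import torch


@torch.no_grad()
def predict(model, dataloader, device):
    # Assumes each batch contains only model inputs (no labels).
    model.eval()
    all_logits = []
    for batch in dataloader:
        batch = {k: v.to(device) for k, v in batch.items()}
        logits = model(**batch)[0]
        # Move each batch of predictions to CPU right away so GPU/TPU memory
        # stays roughly constant instead of growing with the number of batches.
        all_logits.append(logits.detach().cpu())
    return torch.cat(all_logits, dim=0)
```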
## Your contribution
If approved, I can write code to solve this problem when inference is not running in distributed mode. | 07-23-2020 09:35:56 | 07-23-2020 09:35:56 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,992 | closed | ONNX documentation | - Rename the current **torchscript.rst** to **serialization.rst**
- Move torchscript documentation into a subsection of the above **serialization.rst**
- Introduce documentation for ONNX/ONNXRuntime in the above section
Signed-off-by: Morgan Funtowicz <[email protected]> | 07-23-2020 09:22:55 | 07-23-2020 09:22:55 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5992?src=pr&el=h1) Report
> Merging [#5992](https://codecov.io/gh/huggingface/transformers/pull/5992?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d32279438a73e71961f53baa4fb47d0f08c2984d&el=desc) will **increase** coverage by `0.25%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5992?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5992 +/- ##
==========================================
+ Coverage 78.25% 78.51% +0.25%
==========================================
Files 146 146
Lines 26214 26214
==========================================
+ Hits 20515 20581 +66
+ Misses 5699 5633 -66
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5992?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5992/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5992/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5992/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+0.25%)` | :arrow_up: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5992/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.60% <0.00%> (+1.16%)` | :arrow_up: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5992/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.31% <0.00%> (+1.28%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5992/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5992/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5992?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5992?src=pr&el=footer). Last update [d322794...59c00a2](https://codecov.io/gh/huggingface/transformers/pull/5992?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Merging as the failures are not related to the changeset introduced in this PR |
transformers | 5,991 | closed | T5 Tensorflow: _shift_right returns wrong result | # 🐛 Bug
## Information
The [_shift_right](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_tf_t5.py#L777) method of the `TFT5PreTrainedModel` returns all zeros instead of shifting the `input_ids`.
## To reproduce
```
import tensorflow as tf
def shape_list(x):
"""Deal with dynamic shape in tensorflow cleanly."""
static = x.shape.as_list()
dynamic = tf.shape(x)
return [dynamic[i] if s is None else s for i, s in enumerate(static)]
def _shift_right(input_ids):
decoder_start_token_id = 0
pad_token_id = 0
assert (
decoder_start_token_id is not None
), "self.model.config.decoder_start_token_id has to be defined. In TF T5 it is usually set to the pad_token_id. See T5 docs for more information"
# shift inputs to the right
shifted_input_ids = tf.zeros_like(input_ids, dtype=tf.int32)
shifted_input_ids = tf.roll(shifted_input_ids, 1, axis=-1)
start_tokens = tf.fill((shape_list(shifted_input_ids)[0], 1), decoder_start_token_id)
shifted_input_ids = tf.concat([start_tokens, shifted_input_ids[:, 1:]], -1)
assert pad_token_id is not None, "self.model.config.pad_token_id has to be defined."
# replace possible -100 values in labels by `pad_token_id`
shifted_input_ids = tf.where(
shifted_input_ids == -100, tf.fill(shape_list(shifted_input_ids), pad_token_id), shifted_input_ids
)
assert tf.math.reduce_any(
shifted_input_ids >= 0
).numpy(), "Verify that `labels` has only positive values and -100"
return shifted_input_ids
input_ids = tf.convert_to_tensor([[32000, 1, 2, 3, 0, 0, 0]])
print(_shift_right(input_ids))
```
## Expected behavior
Should shift the tensor to the right.
## Suggested solution
Replace line `shifted_input_ids = tf.zeros_like(input_ids, dtype=tf.int32)` with `shifted_input_ids = tf.cast(input_ids, tf.int32)`.
Further I'd recommend removing the assertion for positive label values, as it depends on the `numpy()` method, which is not available in some cases (e.g. when using datasets loaded from tfrecord files) and will throw an error then.
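For reference, a corrected version of the shifting logic could look roughly like this (a sketch based on the suggestion above, not the exact library code):

```python
import tensorflow as tf


def _shift_right_fixed(input_ids, decoder_start_token_id=0, pad_token_id=0):
    # Start from the actual token values; the bug came from starting with tf.zeros_like.
    shifted_input_ids = tf.cast(input_ids, tf.int32)
    start_tokens = tf.fill((tf.shape(shifted_input_ids)[0], 1), decoder_start_token_id)
    # Prepend the decoder start token and drop the last position.
    shifted_input_ids = tf.concat([start_tokens, shifted_input_ids[:, :-1]], axis=-1)
    # Replace possible -100 values in labels by the pad token id.
    shifted_input_ids = tf.where(
        shifted_input_ids == -100,
        tf.fill(tf.shape(shifted_input_ids), pad_token_id),
        shifted_input_ids,
    )
    return shifted_input_ids
```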
Shall I open a PR directly?
## Environment info
- `transformers` version: Master (file commit `4dc6559`)
- Tensorflow version: 2.1.0
| 07-23-2020 08:45:53 | 07-23-2020 08:45:53 | Hi, thanks for raising this issue! Indeed, this looks like an issue. Do you want to open a PR with the fix you propose?<|||||>Hey @maurice-g,
Thanks a lot for the fix! |
transformers | 5,990 | closed | Trainer: exception raised when calling len() on IterableDataset | # 🐛 Bug
## Information
While pre-training a Longformer model from scratch, the text is delivered through an `IterableDataset` object. The code which is called by `Trainer.train()` still calls `len()` on this object, which raises an exception.
#5829 addressed the proper creation of the Dataloader.
The problem arises when using:
* [x] my own modified scripts: see code
The tasks I am working on is:
* [x] my own task or dataset: pre-train a LM from scratch
## To reproduce
Here is my entire code, but it can be reproduced with any `PreTrainedModel` by using an `IterableDataset`.
```python
import logging
import random
from dataclasses import dataclass, field
from transformers import LongformerConfig, LongformerForMaskedLM, LongformerTokenizerFast
from transformers import Trainer, TrainingArguments
from transformers import TextDataset, DataCollatorForLanguageModeling
from transformers import HfArgumentParser
from sklearn.model_selection import train_test_split
from pathlib import Path
from utils_pretrain import MultiTextDataset
logger = logging.getLogger(__name__)
@dataclass
class ModelArguments:
"""
Arguments pertaining to which model/config/tokenizer we are going to fine-tune from.
"""
max_seq_len: int = field(
metadata={"help": "Input Sequence Length"}
)
num_hidden_layers: int = field(
metadata={'help': 'Number of transformer layers in Longformer'}
)
tok_dir: str = field(
metadata={
'help': 'Folder with tokenizer files'
}
)
txt_dir: str = field(
metadata={"help": "Folder with txt files for tokenizer training"}
)
filter_files: str = field(
default='[a-c]*.txt',
metadata={"help": "regex to select specific files"}
)
test_size: float = field(
default=0.05,
metadata={'help': 'proportion of the data that will be used for evaluation'}
)
def main():
parser = HfArgumentParser((ModelArguments, TrainingArguments))
model_args, train_args = parser.parse_args_into_dataclasses()
model_args: ModelArguments
# Setup logging
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
level=logging.WARN,
)
logger.warning(
"Process rank: %s, device: %s, n_gpu: %s, distributed training: %s, 16-bits training: %s",
train_args.local_rank,
train_args.device,
train_args.n_gpu,
bool(train_args.local_rank != -1),
train_args.fp16,
)
logger.info("Training/evaluation parameters %s", train_args)
MODEL_NAME = 'allenai/longformer-base-4096'
tokenizer: LongformerTokenizerFast = LongformerTokenizerFast.from_pretrained(model_args.tok_dir)
# Customize an existing config rather than create from scratch
config: LongformerConfig = LongformerConfig.from_pretrained(MODEL_NAME)
config.max_position_embeddings = model_args.max_seq_len + 2
config.num_hidden_layers = model_args.num_hidden_layers
config.attention_window = [512] * model_args.num_hidden_layers
config.vocab_size = tokenizer.vocab_size
model = LongformerForMaskedLM(config)
data_files = list(Path(model_args.txt_dir).glob(model_args.filter_files))
shuffled_files = random.sample(data_files, len(data_files))
train_files, val_files = train_test_split(shuffled_files, test_size=model_args.test_size)
train_ds, val_ds = list(
map(
lambda x: MultiTextDataset(
files=x,
tokenizer=tokenizer,
block_size=model_args.max_seq_len
),
[train_files, val_files]
)
)
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer,
mlm=True,
mlm_probability=0.15
)
train_args: TrainingArguments
train_args.do_train = True
train_args.evaluate_during_training = True
trainer = Trainer(
model=model,
args=train_args,
data_collator=data_collator,
train_dataset=train_ds,
eval_dataset=val_ds,
)
trainer.train(train_args.output_dir)
```
The class `MultiTextDataset` inherits from `IterableDataset`. It has no `__len__` method, and knowing the length would require parsing the whole dataset at once.
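For illustration, a minimal dataset of this kind might look roughly like the following (a hypothetical sketch, not the actual `MultiTextDataset` implementation):

```python
import torch
from torch.utils.data import IterableDataset


class StreamingTextDataset(IterableDataset):
    """Lazily yields tokenized blocks from text files; deliberately defines no __len__."""

    def __init__(self, files, tokenizer, block_size):
        self.files, self.tokenizer, self.block_size = files, tokenizer, block_size

    def __iter__(self):
        for path in self.files:
            with open(path, encoding="utf-8") as f:
                for line in f:
                    ids = self.tokenizer(line, truncation=True, max_length=self.block_size)["input_ids"]
                    if ids:
                        yield torch.tensor(ids, dtype=torch.long)
```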
Here is the exception and stack trace:
```
Traceback (most recent call last):
File "longformer_pretrain.py", line 131, in <module>
main()
File "longformer_pretrain.py", line 122, in main
trainer.train(train_args.output_dir)
File "/home/jrossi/anaconda3/envs/COLIEE/lib/python3.7/site-packages/transformers/trainer.py", line 392, in train
self.args.max_steps // (len(train_dataloader) // self.args.gradient_accumulation_steps) + 1
File "/home/jrossi/anaconda3/envs/COLIEE/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 313, in __len__
length = self._IterableDataset_len_called = len(self.dataset)
TypeError: object of type 'MultiTextDataset' has no len()
```
## Expected behavior
The call to `Trainer.train()` starts the training. A case has to be made in the code to accommodate the usage of `IterableDataset`, which means not assuming that `len()` can be called on the dataset at any point.
- If a number of epochs is given, one epoch corresponds to consuming the iterable dataset until StopIteration
- If a number of steps is given, training stops after performing MAX_STEPS or catching a StopIteration, whichever comes first
- During training, the progress bar should be either a % of epochs performed, or a % of steps performed
- (optional) If a number of epochs is given, register how many steps it took to consume the iterator so a better progress bar can be shown for the next epochs (each epoch will consume the same iterator once)
With regard to the [Pytorch documentation](https://pytorch.org/docs/stable/data.html#), there is no certainty that the `__len__` method will be implemented, even on `Dataset` objects.
The distinction should be made between objects that implement `__len__` and those that do not implement it.
The current code __assumes__ that the `Dataset` objects given when creating a `Trainer` implement `len()`, but there is no guarantee of this.
```python
import collections

# `bar` stands for whichever dataset object was passed to the Trainer.
if isinstance(bar, collections.Sized):
    ...  # only here is it safe to call len(bar)
```
## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-5.7.8-1.el7.elrepo.x86_64-x86_64-with-centos-7.8.2003-Core
- Python version: 3.7.7
- PyTorch version (GPU?): 1.5.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: YES
- Using distributed or parallel set-up in script?: NO (for the moment)
## Fix
I can contribute. I will suggest a PR to fix this. | 07-23-2020 08:07:07 | 07-23-2020 08:07:07 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,989 | closed | Create README.md | Hello, we are making this pull request to add a model card for our Italian political sentiment model. Thanks!
| 07-23-2020 08:07:04 | 07-23-2020 08:07:04 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5989?src=pr&el=h1) Report
> Merging [#5989](https://codecov.io/gh/huggingface/transformers/pull/5989?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/33d7506ea10ca92886fd1bb3b5306a1a720c58fe&el=desc) will **increase** coverage by `0.17%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5989?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5989 +/- ##
==========================================
+ Coverage 78.48% 78.66% +0.17%
==========================================
Files 146 146
Lines 26230 26230
==========================================
+ Hits 20587 20634 +47
+ Misses 5643 5596 -47
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5989?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5989/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5989/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.49% <0.00%> (+0.29%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5989/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.53% <0.00%> (+69.51%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5989?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5989?src=pr&el=footer). Last update [33d7506...ec916e3](https://codecov.io/gh/huggingface/transformers/pull/5989?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,988 | closed | EncoderDecoderModel: weight can not be init from the checkpoint | I try to use EncoderDecoderModel to train a Chinese summary model.
```Python
from transformers import BertConfig, EncoderDecoderConfig, EncoderDecoderModel
encoder_config = BertConfig.from_pretrained('bert-base-chinese')
decoder_config = BertConfig.from_pretrained('bert-base-chinese', is_decoder=True)
encoder_config.max_length = 512
decoder_config.max_length = 128
model = EncoderDecoderModel.from_encoder_decoder_pretrained('bert-base-chinese', 'bert-base-chinese',
encoder_config=encoder_config,
decoder_config=decoder_config)
```
However, I get a warning that some weights are not initialized from the checkpoint:
```
WARNING:transformers.modeling_utils:Some weights of the model checkpoint at bert-base-chinese were not used when initializing BertLMHeadModel: ['cls.seq_relationship.weight', 'cls.seq_relationship.bias']
- This IS expected if you are initializing BertLMHeadModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).
- This IS NOT expected if you are initializing BertLMHeadModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
WARNING:transformers.modeling_utils:Some weights of BertLMHeadModel were not initialized from the model checkpoint at bert-base-chinese and are newly initialized: ['bert.encoder.layer.0.crossattention.self.query.weight', 'bert.encoder.layer.0.crossattention.self.query.bias',
'bert.encoder.layer.0.crossattention.self.key.weight', 'bert.encoder.layer.0.crossattention.self.key.bias',
'bert.encoder.layer.0.crossattention.self.value.weight', 'bert.encoder.layer.0.crossattention.self.value.bias', 'bert.encoder.layer.0.crossattention.output.dense.weight', 'bert.encoder.layer.0.crossattention.output.dense.bias', 'bert.encoder.layer.0.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.0.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.1.crossattention.self.query.weight', 'bert.encoder.layer.1.crossattention.self.query.bias', 'bert.encoder.layer.1.crossattention.self.key.weight', 'bert.encoder.layer.1.crossattention.self.key.bias', 'bert.encoder.layer.1.crossattention.self.value.weight', 'bert.encoder.layer.1.crossattention.self.value.bias', 'bert.encoder.layer.1.crossattention.output.dense.weight', 'bert.encoder.layer.1.crossattention.output.dense.bias', 'bert.encoder.layer.1.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.1.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.2.crossattention.self.query.weight', 'bert.encoder.layer.2.crossattention.self.query.bias', 'bert.encoder.layer.2.crossattention.self.key.weight', 'bert.encoder.layer.2.crossattention.self.key.bias', 'bert.encoder.layer.2.crossattention.self.value.weight', 'bert.encoder.layer.2.crossattention.self.value.bias', 'bert.encoder.layer.2.crossattention.output.dense.weight', 'bert.encoder.layer.2.crossattention.output.dense.bias', 'bert.encoder.layer.2.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.2.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.3.crossattention.self.query.weight', 'bert.encoder.layer.3.crossattention.self.query.bias', 'bert.encoder.layer.3.crossattention.self.key.weight', 'bert.encoder.layer.3.crossattention.self.key.bias', 'bert.encoder.layer.3.crossattention.self.value.weight', 'bert.encoder.layer.3.crossattention.self.value.bias', 'bert.encoder.layer.3.crossattention.output.dense.weight', 'bert.encoder.layer.3.crossattention.output.dense.bias', 'bert.encoder.layer.3.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.3.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.4.crossattention.self.query.weight', 'bert.encoder.layer.4.crossattention.self.query.bias', 'bert.encoder.layer.4.crossattention.self.key.weight', 'bert.encoder.layer.4.crossattention.self.key.bias', 'bert.encoder.layer.4.crossattention.self.value.weight', 'bert.encoder.layer.4.crossattention.self.value.bias', 'bert.encoder.layer.4.crossattention.output.dense.weight', 'bert.encoder.layer.4.crossattention.output.dense.bias', 'bert.encoder.layer.4.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.4.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.5.crossattention.self.query.weight', 'bert.encoder.layer.5.crossattention.self.query.bias', 'bert.encoder.layer.5.crossattention.self.key.weight', 'bert.encoder.layer.5.crossattention.self.key.bias', 'bert.encoder.layer.5.crossattention.self.value.weight', 'bert.encoder.layer.5.crossattention.self.value.bias', 'bert.encoder.layer.5.crossattention.output.dense.weight', 'bert.encoder.layer.5.crossattention.output.dense.bias', 'bert.encoder.layer.5.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.5.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.6.crossattention.self.query.weight', 'bert.encoder.layer.6.crossattention.self.query.bias', 'bert.encoder.layer.6.crossattention.self.key.weight', 'bert.encoder.layer.6.crossattention.self.key.bias', 'bert.encoder.layer.6.crossattention.self.value.weight', 'bert.encoder.layer.6.crossattention.self.value.bias', 
'bert.encoder.layer.6.crossattention.output.dense.weight', 'bert.encoder.layer.6.crossattention.output.dense.bias', 'bert.encoder.layer.6.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.6.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.7.crossattention.self.query.weight', 'bert.encoder.layer.7.crossattention.self.query.bias', 'bert.encoder.layer.7.crossattention.self.key.weight', 'bert.encoder.layer.7.crossattention.self.key.bias', 'bert.encoder.layer.7.crossattention.self.value.weight', 'bert.encoder.layer.7.crossattention.self.value.bias', 'bert.encoder.layer.7.crossattention.output.dense.weight', 'bert.encoder.layer.7.crossattention.output.dense.bias', 'bert.encoder.layer.7.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.7.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.8.crossattention.self.query.weight', 'bert.encoder.layer.8.crossattention.self.query.bias', 'bert.encoder.layer.8.crossattention.self.key.weight', 'bert.encoder.layer.8.crossattention.self.key.bias', 'bert.encoder.layer.8.crossattention.self.value.weight', 'bert.encoder.layer.8.crossattention.self.value.bias', 'bert.encoder.layer.8.crossattention.output.dense.weight', 'bert.encoder.layer.8.crossattention.output.dense.bias', 'bert.encoder.layer.8.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.8.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.9.crossattention.self.query.weight', 'bert.encoder.layer.9.crossattention.self.query.bias', 'bert.encoder.layer.9.crossattention.self.key.weight', 'bert.encoder.layer.9.crossattention.self.key.bias', 'bert.encoder.layer.9.crossattention.self.value.weight', 'bert.encoder.layer.9.crossattention.self.value.bias', 'bert.encoder.layer.9.crossattention.output.dense.weight', 'bert.encoder.layer.9.crossattention.output.dense.bias', 'bert.encoder.layer.9.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.9.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.10.crossattention.self.query.weight', 'bert.encoder.layer.10.crossattention.self.query.bias', 'bert.encoder.layer.10.crossattention.self.key.weight', 'bert.encoder.layer.10.crossattention.self.key.bias', 'bert.encoder.layer.10.crossattention.self.value.weight', 'bert.encoder.layer.10.crossattention.self.value.bias', 'bert.encoder.layer.10.crossattention.output.dense.weight', 'bert.encoder.layer.10.crossattention.output.dense.bias', 'bert.encoder.layer.10.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.10.crossattention.output.LayerNorm.bias', 'bert.encoder.layer.11.crossattention.self.query.weight', 'bert.encoder.layer.11.crossattention.self.query.bias', 'bert.encoder.layer.11.crossattention.self.key.weight', 'bert.encoder.layer.11.crossattention.self.key.bias', 'bert.encoder.layer.11.crossattention.self.value.weight', 'bert.encoder.layer.11.crossattention.self.value.bias', 'bert.encoder.layer.11.crossattention.output.dense.weight', 'bert.encoder.layer.11.crossattention.output.dense.bias', 'bert.encoder.layer.11.crossattention.output.LayerNorm.weight', 'bert.encoder.layer.11.crossattention.output.LayerNorm.bias', 'cls.predictions.decoder.bias']
```
So, how do I fix this warning? | 07-23-2020 07:40:13 | 07-23-2020 07:40:13 | Hi @nghuyong , you won't need to fix this warning; it appears because the cross-attention layers are newly added to the model, since both checkpoints are encoder-only models and cross-attention weights are not part of them.
This warning will go away when you train the model; after training the EncoderDecoder model you can load it using just `EncoderDecoderModel.from_pretrained`.
Hope this helps.<|||||>@patil-suraj Thanks for your reply
I still have a question, should I follow the instruction in the model card of [bert2bert-cnn_dailymail-fp16](https://github.com/huggingface/transformers/blob/master/model_cards/patrickvonplaten/bert2bert-cnn_dailymail-fp16/README.md#training-script):
**make sure you checkout to the branch more_general_trainer_metric**
to train a seq2seq model<|||||>yes, that branch has a change in `Trainer` class to make it work with `EncoderDecoder` models.<|||||>I will open a cleaner PR soon to integrate this branch into master.<|||||>@patrickvonplaten I'm also modifying `Trainer` to support generative metrics and other seq2seq functionalities like label smoothing loss etc in this PR #6769, it's for `examples/seq2seq` right now, but if you think it's useful then can try to move it into `Trainer`<|||||>I think it's fine to leave it separated for now! Eventually it would be nice to move everything to Trainer<|||||>That will be really COOL ! Thanks for your work, it will be very convenient to use~ @patrickvonplaten @patil-suraj <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,987 | closed | Possible bug in preparing decoder_input_ids for T5 in seq2seq finetune.py | In finetune.py in the `_step` method ([link](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune.py#L131)), `decoder_input_ids` are prepared like this:
```python3
source_ids, source_mask, target_ids = batch["input_ids"], batch["attention_mask"], batch["decoder_input_ids"]
decoder_input_ids = target_ids[:, :-1].contiguous() # Why this line?
lm_labels = target_ids[:, 1:].clone() # why clone?
```
The `T5ForConditionalGeneration` automatically prepares `decoder_input_ids` using `lables` when they are not passed. It uses the _shift_right [method](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_t5.py#L620).
```python3
decoder_start_token_id = self.config.decoder_start_token_id
pad_token_id = self.config.pad_token_id
# shift inputs to the right
shifted_input_ids = input_ids.new_zeros(input_ids.shape)
shifted_input_ids[..., 1:] = input_ids[..., :-1].clone()
shifted_input_ids[..., 0] = decoder_start_token_id
```
So finetune.py doesn't add the decoder start token id required by the T5 model. This works for BART because the BART input starts with `bos` (`<s>`), which the tokenizer adds automatically. For the T5 model, this removes the first token from `lm_labels` and adds no start token to `decoder_input_ids`.
### To reproduce
```python3
from transformers import T5ForConditionalGeneration, T5Tokenizer, T5Config
tokenizer = T5Tokenizer.from_pretrained("t5-base")
config = T5Config.from_pretrained("t5-base")
batch = tokenizer(["simple is better than complex </s>"], return_tensors="pt")
# from finetune.py
pad_token_id = tokenizer.pad_token_id
target_ids = batch["input_ids"]
decoder_input_ids = target_ids[:, :-1].contiguous() # Why this line?
lm_labels = target_ids[:, 1:].clone() # why clone?
print(decoder_input_ids[0])
# => tensor([ 650, 19, 394, 145, 1561])
print(tokenizer.convert_ids_to_tokens(decoder_input_ids[0]))
# => ['▁simple', '▁is', '▁better', '▁than', '▁complex']
print(tokenizer.decode(lm_labels[0]))
# => is better than complex
# from T5PreTrainedModel._shift_right
decoder_start_token_id = config.decoder_start_token_id
input_ids = batch["input_ids"]
shifted_input_ids = input_ids.new_zeros(input_ids.shape)
shifted_input_ids[..., 1:] = input_ids[..., :-1].clone()
shifted_input_ids[..., 0] = decoder_start_token_id
print(shifted_input_ids[0])
# => tensor([ 0, 650, 19, 394, 145, 1561])
print(tokenizer.convert_ids_to_tokens(shifted_input_ids[0]))
# => ['<pad>', '▁simple', '▁is', '▁better', '▁than', '▁complex']
print(tokenizer.decode(input_ids[0]))
# => simple is better than complex
```
@sshleifer | 07-23-2020 05:42:13 | 07-23-2020 05:42:13 | This could be one of the reasons for strange T5 behaviour.<|||||>Yes this is an excellent catch @patil-suraj ! |
transformers | 5,986 | closed | how can I download T5-11B pretrained model? | It gives me the following error.
**OSError: Can't load weights for 't5-11b'. Make sure that:
- 't5-11b' is a correct model identifier listed on 'https://huggingface.co/models'
- or 't5-11b' is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt.**
| 07-23-2020 01:36:51 | 07-23-2020 01:36:51 | I get an error when trying to use the hosted inference API too.
> ⚠️ Error loading model Can't load weights for 't5-11b'. Make sure that: - 't5-11b' is a correct model identifier listed on 'https://huggingface.co/models' - or 't5-11b' is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt. OSError("Can't load weights for 't5-11b'. Make sure that:\n\n- 't5-11b' is a correct model identifier listed on 'https://huggingface.co/models'\n\n- or 't5-11b' is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt.\n\n")
<|||||>This is a known issue, and is due to the fact that the [checkpoint size is 42GB](https://huggingface.co/t5-11b#list-files) while the max supported file size for Cloudfront is [20GB](https://aws.amazon.com/blogs/aws/amazon-cloudfront-support-for-20-gb-objects/).
You should use the `use_cdn=False` flag in `AutoModel.from_pretrained(use_cdn=False)` to work around this for now.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Forgot to update this issue at the time, but note that this now works transparently<|||||>Hi. I am having trouble when downloading t5-11b model too but with bit different issue. I am keep getting 'connection rest by peer' error and failing to download the pretrained weights. Is there other way to download the model parameters?<|||||>Hi @nobellant215 you could try downloading the weights file directly: `https://huggingface.co/t5-11b/resolve/main/pytorch_model.bin` |
transformers | 5,985 | closed | Update doc of the model page | Fixes docstring to conform to sphinx syntax and rephrase when needed. Also fixed bad copy-pastes from PyTorch to TF.
Preview is [here](https://64023-155220641-gh.circle-artifacts.com/0/docs/_build/html/main_classes/model.html). | 07-22-2020 21:37:19 | 07-22-2020 21:37:19 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5985?src=pr&el=h1) Report
> Merging [#5985](https://codecov.io/gh/huggingface/transformers/pull/5985?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c3206eef44e9fbfca9ed4527f528107fcba31888&el=desc) will **decrease** coverage by `0.39%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5985?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5985 +/- ##
==========================================
- Coverage 78.65% 78.26% -0.40%
==========================================
Files 146 146
Lines 26227 26230 +3
==========================================
- Hits 20630 20530 -100
- Misses 5597 5700 +103
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5985?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5985/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.96% <100.00%> (+0.07%)` | :arrow_up: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5985/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.17% <100.00%> (+0.02%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5985/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.27% <0.00%> (-74.92%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5985/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.02% <0.00%> (-1.29%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5985/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.96% <0.00%> (-0.26%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5985/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.49% <0.00%> (+0.29%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5985/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5985?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5985?src=pr&el=footer). Last update [c3206ee...bfd8d3a](https://codecov.io/gh/huggingface/transformers/pull/5985?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,984 | closed | Albert pre-train from scratch convergence problem | # 🐛 Bug
Albert pre-train convergence problem
- The model training loss converged at 6.6 when using AlbertForMaskedLM as model class
- negative training loss when using AlbertForPretrain as model class
Note: I deliberately set the eval dataset to be the same as the training set in order to check the training loss in the last run.
## Information
Using AlbertForMaskedLM as the model class, figure shown below:
*(training loss figure omitted)*
Using AlbertForPretrain as the model class, figure shown below:
*(training loss figure omitted)*
Besides, when I was using the official `run_language_modeling.py`, the training loss on wikiText-2 also did not converge to 0; it stayed around 6.6 for several epochs.
Model I am using (Bert, XLNet ...):
Albert
Language I am using the model on (English, Chinese ...):
English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official task: wikiText-2
* [ ] my own task or dataset: (give details below)
## To reproduce
```
from transformers import (
AlbertConfig,
AlbertTokenizer,
BertTokenizer,
AlbertForPreTraining,
AlbertForMaskedLM,
LineByLineTextDataset,
TextDataset,
DataCollatorForLanguageModeling,
Trainer,
TrainingArguments
)
import math
albert_base_configuration = AlbertConfig(
hidden_size=768,
num_attention_heads=12,
intermediate_size=3072,
)
tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
# model = AlbertForPreTraining(config=albert_base_configuration)
model = AlbertForMaskedLM(config=albert_base_configuration)
train_dataset = LineByLineTextDataset(
tokenizer=tokenizer,
file_path="/home/ubuntu/data_local/wikitext-2-raw/wiki.train.raw",
block_size=512,
)
eval_dataset = LineByLineTextDataset(
tokenizer=tokenizer,
file_path="/home/ubuntu/data_local/wikitext-2-raw/wiki.test.raw",
block_size=512,
)
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)
training_args = TrainingArguments(
output_dir="./results/new",
overwrite_output_dir=True,
num_train_epochs=5,
per_gpu_train_batch_size=5,
save_steps=10_000,
save_total_limit=1,
logging_steps=100,
learning_rate=1.76e-3
)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
prediction_loss_only=True,
)
trainer.train()
trainer.save_model("./results/new")
eval_output = trainer.evaluate()
perplexity = math.exp(eval_output["eval_loss"])
print({"loss": eval_output["eval_loss"]})
result = {"perplexity": perplexity}
print(result)
```
Steps to reproduce the behavior:
1. Download the WikiText-2 dataset here: https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/
2. Run the script
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
- The training loss should converge to 0 in tiny datasets.
 - Cross entropy should always be positive and eventually converge to zero.
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: v3.02
- Platform: AWS instance
- Python version: 3.7.7
- PyTorch version (GPU?): 1.5.1 with GPU
- Tensorflow version (GPU?): None
- Using GPU in script?: Yes, 8 GPU in total
- Using distributed or parallel set-up in script? Not sure.
| 07-22-2020 21:34:05 | 07-22-2020 21:34:05 | @patil-suraj @julien-c @sgugger @sshleifer <|||||>I tried different learning rate like 1.76e-3, 5e-5.(https://github.com/huggingface/transformers/issues/4727)
and different dataset types: TextDataset and LineByLineTextDataset.
None of them works.<|||||>I don't think your script will work like this, since the model requires two sets of labels (labels and sentence_order_label). The `DataCollatorForLanguageModeling` will generate the labels, but you will need to add something that generates the pair of sentences and adds the sentence_order_label.<|||||>> I don't think your script will work like this, since the model requires two sets of labels (labels and sentence_order_label). The `DataCollatorForLanguageModeling` will generate the labels, but you will need to add something that generates the pair of sentences and adds the sentence_order_label.
 @sgugger Thanks for the reply! The `AlbertForPreTraining` model does require 2 different labels. I am planning to create a PR for that; before that,
**Could I have any hints about why the `AlbertForMaskedLM` model didn't converge to 0?** Actually this is my major concern: I am planning to train ALBERT on larger datasets like the whole of Wikipedia, and if the MLM task does not even converge on a small dataset (say, WikiText-2), I think it will impede my way to scaling up the dataset.<|||||>@sgugger I used a larger batch size and a lower learning rate, which did help the model converge to 0 on a very tiny text dataset (not WikiText-2); I repeated several articles several times to create that text dataset.<|||||>> I don't think your script will work like this, since the model requires two sets of labels (labels and sentence_order_label). The `DataCollatorForLanguageModeling` will generate the labels, but you will need to add something that generates the pair of sentences and adds the sentence_order_label.
 @sgugger Hi, I want to train a tiny ALBERT model. Could you tell me whether there is any method or class that can be used to generate the pair of sentences and add the sentence_order_label? Thanks a lot.
I got it after reading your commit. I think the class LineByLineWithSOPTextDataset in "transformers/data/datasets/language_modeling.py" can solve my problem. Thank you again.<|||||>I also resolved this with a larger batch size (96) and a lower learning_rate (5e-5). Thank you. |
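For anyone landing here later, a minimal sketch of what `AlbertForPreTraining` expects — the sentence pair and the order label are hand-made below purely for illustration; in practice a dataset/collator such as `LineByLineWithSOPTextDataset` has to produce them:

```python
import torch
from transformers import AlbertTokenizer, AlbertForPreTraining

tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
model = AlbertForPreTraining.from_pretrained("albert-base-v2")

# a (segment A, segment B) pair; 0 means the segments are in their original order
encoded = tokenizer("the first segment", "the segment that follows it", return_tensors="pt")
outputs = model(
    **encoded,
    labels=encoded["input_ids"],             # MLM targets (unmasked here, for illustration only)
    sentence_order_label=torch.tensor([0]),  # SOP target
)
print(outputs[0])  # combined masked-LM + sentence-order loss
```

The point is simply that both label tensors have to be present for the SOP head to contribute to the loss.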
transformers | 5,983 | closed | pipeline does not do truncation on long texts input, error message found | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): No specified
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. I have tried using the pipeline for my own purposes, but I realized it will cause errors on some tasks if I input a long sentence; it should do truncation automatically, but it does not. And the pipeline function does not take extra arguments, so we cannot add something like `truncation=True`. Here is an example on the sentiment-analysis task:
```python
from transformers import pipeline
nlp = pipeline('sentiment-analysis')
text = "This is an example"*300
nlp(text)
```
2. The error message is below:
```
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-31-0a8ce849cc29> in <module>()
2 nlp = pipeline('sentiment-analysis')
3 text = "This is an example"*300
----> 4 nlp(text)
11 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1722 # remove once script supports set_grad_enabled
1723 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1724 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1725
1726
IndexError: index out of range in self
```
3. The same error occurs on the NER task, and maybe on some other tasks as well (a possible workaround is sketched below).
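A possible workaround until the pipeline exposes truncation is to tokenize manually with `truncation=True` and call the model directly — a sketch, where the checkpoint name is the usual sentiment-analysis default and is an assumption here:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

text = "This is an example" * 300
inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs)[0]
probs = logits.softmax(dim=-1)[0]
label_id = int(probs.argmax())
print(model.config.id2label[label_id], float(probs[label_id]))
```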
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.2
- Platform: Google Colab
- Python version: Python 3.6.9
- PyTorch version (GPU?):1.5.1+cu101, no GPU
- Tensorflow version (GPU?): no
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
| 07-22-2020 20:40:48 | 07-22-2020 20:40:48 | Duplicate of #4224 |
transformers | 5,982 | closed | Cleanup Trainer and expose customization points | Clean up some parts of the code of `Trainer` and expose some function as customization points (doc will follow if you agree on this). | 07-22-2020 20:03:58 | 07-22-2020 20:03:58 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5982?src=pr&el=h1) Report
> Merging [#5982](https://codecov.io/gh/huggingface/transformers/pull/5982?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2c0da7803a75f0fc6e6d484e23ca283faa32d785&el=desc) will **decrease** coverage by `0.15%`.
> The diff coverage is `60.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5982?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5982 +/- ##
==========================================
- Coverage 78.66% 78.51% -0.16%
==========================================
Files 146 146
Lines 26227 26240 +13
==========================================
- Hits 20632 20602 -30
- Misses 5595 5638 +43
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5982?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5982/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `40.87% <60.00%> (+2.41%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5982/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5982/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (-0.26%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5982/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.49% <0.00%> (+0.29%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5982/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5982?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5982?src=pr&el=footer). Last update [2c0da78...e583bf0](https://codecov.io/gh/huggingface/transformers/pull/5982?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,981 | closed | [WIP] Proposal for TF model outputs | This picks up on the work @thomwolf did on #5740 to have self-documented outputs in TensorFlow that are compatible with the AutoGraph system.
 `TFModelOutput` subclasses `OrderedDict` while still being a dataclass, with some tweaks in the `__post_init__` (a minimal sketch follows the list below):
- only the not-None attributes are set as values for the dictionary because tensorflow refuses None as outputs.
- a `TFModelOutput` can be instantiated with the regular keyword arguments but also with an iterator passed as the first argument (as a dict would) like @thomwolf suggested in #5740, with a fix to make sure the first input is not a tensor (because tensors are iterables).
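A heavily simplified sketch of the idea (not the actual implementation in this PR):

```python
from collections import OrderedDict
from dataclasses import dataclass, fields
from typing import Any

@dataclass
class SketchTFModelOutput(OrderedDict):
    loss: Any = None
    logits: Any = None

    def __post_init__(self):
        # only non-None fields become dictionary entries, so TF never sees a None output
        for field in fields(self):
            value = getattr(self, field.name)
            if value is not None:
                self[field.name] = value

out = SketchTFModelOutput(logits=[0.1, 0.9])
print(list(out.keys()))  # ['logits'] -- loss was None, so it is skipped
print(out.logits)        # attribute access still works
print(out["logits"])     # dict access works too
```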
This breaks two things for the TensorFlow side of the library:
1. when unpacking `outputs`, a slice needs to be used, otherwise the keys of the dictionary are returned, not the values:
```
loss, logits = outputs
```
will fail, it needs to be changed to
```
loss, logits = outputs[:2]
```
2. when loading and saving a model using `SavedModel`, the subclass is lost and the output becomes a regular dictionary (see the change in `test_keras_save_load`).
Apart from those, these model outputs are fully backward compatible (you can index with an int or a slice and get the same behavior as before).
If this is accepted, I would strongly recommend using the same base class for PyTorch model outputs, which would imply the breaking change number 1 but would have the added benefit of:
1. fixing the problem with `DataParallel`
2. have consistent outputs between TF and PyTorch. | 07-22-2020 18:19:50 | 07-22-2020 18:19:50 | Sadly changing the `__iter__` function does not work as it's used inside TensorFlow (specifically in `tensorflow/python/util/nest.py`).<|||||>Ok closing this prototype now that we have a way forward (see [the forum](https://discuss.huggingface.co/t/new-model-output-types/195/8)). |
transformers | 5,980 | closed | add fine-tuned mobilebert squad v1 and squad v2 model cards | 07-22-2020 17:55:20 | 07-22-2020 17:55:20 | ||
transformers | 5,979 | closed | dynamic masking for RoBERTa model | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
<!-- Description of your issue -->
I saw that "dynamic masking" was mentioned on the README file for language modeling:
"In accordance to the RoBERTa paper, we use dynamic masking rather than static masking. The model may, therefore, converge slightly slower (over-fitting takes more epochs)."
I couldn't find which class this method is implemented in and how to enable this feature during pre-training using the Trainer class. Could someone please help me?
Thank you very much in advance.
| 07-22-2020 17:20:45 | 07-22-2020 17:20:45 | Hi @mingyang3github
masking is implemented in ` DataCollatorForLanguageModeling`'s `mask_tokens` method, [here](https://github.com/huggingface/transformers/blob/master/src/transformers/data/data_collator.py#L103)
 For pre-training you can use the language modeling script; it takes care of the dataset and masking. <|||||>> Hi @mingyang3github
> masking is implemented in ` DataCollatorForLanguageModeling`'s `mask_tokens` method, [here](https://github.com/huggingface/transformers/blob/master/src/transformers/data/data_collator.py#L103)
>
> For pre-training you can use the language modeling script, it takes of the dataset and masking.
Hi @patil-suraj ,
Thank you for your reply. If I understand this correctly, DataCollatorForLanguageModeling's method is static masking, right?
Based on RoBERTa model's paper, the dynamic masking refers to "training data was duplicated 10 times so that each sequence is masked in 10 different ways over the 40 epochs of training."
Has this method been implemented in DataCollatorForLanguageModeling?<|||||>It is dynamic masking: since the masking is handled in the collate function, the same example gets masked differently each time. Every batch first goes through the collate function before it is returned from the loader, so each time an example goes through collate it gets a different mask.<|||||>Hi @patil-suraj, does that mean we **always use dynamic masking** when using `DataCollatorForLanguageModeling`, no matter whether we are pre-training BERT, RoBERTa or something else?<|||||>@buaapengbo
Yes, the `DataCollatorForLanguageModeling` always does dynamic masking no matter the model<|||||>I thought I responded, but it turns out I didn't. Sorry.
@patil-suraj Thank you very much! That makes sense!
<|||||>Thanks for the answer. For the original roberta, they seem to use the same mask per sentence 4 times during training and 10 different masks for the same sentence during the whole training. But this function always returns different masks for the same sentence during training. This means the model always receives different masks for the same sentence during the whole training process right? |
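A small sketch showing the behaviour described above — the same example gets a different mask every time it goes through the collator (the collator is called directly on a list of token-id tensors here, as the 3.0.x API allows):

```python
from transformers import BertTokenizer, DataCollatorForLanguageModeling

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

example = tokenizer("the quick brown fox jumps over the lazy dog", return_tensors="pt")["input_ids"][0]
batch_1 = collator([example])
batch_2 = collator([example])

# the masked positions differ between the two calls with high probability
print((batch_1["input_ids"] != batch_2["input_ids"]).any().item())
```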
transformers | 5,978 | closed | Training data format | I have text on which I want to fine-tune the GPT-2 model for text autocompletion. In my text the sentences are separated by newlines; is there any format I should follow? When I trained on the data as-is, it did not give me proper results with the default training parameters. After splitting I have nearly 25k sentences for training. Please suggest. The training data looks like this:
<img width="1220" alt="Screenshot 2020-07-22 at 10 24 01 PM" src="https://user-images.githubusercontent.com/33617789/88205241-18a09c00-cc6a-11ea-924e-a8df103c8b94.png">
| 07-22-2020 16:54:37 | 07-22-2020 16:54:37 | ```
from transformers import AutoModelWithLMHead, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("language-modeling/output/")
model = AutoModelWithLMHead.from_pretrained("language-modeling/output/")
input_text="organic che"
features = tokenizer([input_text], return_tensors='pt')
output = model.generate(input_ids=features['input_ids'],
attention_mask=features['attention_mask'])
tokenizer.decode(output[0])
```<|||||>I want to make a query auto complete these are the user queries separated by new line
<|||||>@patil-suraj <|||||>should I add some special token at the end and start of every search query
<|||||>as far as I can see, your dataset is format is correct, also you don't need to add any special tokens, tokenizer adds that by default.<|||||>--line_by_line I added then the error is coming
You are attempting to pad samples but the tokenizer you are using (GPT2Tokenizer) does not have one.<|||||>I want to make a text auto complete am I using correct model ? do I have sufficient training sentences? should I add --line_by_line while training? Please help!!
@patil-suraj <|||||>Hi @vyaslkv you can use GPT-2 for auto complete, as for training examples you will need to experiment.
pinging @sgugger for the error.<|||||>LineByLineDataset is not really suitable for GPT2: you should concatenate your texts with the separation token and feed chunks of the the model size (can't remember if it's 512 or 1024 at the top of my mind but it should be in the config of the model). Like the error message says, GPT2 does not know padding.<|||||>@sgugger can you explain me a bit which token to use and how the code will look like in that case so sorry if I am asking too much or can you give me some reference which I could use
Thanks for responding <|||||>The separation token will automatically be added by the tokenizer. The rest is just standard python: concatenate all your lists of tokens in a big numpy array, then reshape it to `something x model_len`, something being the number of "sequences" (they'll actually span over several lines of your dataset) you can build with your dataset. You can then iterate through the rows of that array as a dataset.<|||||>In this what changes I need to do
```
[python run_language_modeling.py \
--output_dir=output \
--model_type=gpt2 \
--model_name_or_path=gpt2 \
--do_train \
--train_data_file=$TRAIN_FILE \
--do_eval \
--eval_data_file=$TEST_FILE](url)
```<|||||>The script will do this automatically for you if you don't add the line by line flag. (Except the sentences are separated by new lines and not the special token.) You can try to replace the new lines by "<|endoftext|>"<|||||>cool Thanks @sgugger Just to clarify If I add "<|endoftext|>" in place of new line I don't need to make any changes right?<|||||>Normally, no.<|||||>Thanks @sgugger Thanks a ton really for help so quick <|||||>@sgugger @patil-suraj I trained with the format you shared but it is generating some irrelevant text not from the training data I gave. What I am missing in this case
```
from transformers import AutoModelWithLMHead, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("output1/")
model = AutoModelWithLMHead.from_pretrained("output1/")
input_ids = tokenizer.encode('Vegetative reproduction of Agave', return_tensors='pt')
# set return_num_sequences > 1
beam_outputs = model.generate(
input_ids,
max_length=50,
num_beams=10,
no_repeat_ngram_size=2,
num_return_sequences=10,
early_stopping=True
)
# now we have 3 output sequences
print("Output:\n" + 100 * '-')
for i, beam_output in enumerate(beam_outputs):
print("{}: {}".format(i, tokenizer.decode(beam_output, skip_special_tokens=False)))
```<|||||>I want to generate text autocomplete from the training data text
<|||||>@sgugger can you please help<|||||>Hi @vyaslkv , I think the best place to ask this question is [HF forums](https://discuss.huggingface.co/) someone who has already worked on similar task can answer it better. Although @sgugger might have some answers :)<|||||>@patil-suraj Thanks I will put my question there as well
<|||||>https://discuss.huggingface.co/t/search-query-autocomplete-from-the-queries-i-have-in-my-data/546<|||||>@sgugger @patil-suraj no one has responded on the forum 😔<|||||>@patil-suraj I didn't get any response can you please help
<|||||>Hi @vyaslkv , I'll see if anyone I know has worked on similar problem and get back to you.<|||||>@patil-suraj Thanks<|||||>@patil-suraj ?<|||||>Hello, @patil-suraj we found anything related to that?
Thanks!!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,977 | closed | [demo] Broken fp16 test | 07-22-2020 16:27:16 | 07-22-2020 16:27:16 | ||
transformers | 5,976 | closed | [test] partial coverage for train_mbart_enro_cc25.sh | CI covers all the examples logic that doesn't run on CUDA.
Adds coverage for:
- user facing bash script train_mbart_cc25_enro.sh
- the idea that seq2seq/finetune.py should lead to models getting better/ val BLEU increasing.
This only takes 10s to run on brutasse, so I gave it a minute for github actions CI, the only automated tester that will run this.
Will add coverage for `--do_predict` once [this](https://github.com/PyTorchLightning/pytorch-lightning/issues/2673) is fixed. | 07-22-2020 16:00:53 | 07-22-2020 16:00:53 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5976?src=pr&el=h1) Report
> Merging [#5976](https://codecov.io/gh/huggingface/transformers/pull/5976?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2c0da7803a75f0fc6e6d484e23ca283faa32d785&el=desc) will **decrease** coverage by `1.38%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5976?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5976 +/- ##
==========================================
- Coverage 78.66% 77.28% -1.39%
==========================================
Files 146 146
Lines 26227 26227
==========================================
- Hits 20632 20270 -362
- Misses 5595 5957 +362
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5976?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5976/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5976/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5976/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.51%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5976/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5976/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5976?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5976?src=pr&el=footer). Last update [2c0da78...f83fdd3](https://codecov.io/gh/huggingface/transformers/pull/5976?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,975 | closed | Transformer-XL: Fixed returned outputs when using `return_tuple=True` | Fixes #5974
`mems` are returned again when using `return_tuple=True`. | 07-22-2020 14:43:26 | 07-22-2020 14:43:26 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5975?src=pr&el=h1) Report
> Merging [#5975](https://codecov.io/gh/huggingface/transformers/pull/5975?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ae67b2439fb15954bfd8f0fdf521cf1a650bafb9&el=desc) will **increase** coverage by `0.00%`.
> The diff coverage is `0.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5975?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5975 +/- ##
=======================================
Coverage 78.51% 78.51%
=======================================
Files 146 146
Lines 26214 26214
=======================================
+ Hits 20581 20582 +1
+ Misses 5633 5632 -1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5975?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `79.16% <0.00%> (ø)` | |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.25%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5975?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5975?src=pr&el=footer). Last update [ae67b24...ed8722d](https://codecov.io/gh/huggingface/transformers/pull/5975?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Was fixed in different PR: #5999 |
transformers | 5,974 | closed | Transformer-XL: no mems are returned when using 'return_tuple' | # 🐛 Bug
## Information
The forward pass of the `TransfoXLLMHeadModel` returns no `mems` when using `return_tuple=True`.
Model I am using: Transformer-XL
Language I am using the model on: English
The problem arises when using:
* [x] my own modified scripts: (give details below)
## To reproduce
```Python
from transformers import TransfoXLLMHeadModel, TransfoXLTokenizer
model = TransfoXLLMHeadModel.from_pretrained("transfo-xl-wt103")
model.train()
tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")
encoded = tokenizer("Max is walking the dog in the streets", return_tensors='pt')
outputs = model(input_ids=encoded['input_ids'], mems=None, labels=encoded['input_ids'], return_tuple=True)
loss, _, mems = outputs
print(loss.size())
print(len(mems)) # should be 18 due to the 18 layers
```
Output:
```
Traceback (most recent call last):
File "user/script.py", line 10, in <module>
loss, _, mems = outputs
ValueError: not enough values to unpack (expected 3, got 2)
```
## Expected behavior
Output:
```
torch.Size([1, 7])
18
```
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.2
- Platform: Windows-10-10.0.18362-SP0
- Python version: 3.6.10
- PyTorch version (GPU?): 1.4.0 (False)
- Tensorflow version (GPU?): 2.1.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 07-22-2020 14:27:26 | 07-22-2020 14:27:26 | Thanks for flagging the issue, the fix is on its way to review.<|||||>Oh, I opened a PR for this, but it seems you fixed it yourself. I will close my PR then, |
transformers | 5,973 | closed | [cleanup] much cruft in unittests | Anti patterns:
 - making a result dict and then using each of its keys. Why use the dict?
- delete all mentions of `check_loss_output`
 - use tuple equality: `self.assertEqual(tensor.shape, (bs, seq_len))` instead of
```python
self.assertListEqual(list(tensor.size()), [bs, seq_len])
```
This does not need to be done for all test files at once.
fix `templates/testing_xxx ` to reflect the new best practice. | 07-22-2020 14:17:30 | 07-22-2020 14:17:30 | @stas00, this might be up your alley! <|||||>will do, thank you!<|||||>a large part is done: https://github.com/huggingface/transformers/pull/6196
<|||||>problem: not all `results` in tests are objects, some are plain dict and can't be called with `.key_name`. So there is a large chunk of tests that haven't been re-written because of that.
So we could add a wrapper to the test utils that will make the code consistent with those tests where `results` is an object.
```
class DictAttr:
def __init__(self, args):
for k in args:
setattr(self, k, args[k])
def __getitem__(self, item):
return getattr(self, item)
```
and then in the tests:
```
# import DictAttr and then
data = {
"loss_1": 1,
"mems_1": 2,
}
result = DictAttr(data)
```
now it works either way:
```
print(result["loss_1"]) # 1
print(result.loss_1) # 1
```
Not sure about the best class name for this, suggestions?
So practically, with this change the test code will look
```
--- a/tests/test_modeling_tf_albert.py
+++ b/tests/test_modeling_tf_albert.py
@@ -136,14 +136,12 @@ class TFAlbertModelTester:
sequence_output, pooled_output = model(input_ids)
- result = {
+ result = DictAttr({
"sequence_output": sequence_output.numpy(),
"pooled_output": pooled_output.numpy(),
- }
- self.parent.assertListEqual(
- list(result["sequence_output"].shape), [self.batch_size, self.seq_length, self.hidden_size]
- )
- self.parent.assertListEqual(list(result["pooled_output"].shape), [self.batch_size, self.hidden_size])
+ })
+ self.parent.assertEqual(result.sequence_output.shape, (self.batch_size, self.seq_length, self.hidden_size))
+ self.parent.assertEqual(result.pooled_output.shape, (self.batch_size, self.hidden_size))
```
plus an extra import of whatever the final class name will be.<|||||>1) can we just never create a `result` dict? It just creates unneeded indirection.
2) If we need to create a results dict, cant we just do key lookup with `[key]`
3) checkout `collections.UserDict`
<|||||>> * can we just never create a `result` dict? It just creates unneeded indirection.
From looking at the existing tests, it sort of mimics the returns objects, but doesn't have the accessors for the keys.
So I'm not quite sure what you propose. A short code sample is usually most demonstrative.
> * If we need to create a results dict, cant we just do key lookup with `[key]`
I lost you, unless I am misunderstanding what you're suggesting, isn't the big part of this "issue" - replacing `[key]` with `.key`. otherwise nothing else needs to be done and this ticket can be closed - except now it's a mish-mash of results.key (most pt tests) and results["key"] (most tf tests)
> * checkout `collections.UserDict`
I checked - it doesn't provide `[key]` and `.key` functionality.<|||||>ignore #1, I was confused.
when I made this issue, sylvain hadn't merged #6155 , so I guess what remains of the issue is
Sorry for the miscommunication!
<|||||>Your #6196 will completely close the issue.<|||||>So I just need to complete: "delete all mentions of check_loss_output" then - there is one remaining test there.
edit: now done<|||||>I plan to do part 2 PR to make the rest of the tests consistent with this change, but I have to wait for this to be merged as it impacts too many files to proceed easily. |
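For completeness, a sketch of the `collections.UserDict` route discussed above — plain `UserDict` only gives `result["key"]`, but a tiny subclass can add `result.key` as well:

```python
from collections import UserDict

class AttrDict(UserDict):
    def __getattr__(self, name):
        # fall back to the underlying dict for attribute-style access
        try:
            return self.data[name]
        except KeyError:
            raise AttributeError(name)

result = AttrDict({"loss_1": 1, "mems_1": 2})
print(result["loss_1"])  # 1
print(result.loss_1)     # 1
```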
transformers | 5,972 | closed | Update to match renamed attributes in fairseq master | Fix #5917 : RobertaModel no longer have model.encoder and args.num_classes attributes as of 5/28/20. | 07-22-2020 13:52:56 | 07-22-2020 13:52:56 | Do you mind doing the code quality so that we can merge? You can do `pip install -e ".[quality]` followed by `make style && make quality`.
If you don't have access to your fork, I can push on it directly to update the code quality and merge after.<|||||>Hi @LysandreJik,
I tried to do the code quality but couldn't pass `make quality` because `black` and `flake8` return with a non-zero exit code for some reason (i'm on windows). Here are their outputs:
```shell
black --check --line-length 119 --target-version py35 examples templates tests src utils
would reformat C:\Users\u165983\Documents\transformers\src\transformers\__init__.py
would reformat C:\Users\u165983\Documents\transformers\templates\adding_a_new_example_script\run_xxx.py
would reformat C:\Users\u165983\Documents\transformers\templates\adding_a_new_example_script\utils_xxx.py
Oh no! 💥 💔 💥
3 files would be reformatted, 340 files would be left unchanged.
flake8 examples templates tests src utils
tests\test_tokenization_common.py:31:5: F401 'transformers.PretrainedConfig' imported but unused
tests\test_tokenization_common.py:31:5: F401 'transformers.PreTrainedModel' imported but unused
tests\test_tokenization_common.py:31:5: F401 'transformers.TFPreTrainedModel' imported but unused
src\transformers\pipelines.py:72:5: F401 '.modeling_utils.PreTrainedModel' imported but unused
src\transformers\pipelines.py:73:5: F401 '.modeling_tf_utils.TFPreTrainedModel' imported but unused
```
I still commited and pushed on my fork. Tell me if there is anything more I can do.<|||||>`make style` checks `black` and `isort` and updates the files, while `make quality` checks `black`, `isort` and `flake8`, but doesn't update the files and instead tells you what fails.
I ran `make style` on your repository and pushed directly on it, thanks for iterating! |
transformers | 5,971 | closed | ImportError: cannot import name 'MODEL_WITH_LM_HEAD_MAPPING' | such error really annoys me. | 07-22-2020 13:14:03 | 07-22-2020 13:14:03 | |
transformers | 5,970 | closed | [WIP] Ner pipeline grouped_entities fixes | There are many issues with ner pipeline using grouped_entities=True
https://github.com/huggingface/transformers/issues/5077
https://github.com/huggingface/transformers/issues/4816
https://github.com/huggingface/transformers/issues/5730
https://github.com/huggingface/transformers/issues/5609
https://github.com/huggingface/transformers/issues/6514
https://github.com/huggingface/transformers/issues/5541
- [x] [Bug Fix] add an option `ignore_subwords` to ignore subsequent ##wordpieces in predictions. Because some models train on only the first token of a word and not on the subsequent wordpieces (BERT NER default). So it makes sense doing the same thing at inference time.
- The simplest fix is to just group the subwords with the first wordpiece.
- [TODO] how to handle ignored scores? just set them to 0 and calculate zero invariant mean ?
- [TODO] handle different wordpiece_prefix ## ? possible approaches:
get it from tokenizer? but currently most tokenizers dont have a wordpiece_prefix property?
have an _is_subword(token)
- [x] [Feature add] added option to `skip_special_tokens`. Cause It was harder to remove them after grouping.
- [x] [Additional Changes] remove B/I prefix on returned grouped_entities
Edit: Ignored subwords' scores are also ignored by setting them to nan and using nanmean
Edit: B entities of different type are separated (as per BIO tag definition)
Edit: skip_special_tokens is now the default behavior
Edit: ignore_subwords is now the default behavior
Edit: more flexibility for custom non-standard tokenizers through tokenizer.is_subword_fn, tokenizer.convert_tokens_to_string
Edit: [fix UNK token related bugs by mapping UNK tokens to the correct original string] Use fast tokenizer or pass offset_mapping
# Usage
`pipeline('ner', model=model, tokenizer=tokenizer, ignore_labels=[], grouped_entities=True, ignore_subwords=True)`
I'm wondering: should the B & I part maybe be separated from the entity type part? In the sense that you average the entities (disregarding the B/I part) and vice versa. I now have the feeling that only the first subtoken decides whether the complete word is a B or an I. <|||||>I want to complete this, but I ran into another issue while working on it:
All [UNK] tokens get mapped to [UNK] in the output, instead of the actual input token (because the code is getting from ids->tokens), Also [UNK]s gets lost when using skip_special_tokens (https://github.com/huggingface/transformers/issues/6863)
While this is a simple token alignment issue that can be solved by using `offset_mappings`, `offset_mappings` is only available with fast tokenizers, so I'm wondering what a more general approach to solving this would be.<|||||>Dear @cceyda,
In the last couple of days I started to work with Huggingface's transformers and especially NER-classification. I ran into issues that has been previously addressed in other issues you just mentioned at the beginning. Especially that subtokens that were classified with 'O' were not properly merged with the full token.
For example (Dutch):
sentence = "Als we Volkswagens OR-voorzitter **Bernd Osterloh** moeten geloven, dan moet dat binnen drie jaar het geval zijn."
Gives me as group entities:
[{'entity_group': 'B-per', 'score': 0.9999980926513672, 'word': 'Bern'},
{'entity_group': 'I-per', 'score': 0.9999990463256836, 'word': 'Ost'}]
I expect:
[{'entity_group': 'B-per', 'score': 0.9999980926513672, 'word': 'Bernd'},
{'entity_group': 'I-per', 'score': 0.9999990463256836, 'word': 'Osterloh'}]
However, the considered subtokens are classified as 'O':
{'word': '[CLS]', 'score': 0.9999999403953552, 'entity': 'O', 'index': 0}
{'word': 'Als', 'score': 0.9999999403953552, 'entity': 'O', 'index': 1}
{'word': 'we', 'score': 0.9999999403953552, 'entity': 'O', 'index': 2}
{'word': 'Volkswagen', 'score': 0.9999955296516418, 'entity': 'B-misc', 'index': 3}
{'word': '##s', 'score': 0.9999999403953552, 'entity': 'O', 'index': 4}
{'word': 'O', 'score': 0.9981945157051086, 'entity': 'I-misc', 'index': 5}
{'word': '##R', 'score': 0.9999998807907104, 'entity': 'O', 'index': 6}
{'word': '-', 'score': 0.9999999403953552, 'entity': 'O', 'index': 7}
{'word': 'voorzitter', 'score': 0.9999998807907104, 'entity': 'O', 'index': 8}
{'word': 'Bern', 'score': 0.9999980926513672, 'entity': 'B-per', 'index': 9}
**{'word': '##d', 'score': 0.9999998807907104, 'entity': 'O', 'index': 10}**
{'word': 'Ost', 'score': 0.9999990463256836, 'entity': 'I-per', 'index': 11}
**{'word': '##er', 'score': 0.9999998807907104, 'entity': 'O', 'index': 12}
{'word': '##lo', 'score': 0.9999997615814209, 'entity': 'O', 'index': 13}
{'word': '##h', 'score': 0.9999998807907104, 'entity': 'O', 'index': 14}**
{'word': 'moeten', 'score': 0.9999999403953552, 'entity': 'O', 'index': 15}
{'word': 'geloven', 'score': 0.9999998807907104, 'entity': 'O', 'index': 16}
{'word': ',', 'score': 0.9999999403953552, 'entity': 'O', 'index': 17}
{'word': 'dan', 'score': 0.9999999403953552, 'entity': 'O', 'index': 18}
{'word': 'moet', 'score': 0.9999999403953552, 'entity': 'O', 'index': 19}
{'word': 'dat', 'score': 0.9999999403953552, 'entity': 'O', 'index': 20}
{'word': 'binnen', 'score': 0.9999999403953552, 'entity': 'O', 'index': 21}
{'word': 'drie', 'score': 0.9999999403953552, 'entity': 'O', 'index': 22}
{'word': 'jaar', 'score': 0.9999999403953552, 'entity': 'O', 'index': 23}
{'word': 'het', 'score': 0.9999999403953552, 'entity': 'O', 'index': 24}
{'word': 'geval', 'score': 0.9999999403953552, 'entity': 'O', 'index': 25}
{'word': 'zijn', 'score': 0.9999999403953552, 'entity': 'O', 'index': 26}
{'word': '.', 'score': 0.9999999403953552, 'entity': 'O', 'index': 27}
{'word': '[SEP]', 'score': 0.9999999403953552, 'entity': 'O', 'index': 28}
I believe your pull request addresses these issues properly.
However, I saw the merge did not complete since it failed on some tasks.
I was wondering if there is still the intention to solve these issues.
Disclaimer: I am a total newbie to git (just set up an account), so please be mild, haha.
Any help is much appreciated!
Thank you in advance,
Monique<|||||>@cceyda I actually want this PR to move forward. Are you okay collaborating on your fork (can add me as collaborator)? I can help out with some of the issues failing so we can get this merged :smile:
<|||||>@enzoampil I have added you as a collaborator.
Also pushed some additional changes addressing the [UNK] token mapping problem I mentioned before.
Still there are some things I'm not very satisfied with:
1. subword prefix was fixed to '##' before. with the latest change I added a check to see if the tokenizer has an `is_subword_fn `defined (still dont like handling it this way). I know some tokenizers have `subword_prefix` but most don't and this was the most flexible solution for now.
2. `offset_mappings` is needed to resolve [UNK] tokens, but is only available with fast tokenizers. Fast tokenizers don't have `convert_ids_to_tokens` so had to implement a hacky solution for those aswell.
3. `skip_special_tokens` also dropped [UNK] tokens so I had to change things and rely on `special_tokens_mask`.
It is not optimal but it worked for my use cases.
Haven't had a chance to look at the failing tests yet :/
<|||||>I have changed the `ignore_subwords` default to True which covers cases like
```
[
{'word': 'Cons', 'score': 0.9994944930076599, 'entity': 'B-PER', 'index': 1},
{'word': '##uelo', 'score': 0.802545428276062, 'entity': 'B-PER', 'index': 2}
]
```
And honestly I don't know why subwords shouldn't be ignored for most cases. (Unless there is need for some custom logic that determines a words tag; ie by averaging the wordpieces etc etc. In which case grouped_entities shouldn't be used 🤔 )
IMO, mid-word inconsistencies made by the model while `ignore_subwords = False` shouldn't affect the pipeline's output logic.
[todo]
- torch tests are passing for now but probably should add more cases? (I can't see why the tf tests are failing though, don't have dev env for that)
- should add the new parameters to the doc strings.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5970?src=pr&el=h1) Report
> Merging [#5970](https://codecov.io/gh/huggingface/transformers/pull/5970?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7087d9b1c07781cc2eee45c97d3eadf6a1ba2b44?el=desc) will **increase** coverage by `26.30%`.
> The diff coverage is `71.87%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5970?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5970 +/- ##
===========================================
+ Coverage 52.05% 78.36% +26.30%
===========================================
Files 236 168 -68
Lines 43336 32338 -10998
===========================================
+ Hits 22560 25341 +2781
+ Misses 20776 6997 -13779
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5970?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/5970/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `80.59% <71.87%> (+61.46%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5970/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `18.94% <0.00%> (-60.01%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/5970/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-56.29%)` | :arrow_down: |
| [src/transformers/tokenization\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/5970/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY2FtZW1iZXJ0LnB5) | `37.03% <0.00%> (-29.23%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5970/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.84% <0.00%> (-11.00%)` | :arrow_down: |
| [src/transformers/data/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/5970/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL19faW5pdF9fLnB5) | `100.00% <0.00%> (ø)` | |
| [src/transformers/modeling\_mmbt.py](https://codecov.io/gh/huggingface/transformers/pull/5970/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tbWJ0LnB5) | `23.47% <0.00%> (ø)` | |
| [src/transformers/modeling\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/5970/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYmFydC5weQ==) | `100.00% <0.00%> (ø)` | |
| [src/transformers/modeling\_outputs.py](https://codecov.io/gh/huggingface/transformers/pull/5970/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vdXRwdXRzLnB5) | `100.00% <0.00%> (ø)` | |
| [src/transformers/modeling\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/5970/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19wZWdhc3VzLnB5) | `100.00% <0.00%> (ø)` | |
| ... and [217 more](https://codecov.io/gh/huggingface/transformers/pull/5970/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5970?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5970?src=pr&el=footer). Last update [7087d9b...47797d1](https://codecov.io/gh/huggingface/transformers/pull/5970?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Dear @cceyda,
Last two days I worked on your branch to see how it performs on my own input texts.
However, I came accross the following issue I would like to point out to you:
When I use the following line of code (as you suggest under 'Usage' above):
pipeline('ner', model=model, tokenizer=tokenizer, ignore_labels=[], grouped_entities=True, skip_special_tokens=True, ignore_subwords=True)
I get the error:
TypeError: __init__() got an unexpected keyword argument 'skip_special_tokens'.
When looking in the file transformer.pipelines and looking specifically for the tokenclassificationpipeline, it seems that it is not yet implemented. Or am I missing something?
Best,
Monique<|||||>@Monique497 sorry for the delay
A couple of things have changed since I first wrote that example:
- special tokens ([CLS][PAD][SEP]) are always skipped (per comments above) so you don't need that kwarg. This is also valid for `grouped_entities=False`
``` py
from transformers import (
AutoModelForTokenClassification,
AutoTokenizer,
pipeline,
)
model = AutoModelForTokenClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True) # note the fast tokenizer use
# ignore_subwords = True by default
nlp = pipeline("ner",model=model,tokenizer=tokenizer, grouped_entities=True)
inputs="test sentence"
output=nlp(inputs)
```
- Another important thing is you have to **use a fast tokenizer** OR pass `offset_mapping` as a parameter because the [UNK] token resolution depends on this. (maybe I should rename this to offset_mappings). This is also valid for `grouped_entities=False`
```py
# you can pass it like this
nlp(inputs,offset_mapping=mappings_you_calculate)
```
- If you are using a custom tokenizer that treats subwords differently (ie not starting with '##'), you can pass a function implementing your custom logic through `tokenizer.is_subword_fn` and `tokenizer.convert_tokens_to_string`
I don't know if this is the best way to handle non standard tokenizations, but I use some custom non-standard tokenizers for Korean and this solution gave me enough flexibility.
something like this:
```py
def sub_fn(token):
if token.starts_with("%%"): return True
tokenizer.is_subword_fn=sub_fn
def convert_tokens_to_string(self, tokens):
out_string = " ".join(tokens).replace(" %%", "").strip()
return out_string
tokenizer.convert_tokens_to_string=convert_tokens_to_string
```
@enzoampil what are your thoughts on this?
<|||||>@cceyda Sorry for taking a while, lemme do another review!<|||||>@LysandreJik @julien-c This looks good to me. Please advise if this is good to merge or if you think there's still anything missing before merging :grin:<|||||>Thanks for iterating! I'll check this today.<|||||>Merging this as soon as it's green, thank you for iterating on the PR! Sorry this took so long to merge.<|||||>Thanks @LysandreJik and congrats @cceyda !! :smile:<|||||>fix_bart_gpu<|||||>FYI this broke the NER pipeline:
```py
from transformers import pipeline
nlp = pipeline("ner")
nlp("My name is Alex and I live in New York")
```
crashes with the following error:
```
raise Exception("To decode [UNK] tokens use a fast tokenizer or provide offset_mapping parameter")
Exception: To decode [UNK] tokens use a fast tokenizer or provide offset_mapping parameter
```
Trying to see if this can be quickly patched, otherwise we'll revert the PR while we patch this.<|||||>oops! although returning unk tokens with slow tokenizers are not the best, I agree not forcing a fast tokenizer with a default of ignore_subword=True looks better for keeping the compatibility. I saw a bit late the _args_parser line was mis-merged during this pr merge and I see it is fixed/improved on the patch. I wasn't sure on how to test for the offset_mapping argument with the new test structure (which looks to be good at the patch). Sorry for the trouble 😅 @LysandreJik <|||||>No worries, thanks for taking a look at the patch! |
transformers | 5,969 | closed | run_squad example doesn't work with XLM model | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): XLM
Language I am using the model on (English, Chinese ...): Korean
The problem arises when using:
* [ ] the official example scripts: run_squad.py
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: KorQuAD v1.0
## To reproduce
Steps to reproduce the behavior:
1. Download the KorQuAD v1.0 data (it is formatted exactly as SQuAD v1.0)
2. Run the run_squad script with KorQuAD instead of SQuAD
3. Use the XLM model type (with specific model: xlm-mlm-100-1280_384)
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
Since the data is formatted exactly the same as the english version, except its in Korean now, I expect the script to be able to fully train and evaluate on the new data. In fact, the script successfully does this when you use multi-lingual BERT instead of XLM. However, when you use XLM, you get invalid input in the forward call:
"cls_index" and "p_mask" are unexpected inputs.
I tried hacking in a fix for this, specifically just deleting cls_index and p_mask from the inputs when using XLM instead of BERT (sketched after the traceback below). This made the model train correctly, but when we try to evaluate, we crash on a new error. I don't know if it's related or not, but this is in the squad metrics, not within run_squad.py, so I was less keen to start messing with that and instead decided to open this issue.
It specifically crashes here:
transformers/data/metrics/squad_metrics.py", line 629, in compute_predictions_log_probs
cur_null_score = result.cls_logits
AttributeError: 'SquadResult' object has no attribute 'cls_logits'
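For discussion, a rough sketch of the training-time hack described above (the `inputs` dict and `model_type` names are assumed to match run_squad.py; this is not an official fix, and it does not address the evaluation crash):

```python
def drop_unsupported_qa_inputs(inputs: dict, model_type: str) -> dict:
    # XLM's simple QA head rejects these two keys, so strip them before the forward call
    if model_type == "xlm":
        inputs.pop("cls_index", None)
        inputs.pop("p_mask", None)
    return inputs
```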
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.2
- Platform: Linux-3.10.0-862.14.4.el7.x86_64-x86_64-with-centos-7.5.1804-Core
- Python version: 3.7.3
- PyTorch version (GPU?): 1.4.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No (Just 1 GPU)
| 07-22-2020 13:01:26 | 07-22-2020 13:01:26 | see also #3535<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,968 | closed | Loss becoming nearly zero in first 5K steps when training LM from scratch | I am training the ALBERT LM model from scratch.
I have already trained it for Hindi and Bangla and it was working fine, but when I train on Gujarati, the loss becomes nearly zero within 5K steps.
What could be the reason for the sudden drop in the loss? Can anyone suggest what the cause might be, or how to debug such an issue?
Any suggestions?
| 07-22-2020 12:02:59 | 07-22-2020 12:02:59 | Hi! This is an interesting question, have you tried asking it over on the forums at https://discuss.huggingface.co ? You'll probably get more answers over there.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,967 | closed | Actually the extra_id are from 0-99 and not from 1-100 | ```
a = tokenizer.encode("we got a <extra_id_99>", return_tensors='pt',add_special_tokens=True)
print(a)
>tensor([[ 62, 530, 3, 9, 32000]])
a = tokenizer.encode("we got a <extra_id_100>", return_tensors='pt',add_special_tokens=True)
print(a)
>tensor([[ 62, 530, 3, 9, 3, 2, 25666, 834, 23, 26,
834, 2915, 3155]])
``` | 07-22-2020 11:39:24 | 07-22-2020 11:39:24 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5967?src=pr&el=h1) Report
> Merging [#5967](https://codecov.io/gh/huggingface/transformers/pull/5967?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ae67b2439fb15954bfd8f0fdf521cf1a650bafb9&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5967?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5967 +/- ##
=======================================
Coverage 78.51% 78.51%
=======================================
Files 146 146
Lines 26214 26214
=======================================
Hits 20581 20581
Misses 5633 5633
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5967?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5967?src=pr&el=footer). Last update [ae67b24...e8e003f](https://codecov.io/gh/huggingface/transformers/pull/5967?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Hi @patrickvonplaten, can you have a look?<|||||>Thansk @orena1 !<|||||>pinging @patrickvonplaten for notification |
transformers | 5,966 | closed | Bug fix: NER pipeline shouldn't group separate entities of same type | ## Effects
nlp=pipeline('ner', ... , grouped_entities=True)
## Fixes
Separate entities shouldn't be grouped together even if they are of the same type:
( B-type1 B-type1 ) != ( B-type1 I-type1 )
## Example
"something something Istanbul Los Angeles something something"
Current output: [ (O O) (B-type1 B-type1 I-type1) (O O) ]
Fixed output: [ (O O) (B-type1) (B-type1 I-type1) (O O) ]
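A small sketch of the intended grouping rule (not the pipeline's actual code): a new `B-` tag always starts a new group, even when the previous group has the same entity type.

```python
tags = ["O", "B-LOC", "B-LOC", "I-LOC", "O"]  # e.g. "Istanbul" then "Los Angeles"

groups, current = [], []
for tag in tags:
    if tag == "O" or tag.startswith("B-"):
        if current:                # close the previous entity group
            groups.append(current)
        current = [tag] if tag != "O" else []
    else:                          # an "I-" tag continues the open group
        current.append(tag)
if current:
    groups.append(current)

print(groups)  # [['B-LOC'], ['B-LOC', 'I-LOC']]
```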
| 07-22-2020 11:16:19 | 07-22-2020 11:16:19 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,965 | closed | BerTweet tokenizer issue | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
Bertweet
## To reproduce
Steps to reproduce the behavior:
1. tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base")
2.
3.
OSError: Model name 'vinai/bertweet-base' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). We assumed 'vinai/bertweet-base' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.
## Expected behavior
the tokenizer should be loaded correctly.
https://huggingface.co/vinai/bertweet-base?text=Paris+is+the+%3Cmask%3E+of+France.
## Environment info
- `transformers` version: 2.10.0
- Platform: ubuntu 18.04
- Python version: 3.7
- PyTorch version (GPU?): 1.5
- Tensorflow version (GPU?):
- Using GPU in script?: v100
- Using distributed or parallel set-up in script?: no
| 07-22-2020 08:53:22 | 07-22-2020 08:53:22 | This model is defined as a `roberta` model but its tokenizer seems to be a Wordpiece tokenizer (based on the `vocab.txt` file), whereas Roberta uses a Byte-level BPE.
This is not currently supported out of the box by our `AutoTokenizer/AutoModel` features (model type ≠ tokenizer type) nor by our Pipelines but I'd like to support this in the future.<|||||>For now, you'll have to initialize this tokenizer + model independently.
```
BertTokenizer.from_pretrained("...")
AutoModel.from_pretrained("...")
```
Also cc'ing model author @datquocnguyen<|||||>I am working on it (I just have uploaded the model to huggingface yesterday).
I will create pull requests soon, so that users can make use of the following scripts:
```
tokenizer = BertweetTokenizer.from_pretrained("vinai/bertweet-base")
bertweet = BertweetModel.from_pretrained("vinai/bertweet-base")
```
Please stay tuned!
<|||||>@julien-c @datquocnguyen Thanks for your answer.
I just tried the AutoModel, I had some weird "CUDA illegal memory access error" after 2 steps. It works fine with other models such as electra or roberta. I do not know if it is related to some wrong encoding with the tokenizer (I am using the fairseq tokenizer as the tokenizer from huggingface is not working even with BertTokenizer) or something else.
update: I may have found the issue. It may come from the max length which seems to be 130, contrary to regular Bert Base model. I was using a longer length sequence.<|||||>> I am working on it (I just have uploaded the model to huggingface yesterday).
> I will create pull requests soon, so that users can make use of the following scripts:
>
> ```
> tokenizer = BertweetTokenizer.from_pretrained("vinai/bertweet-base")
> bertweet = BertweetModel.from_pretrained("vinai/bertweet-base")
> ```
>
> Please stay tuned!
Looking forward to it !<|||||>@nightlessbaron @Shiro-LK @julien-c FYI, I have just created a pull request #6129 for adding BERTweet and PhoBERT into transformers
@nightlessbaron In case you want to use BERTweet right away, you might have a look at this fork https://github.com/datquocnguyen/transformers
Cheers,
Dat.<|||||>@datquocnguyen Looks like the error still exists. From https://huggingface.co/vinai/bertweet-base, I run
tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base")
It gives:
OSError: Model name 'vinai/bertweet-base' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). We assumed 'vinai/bertweet-base' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.
Also, https://huggingface.co/vinai/bertweet-base?text=Paris+is+the+%3Cmask%3E+of+France gives an error<|||||>I had the same error @steveguang had. Is there any solution?<|||||>@steveguang @SergioBarretoJr your issue has now been solved.
Also to @Shiro-LK @nightlessbaron Please check https://github.com/datquocnguyen/transformers
@julien-c Please help review this pull request #6129 BERTweet now works in Auto mode and without an additional dependency fastBPE.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>This was solved by @datquocnguyen |
transformers | 5,964 | closed | text classification reuse without classifier | thanks in advance.
I want to first train on more data, which means more labels.
(it's like pretraining)
After that, I select only some of the labels and train again.
But an error happens with the code below:
tokenizer = RobertaTokenizer.from_pretrained(pretrained_path, do_lower_case=False)
model = RobertaForSequenceClassification.from_pretrained(pretrained_path, num_labels=10)
error message is like below.
Error(s) in loading state_dict for RobertaForSequenceClassification:
size mismatch for classifier.out_proj.weight: copying a param with shape torch.Size([19, 768]) from checkpoint, the shape in current model is torch.Size([10, 768]).
size mismatch for classifier.out_proj.bias: copying a param with shape torch.Size([19]) from checkpoint, the shape in current model is torch.Size([10]).
how can I do it?? | 07-22-2020 08:39:41 | 07-22-2020 08:39:41 | Hi! This doesn't currently work, because it's trying to instantiate a classification layer with 10 labels, while the checkpoint's classification layer has 19 labels.
What you want is to keep the base layers, but ignore the classification layers. In order to do so, you can load your checkpoint in a base `RobertaModel`. This will only load the base model. You can then save that model to a checkpoint, and load that checkpoint from a `RobertaForSequenceClassification` model.
Here's how to do this:
```py
from transformers import RobertaForSequenceClassification, RobertaConfig, RobertaModel
model = RobertaModel.from_pretrained(pretrained_path)
model.save_pretrained(f"{pretrained_path}-base-model")
config = RobertaConfig(num_labels=10)
model = RobertaForSequenceClassification.from_pretrained(f"{pretrained_path}-base-model", config=config)
```
You should see the output:
```
Some weights of RobertaForSequenceClassification were not initialized from the model checkpoint at here-base-model and are newly initialized: ['classifier.dense.weight', 'classifier.out_proj.bias', 'classifier.out_proj.weight', 'classifier.dense.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
This means that it has correctly loaded the entire base model, and has randomly initialized the classifier weights. Hope that helps! |
transformers | 5,963 | closed | Bert forward reports error on GPU; but runs fine on CPU | # ❓ Questions & Help
## Details
I would like to encode articles using Bert. My code runs fine on CPU, but failed on GPU. The GPU is a on a remote server.
```
def assign_gpu(token):
token_tensor = token['input_ids'].to('cuda')
token_typeid = token['token_type_ids'].to('cuda')
attention_mask = token['attention_mask'].to('cuda')
output = {'input_ids': token_tensor,
'token_type_ids': token_typeid,
'attention_mask': attention_mask}
return output
bs = 16
data_dl = DataLoader(PatentDataset(df), batch_size=bs, shuffle=False)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased/')
model = BertModel.from_pretrained('bert-base-uncased/')
inputs_list = []
for a in data_dl:
inputs_list.append(tokenizer.batch_encode_plus(a, pad_to_max_length=True, return_tensors='pt'))
# GPU part
model.cuda()
model.eval()
out_list = []
with torch.no_grad():
for i, inputs in enumerate(inputs_list):
inputs = assign_gpu(inputs)
output = model(**inputs)[0][:, 0, :]
out_list.append(output)
```
The error message is
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-3-f6979d077f60> in <module>
6 for i, inputs in enumerate(inputs_list):
7 inputs = assign_gpu(inputs)
----> 8 output = model(**inputs)[0][:, 0, :]
9 out_list.append(output)
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
530 result = self._slow_forward(*input, **kwargs)
531 else:
--> 532 result = self.forward(*input, **kwargs)
533 for hook in self._forward_hooks.values():
534 hook_result = hook(self, input, result)
~/volta_pypkg/lib/python3.6/site-packages/transformers/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, output_attentions, output_hidden_states)
751
752 embedding_output = self.embeddings(
--> 753 input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds
754 )
755 encoder_outputs = self.encoder(
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
530 result = self._slow_forward(*input, **kwargs)
531 else:
--> 532 result = self.forward(*input, **kwargs)
533 for hook in self._forward_hooks.values():
534 hook_result = hook(self, input, result)
~/volta_pypkg/lib/python3.6/site-packages/transformers/modeling_bert.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds)
180 token_type_embeddings = self.token_type_embeddings(token_type_ids)
181
--> 182 embeddings = inputs_embeds + position_embeddings + token_type_embeddings
183 embeddings = self.LayerNorm(embeddings)
184 embeddings = self.dropout(embeddings)
RuntimeError: CUDA error: device-side assert triggered
```
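A quick way to narrow this kind of failure down, sketched against the `inputs_list` and `model` from the snippet above (device-side asserts in the embedding sum usually point to an out-of-range index; running with `CUDA_LAUNCH_BLOCKING=1` also surfaces the exact failing op):
```python
# Sketch: check the tokenized batches on CPU before moving anything to the GPU.
for i, inputs in enumerate(inputs_list):
    ids = inputs["input_ids"]
    if ids.max().item() >= model.config.vocab_size:
        print(f"batch {i}: token id {ids.max().item()} is outside the vocabulary")
    if ids.shape[1] > model.config.max_position_embeddings:
        print(f"batch {i}: length {ids.shape[1]} exceeds max_position_embeddings")
```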
**A link to original question on Stack Overflow**: https://stackoverflow.com/questions/63026701/bert-forward-reports-error-on-gpu-but-runs-fine-on-cpu | 07-22-2020 05:51:56 | 07-22-2020 05:51:56 | CUDA errors are always very cryptic 😕.
Do you mind giving us an example that we can reproduce (e.g. with an article you're trying to encode that fails), so that we can see what's going on?<|||||>Thank you for the comment. I split my code into 2 script files: one is tokenize; the other is transform. Somehow it works. Probably it was caused by some issue in the remote server, or my custom data class. I will close the topic. Thank you for your time. |
transformers | 5,962 | closed | tensorflow转为pytorch的两个文件在哪里? | 
Hello:
Could you tell me where these two files are? discriminator.json and model.bin | 07-22-2020 03:47:23 | 07-22-2020 03:47:23 | If I understand correctly and you're asking what/where are those two files:
- `discriminator.json` is the configuration file for the model you want to convert
- `model.bin` is the location where the converted checkpoint will be saved. |
transformers | 5,961 | closed | [docs] Add integration test example to copy pasta template | Encourage testing practices that have been encouraged since last update, namely:
- @slow tests that run on cuda and fp16 if possible and show that your model produces good outputs.
- more tests ~= better
- unindent the ModelTester
- call get_extended_attention_mask and delete massive comment.
My code is probably broken because this thing isn't tested!
Add:
"""
Try to make this test take a string and check that a resultant string == desired_result using your tokenizer's encode and decode functions
"" | 07-22-2020 03:23:05 | 07-22-2020 03:23:05 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5961?src=pr&el=h1) Report
> Merging [#5961](https://codecov.io/gh/huggingface/transformers/pull/5961?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e714412fe6b38346a1f73525b701e030857b2f21&el=desc) will **decrease** coverage by `1.22%`.
> The diff coverage is `25.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5961?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5961 +/- ##
==========================================
- Coverage 78.50% 77.27% -1.23%
==========================================
Files 146 146
Lines 26214 26218 +4
==========================================
- Hits 20578 20259 -319
- Misses 5636 5959 +323
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5961?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/testing\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5961/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90ZXN0aW5nX3V0aWxzLnB5) | `72.72% <25.00%> (-3.75%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5961/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5961/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.95% <0.00%> (-1.26%)` | :arrow_down: |
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5961/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.82% <0.00%> (-0.29%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5961/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5961?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5961?src=pr&el=footer). Last update [e714412...a57f645](https://codecov.io/gh/huggingface/transformers/pull/5961?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,960 | closed | Adding Minimal Reproducible Usage Example For TPU support on examples/seq2seq | Attempt to resolve https://github.com/huggingface/transformers/issues/5895.
Minimal Working [Colab Example](https://colab.research.google.com/drive/16q2GWrnZ0Tjg1OxJQUcaWKCWwn3Jh5z0?usp=sharing).
For using more than a single core, one needs to ensure that enough RAM is available else wait for PyTorch-XLA to release a stable version. They have also [released](https://github.com/pytorch/xla/issues/1870#issuecomment-623603323) a fix way back that prevent excessive memory usage for nightly. | 07-22-2020 02:26:14 | 07-22-2020 02:26:14 | The test will obviously break, right?<|||||>Any progress on merging this?<|||||>they won't accept it as the checks have failed?
But it's expected for the checks to fail as I have modified `modeling_bart` and it's gonna have one more Param count.<|||||>Is that so @sshleifer?<|||||>Thanks for the contribution, this looks awesome!
We can't merge with failing tests, but I think the tests can pass.
Could you also check
```
RUN_SLOW=1 pytest tests/test_modeling_bart.py
RUN_SLOW=1 pytest tests/test_modeling_marian.py
RUN_SLOW=1 pytest test_modeling_mbart.py
```
add the `USE_CUDA=1` prefix to make them run faster on GPU.<|||||>Actually can we add a `support_tpu` flag to BartConfig, init it to False, and only allocate `lm_head` if it's set to True. I'm concerned that we are wasting RAM when we train on GPU. (I would happily change my mind if I encountered evidence that this change doesn't change GPU RAM consumption.)<|||||>I tried this version and it seems to work but it stucks at "Validation sanity check". Working colab [here](https://colab.research.google.com/drive/1NA9_EPEBNmo7feQ60iiznsLP_XbWwQmC?usp=sharing)<|||||>Well I removed the validation check altogether by passing in the concerned
flag to 0. Tried debugging to find out what's causing it, but I couldn't
figure it out.
If you will train and validate, it will work.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>This is now supported by `Seq2SeqTrainer`. Use that if you want TPU support! |
transformers | 5,959 | closed | Can't load weights of models | When I run below codes, I can successfully load the tokenizer but fail with loading the models.
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelWithLMHead.from_pretrained("bert-base-uncased")
Here is the error:
OSError: Can't load weights for 'bert-base-uncased'. Make sure that:
- 'bert-base-uncased' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'bert-base-uncased' is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt.
I can load models after manually downloading them but I do want to directly load them via transformers. | 07-22-2020 02:21:25 | 07-22-2020 02:21:25 | I think you can try code :
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
```<|||||>> I think you can try code :
>
> ```
> from transformers import BertTokenizer, BertModel
> tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
> model = BertModel.from_pretrained("bert-base-uncased")
> ```
still succeed in running the tokenizer but fail with the model<|||||>could be due to the website restrictions at my company<|||||>I have been banging my head on the same issue for a few days now.
Just observed, it downloads these weights from AWS links ( check configuration_bert.py or configuration_distilbert.py, you'll find these files in /Anaconda3/envs/envName/Lib/site-packages/transformers/ ) and my company blocks AWS and GCP links.
This, most likely, seems to be the issue.<|||||>> When I run below codes, I can successfully load the tokenizer but fail with loading the models.
> from transformers import AutoTokenizer, AutoModelWithLMHead
> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
> model = AutoModelWithLMHead.from_pretrained("bert-base-uncased")
>
> Here is the error:
> OSError: Can't load weights for 'bert-base-uncased'. Make sure that:
>
> * 'bert-base-uncased' is a correct model identifier listed on 'https://huggingface.co/models'
> * or 'bert-base-uncased' is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt.
>
> I can load models after manually downloading them but I do want to directly load them via transformers.
hello, I have the same problem. If I download the model manually, where should I put the file?<|||||>> > When I run below codes, I can successfully load the tokenizer but fail with loading the models.
> > from transformers import AutoTokenizer, AutoModelWithLMHead
> > tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
> > model = AutoModelWithLMHead.from_pretrained("bert-base-uncased")
> > Here is the error:
> > OSError: Can't load weights for 'bert-base-uncased'. Make sure that:
> >
> > * 'bert-base-uncased' is a correct model identifier listed on 'https://huggingface.co/models'
> > * or 'bert-base-uncased' is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt.
> >
> > I can load models after manually downloading them but I do want to directly load them via transformers.
>
> hello, I meet the same problem ,and if I download the model manually, and where I should put the file in ?
anywhere is ok, just put your file location in BertTokenizer.from_pretrained(location)<|||||>> > When I run below codes, I can successfully load the tokenizer but fail with loading the models.
> > from transformers import AutoTokenizer, AutoModelWithLMHead
> > tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
> > model = AutoModelWithLMHead.from_pretrained("bert-base-uncased")
> > Here is the error:
> > OSError: Can't load weights for 'bert-base-uncased'. Make sure that:
> >
> > * 'bert-base-uncased' is a correct model identifier listed on 'https://huggingface.co/models'
> > * or 'bert-base-uncased' is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt.
> >
> > I can load models after manually downloading them but I do want to directly load them via transformers.
>
> hello, I meet the same problem ,and if I download the model manually, and where I should put the file in ?
Where did you download the model manually from?<|||||>You can browse to https://huggingface.co/bert-base-uncased/tree/main for example and download pretrained models. |
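A minimal sketch of that offline workflow (the local path is an assumption; the folder just needs the config.json, vocab.txt and pytorch_model.bin downloaded from the link above):
```python
from transformers import BertTokenizer, BertModel

local_dir = "/path/to/bert-base-uncased"  # folder with the manually downloaded files
tokenizer = BertTokenizer.from_pretrained(local_dir)
model = BertModel.from_pretrained(local_dir)
```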
transformers | 5,958 | closed | Add functioning early stopping (patience) and weighted random sampling | This is to fix the issues in #4186 (hopefully) to get it merged in. Also adds weighted random sampling for imbalanced classes. | 07-22-2020 01:59:26 | 07-22-2020 01:59:26 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5958?src=pr&el=h1) Report
> Merging [#5958](https://codecov.io/gh/huggingface/transformers/pull/5958?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/09a2f40684f77e62d0fd8485fe9d2d610390453f&el=desc) will **decrease** coverage by `0.09%`.
> The diff coverage is `13.04%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5958?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5958 +/- ##
==========================================
- Coverage 78.49% 78.39% -0.10%
==========================================
Files 146 146
Lines 26210 26252 +42
==========================================
+ Hits 20573 20580 +7
- Misses 5637 5672 +35
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5958?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5958/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `35.50% <9.09%> (-2.34%)` | :arrow_down: |
| [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5958/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `78.00% <100.00%> (+0.44%)` | :arrow_up: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5958/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.25%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5958?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5958?src=pr&el=footer). Last update [09a2f40...cbc7c63](https://codecov.io/gh/huggingface/transformers/pull/5958?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>when this will be added to library?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,957 | closed | NoneType error when using Trainer | ## System Info
Pop!_OS 20.04
Pytorch: 1.5.1
Transformers: 2.11.0
Python: 3.7.6
## Background Info
I wasn't sure what the `training_dataset` parameter of `Trainer` was so I opted to create a custom Pytorch `DataSet` using this [tutorial](https://pytorch.org/tutorials/beginner/data_loading_tutorial.html).
```python
from torch.utils.data import Dataset
import pandas as pd
import torch
class SDAbstractsDataset(Dataset):
def __init__(self, csv_file):
self.sd_abstracts_df = pd.read_csv(csv_file, encoding='ISO-8859-1')
def __len__(self):
return len(self.sd_abstracts_df)
def __getitem__(self, idx):
if torch.is_tensor(idx):
idx = idx.tolist()
sample = {'abstract_text': self.sd_abstracts_df.iloc[idx, 1]}
return sample
```
I instantiate the `SDAbstractsDataset` object, create the `TrainingArguments` object, create an object based of a customized model, then instantiate the `Trainer` object with `sd_dataset`.
```python
from text_gen_w_transformers.finetune_gpt2 import GPT2FinetunedWithNgrams
from text_gen_w_transformers.custom_dataset import SDAbstractsDataset
from transformers import TrainingArguments, Trainer
sd_dataset = SDAbstractsDataset('/path/to/samples_64.csv')
training_args = TrainingArguments(
output_dir='/path/to/output/dir',
do_train=True,
per_device_train_batch_size=4,
learning_rate=1e-3,
num_train_epochs=1
)
model = GPT2FinetunedWithNgrams.from_pretrained('gpt2')
trainer = Trainer(
model=model,
args=training_args,
train_dataset=sd_dataset
)
trainer.train()
```
Whenever I run the `trainer.train()` command, I get the following error:
```python
Epoch: 0%| | 0/1 [00:00<?, ?it/s]
Iteration: 0%| | 0/16 [00:00<?, ?it/s]Traceback (most recent call last):
File "/path/to/project/finetune_test.py", line 37, in <module>
trainer.train()
File "/path/to/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 499, in train
tr_loss += self._training_step(model, inputs, optimizer)
File "/path/to/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 622, in _training_step
outputs = model(**inputs)
File "/path/to/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/path/to/project/finetune_gpt2.py", line 42, in forward
orig_input_str = self.tokenizer.decode(input_ids[0], skip_special_tokens=True)
TypeError: 'NoneType' object is not subscriptable
Epoch: 0%| | 0/1 [00:00<?, ?it/s]
Iteration: 0%| | 0/16 [00:00<?, ?it/s]
```
I did a little debugging and found that `inputs` in the line `tr_loss += self._training_step(model, inputs, optimizer)` was empty. Any thoughts on how to fix this?
Thanks in advance! | 07-22-2020 01:51:38 | 07-22-2020 01:51:38 | @aclifton314 just wondering if your custom dataset `__getitem__` is configured correctly?
i.e I'm thinking it should be returning `return {'data': x, 'target': y}` or `return x, y`<|||||>@danieldiamond Thank you for the response. That's a good point about `__getitem__`. The data in my csv file is structured in the following way:
```
Title, Raw
DistillBERT a distilled version of BERT smaller faster cheaper and lighter, As transfer learning from large-scale pretrained models becomes more prevalent in Natural Language Processing (NLP) blah blah blah.
```
I'm using the `Raw` columns in the csv as training data for my model, which inherits from `GPT2LMHeadModel`. I had assumed that there wouldn't be any need for targets for this model. But maybe there's something in `Trainer` that expects this particular type of format? In which case, I presume I could always set `y` to `None`?
I was reading [this post](https://discuss.huggingface.co/t/dataset-expected-by-trainer/148/2) on the HF forum and there is the following mention:
```
Make sure the dataloader returns the dict with same key values forward method expects.
Inside _training_step, you’ll pass inputs to the function, and then after the inputs are passed kept on gpu, the function does:
output = model(**inputs)
```
I also looked through the debugger again. `inputs` for `tr_loss += self._training_step(model, inputs, optimizer)` looks like it is created in `for step, inputs in enumerate(epoch_iterator):` (this is in `trainer.py`). Looking at `epoch_iterator`, the `iterable` attribute contains the data from `sd_dataset` (i.e. `epoch_iterator -> iterable -> dataset`).
It looks like the data is being carried through `trainer.py`, but I can't seem to figure out why `inputs` would be empty.<|||||>Your dataset returns keys that are not known to the model. Also, it doesn't seem to tokenize your texts?
It should return a dict with the expected argument of your models, including the labels (that way the model will return the loss for the trainer). I have no idea what the code of `GPT2FinetunedWithNgrams` looks like, so that will depend on that.
Note that `Trainer` is not an abstract training loop for all DL problems, it's customized to work with the transformers models, so your model should behave the same way as HF models if you want to use it.<|||||>@sgugger incorporating your changes and upgrading to Transformers 3.0.2 solved the problem for me. I've got a long thread about it [here](https://discuss.huggingface.co/t/finetuning-gpt2-with-user-defined-loss/163/30).
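For reference, a minimal sketch of the kind of dataset the explanation above describes (the column index, max_length and padding strategy are illustrative, not the code from my project):
```python
from torch.utils.data import Dataset
import pandas as pd

class SDAbstractsDataset(Dataset):
    def __init__(self, csv_file, tokenizer):
        self.df = pd.read_csv(csv_file, encoding="ISO-8859-1")
        self.tokenizer = tokenizer
        if self.tokenizer.pad_token is None:  # GPT-2 has no pad token by default
            self.tokenizer.pad_token = self.tokenizer.eos_token

    def __len__(self):
        return len(self.df)

    def __getitem__(self, idx):
        enc = self.tokenizer(
            self.df.iloc[idx, 1],
            truncation=True,
            max_length=128,
            padding="max_length",
            return_tensors="pt",
        )
        input_ids = enc["input_ids"].squeeze(0)
        attention_mask = enc["attention_mask"].squeeze(0)
        labels = input_ids.clone()
        labels[attention_mask == 0] = -100  # ignore padding when computing the loss
        # Keys match what a GPT2LMHeadModel forward expects, so Trainer gets a loss back.
        return {"input_ids": input_ids, "attention_mask": attention_mask, "labels": labels}
```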
Thanks for the help! |
transformers | 5,956 | closed | [CI] Install examples/requirements.txt | 07-22-2020 00:58:52 | 07-22-2020 00:58:52 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5956?src=pr&el=h1) Report
> Merging [#5956](https://codecov.io/gh/huggingface/transformers/pull/5956?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e714412fe6b38346a1f73525b701e030857b2f21&el=desc) will **increase** coverage by `0.01%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5956?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5956 +/- ##
==========================================
+ Coverage 78.50% 78.51% +0.01%
==========================================
Files 146 146
Lines 26214 26214
==========================================
+ Hits 20578 20581 +3
+ Misses 5636 5633 -3
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5956?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5956/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.82% <0.00%> (-0.29%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5956/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+1.00%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5956?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5956?src=pr&el=footer). Last update [e714412...ca424ca](https://codecov.io/gh/huggingface/transformers/pull/5956?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>merging to rerun. |
|
transformers | 5,955 | closed | module 'tensorflow_core._api.v2.config' has no attribute 'list_physical_devices' | # 🐛 Bug
A bunch of tf tests fail with:
```module 'tensorflow_core._api.v2.config' has no attribute 'list_physical_devices'```
```
ERROR tests/test_benchmark_tf.py - AttributeError: module 'tensorflow_core._api.v2.config' has no attribute 'list_physical_devices'
ERROR tests/test_benchmark_tf.py - AttributeError: module 'tensorflow_core._api.v2.config' has no attribute 'list_physical_devices'
ERROR tests/test_benchmark_tf.py - AttributeError: module 'tensorflow_core._api.v2.config' has no attribute 'list_physical_devices'
ERROR tests/test_benchmark_tf.py - AttributeError: module 'tensorflow_core._api.v2.config' has no attribute 'list_physical_devices'
```
`tf.config.list_physical_devices` seems to be added in tf-2.1, so unless `transformers` starts to require tf >= 2.1, this breaks for tf < 2.1.
Depending on what you decide I can send a PR to fix this with either:
a. require tf-2.1+ (simplest)
b. write a wrapper `list_physical_devices` that uses `tf.config.experimental.list_physical_devices` for tf < 2.1, and `tf.config.list_physical_devices` otherwise, and switch to using it (a sketch is shown below).
c. do nothing?
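A rough sketch of the wrapper in option (b) (the helper name and placement are assumptions, not an existing `transformers` function):
```python
import tensorflow as tf

def list_physical_devices(device_type=None):
    # tf >= 2.1 exposes this directly; older 2.x only has the experimental variant.
    if hasattr(tf.config, "list_physical_devices"):
        return tf.config.list_physical_devices(device_type)
    return tf.config.experimental.list_physical_devices(device_type)
```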
## Environment info
```
- `transformers` version: 3.0.2
- Platform: Linux-4.15.0-109-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.5
- PyTorch version (GPU?): 1.5.1 (True)
- Tensorflow version (GPU?): 2.0.1 (False)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
| 07-21-2020 23:48:55 | 07-21-2020 23:48:55 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,954 | closed | [pack_dataset] don't sort before packing, only pack train | this is better than sorting.
But for best metrics, don't pack. | 07-21-2020 21:03:39 | 07-21-2020 21:03:39 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5954?src=pr&el=h1) Report
> Merging [#5954](https://codecov.io/gh/huggingface/transformers/pull/5954?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d1d15d6f2de9e2cde48ff3ea2072add3311ce2ac&el=desc) will **increase** coverage by `0.03%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5954?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5954 +/- ##
==========================================
+ Coverage 78.52% 78.56% +0.03%
==========================================
Files 146 146
Lines 26314 26314
==========================================
+ Hits 20664 20674 +10
+ Misses 5650 5640 -10
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5954?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5954/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.48% <0.00%> (+0.28%)` | :arrow_up: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5954/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+2.25%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5954?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5954?src=pr&el=footer). Last update [d1d15d6...97c473a](https://codecov.io/gh/huggingface/transformers/pull/5954?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,953 | closed | CL util to convert models to fp16 before upload | This should probably go to `commands/` | 07-21-2020 20:16:37 | 07-21-2020 20:16:37 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5953?src=pr&el=h1) Report
> Merging [#5953](https://codecov.io/gh/huggingface/transformers/pull/5953?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d1d15d6f2de9e2cde48ff3ea2072add3311ce2ac&el=desc) will **increase** coverage by `0.03%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5953?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5953 +/- ##
==========================================
+ Coverage 78.52% 78.56% +0.03%
==========================================
Files 146 146
Lines 26314 26314
==========================================
+ Hits 20664 20674 +10
+ Misses 5650 5640 -10
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5953?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.48% <0.00%> (+0.28%)` | :arrow_up: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+2.25%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5953?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5953?src=pr&el=footer). Last update [d1d15d6...055e604](https://codecov.io/gh/huggingface/transformers/pull/5953?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,952 | closed | Create README.md | 07-21-2020 19:43:20 | 07-21-2020 19:43:20 | ||
transformers | 5,951 | closed | Create README.md | 07-21-2020 19:34:39 | 07-21-2020 19:34:39 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5951?src=pr&el=h1) Report
> Merging [#5951](https://codecov.io/gh/huggingface/transformers/pull/5951?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/95d1962b9c8460b4cec5a88eb9915e8e25f5bc1e&el=desc) will **decrease** coverage by `1.41%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5951?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5951 +/- ##
==========================================
- Coverage 78.69% 77.27% -1.42%
==========================================
Files 146 146
Lines 26214 26214
==========================================
- Hits 20628 20258 -370
- Misses 5586 5956 +370
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5951?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5951/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5951/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5951/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.95% <0.00%> (-2.26%)` | :arrow_down: |
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5951/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.82% <0.00%> (-0.29%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5951/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5951/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5951?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5951?src=pr&el=footer). Last update [95d1962...89670f7](https://codecov.io/gh/huggingface/transformers/pull/5951?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 5,950 | closed | seq2seq: checkpoint callback seems messed up | 
^^ lightning checkpoint saved much later than `best_tfmr` | 07-21-2020 19:24:02 | 07-21-2020 19:24:02 | wrong thats just the directory. contents make more sense. |
transformers | 5,949 | closed | seq2seq/run_eval.py can take decoder_start_token_id | Also: batch_decode: document kwargs for autocomplete | 07-21-2020 19:11:20 | 07-21-2020 19:11:20 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5949?src=pr&el=h1) Report
> Merging [#5949](https://codecov.io/gh/huggingface/transformers/pull/5949?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/95d1962b9c8460b4cec5a88eb9915e8e25f5bc1e&el=desc) will **decrease** coverage by `0.17%`.
> The diff coverage is `50.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5949?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5949 +/- ##
==========================================
- Coverage 78.69% 78.51% -0.18%
==========================================
Files 146 146
Lines 26214 26214
==========================================
- Hits 20628 20581 -47
- Misses 5586 5633 +47
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5949?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5949/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.60% <50.00%> (ø)` | |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5949/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5949/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.82% <0.00%> (-0.29%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5949/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5949?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5949?src=pr&el=footer). Last update [95d1962...450d753](https://codecov.io/gh/huggingface/transformers/pull/5949?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,948 | closed | Exporting T5 to ONNX | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): t5-small (T5ForConditionalGeneration)
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ x ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The problem arises when running `convert_graph_to_onnx.py`
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ x ] my own task or dataset: (give details below)
I am using a T5ForConditionalGeneration for machine translation.
## To reproduce
Steps to reproduce the behavior:
1. Run `python transformers/convert_graph_to_onnx.py --framework pt --model t5-small --tokenizer t5-small --opset 12 t5-small.onnx`
```
ONNX opset version set to: 12
Loading pipeline (model: t5-small, tokenizer: t5-small)
/Users/joshuasirota/onnx_env/lib/python3.6/site-packages/transformers/modeling_auto.py:798: FutureWarning: The class `AutoModelWithLMHead` is deprecated and will be removed in a future version. Please use `AutoModelForCausalLM` for causal language models, `AutoModelForMaskedLM` for masked language models and `AutoModelForSeq2SeqLM` for encoder-decoder models.
FutureWarning,
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 242M/242M [01:09<00:00, 3.50MB/s]
Some weights of T5ForConditionalGeneration were not initialized from the model checkpoint at t5-small and are newly initialized: ['encoder.embed_tokens.weight', 'decoder.embed_tokens.weight', 'lm_head.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Using framework PyTorch: 1.5.1
Error while converting the model: You have to specify either decoder_input_ids or decoder_inputs_embeds
```
## Expected behavior
An ONNX export should be created
## Environment info
- `transformers` version: 3.0.2
- Platform: Darwin-18.6.0-x86_64-i386-64bit
- Python version: 3.6.5
- PyTorch version (GPU?): 1.5.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 07-21-2020 19:07:18 | 07-21-2020 19:07:18 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Could you please provide your modified script?<|||||>fwiw, I'm seeing the same error when trying to export a T5 model for translation with both PyTorch and TensorFlow and the latest version of Transformers (3.3.1).
```txt
python3 path/to/convert_graph_to_onnx.py --model t5-base translation_en_to_de.onnx --pipeline translation_en_to_de --framework pt
====== Converting model to ONNX ======
ONNX opset version set to: 11
Loading pipeline (model: t5-base, tokenizer: t5-base)
Using framework PyTorch: 1.6.0
Error while converting the model: You have to specify either decoder_input_ids or decoder_inputs_embeds
```
```txt
python3 path/to/convert_graph_to_onnx.py --model t5-base translation_en_to_de.onnx --pipeline translation_en_to_de --framework tf
====== Converting model to ONNX ======
ONNX opset version set to: 11
Loading pipeline (model: t5-base, tokenizer: t5-base)
/usr/local/lib/python3.8/site-packages/transformers/modeling_tf_auto.py:690: FutureWarning: The class `TFAutoModelWithLMHead` is deprecated and will be removed in a future version. Please use `TFAutoModelForCausalLM` for causal language models, `TFAutoModelForMaskedLM` for masked language models and `TFAutoModelForSeq2SeqLM` for encoder-decoder models.
warnings.warn(
2020-09-30 16:29:58.866842: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-09-30 16:29:58.883705: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7fe05e6c8780 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-09-30 16:29:58.883726: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2020-09-30 16:29:58.893052: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
All model checkpoint weights were used when initializing TFT5ForConditionalGeneration.
All the weights of TFT5ForConditionalGeneration were initialized from the model checkpoint at t5-base.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFT5ForConditionalGeneration for predictions without further training.
/!\ Please note TensorFlow doesn't support exporting model > 2Gb /!\
Using framework TensorFlow: 2.3.1, keras2onnx: 1.7.0
Error while converting the model: You have to specify either inputs or inputs_embeds
```<|||||>Has anyone solved the issue??
<|||||>Having this issue too, has anyone found a workaround?<|||||>I had the same issue but[ this](https://stackoverflow.com/a/66117248/13273054) post gave an insight on what was causing the error.<|||||>I had the same issue. Has anyone solved the issue?
Thanks!<|||||>@suyuzhang @Anku5hk @howlinghuffy @ankane @LowinLi please have a look at the [fastT5 ](https://github.com/Ki6an/fastT5) library. it **exports** any t5 model to onnx, quantized it, runs it on onnxruntime. you can speed up the t5 models up to 5x and can reduce the model size to 3x. for more info check out the repo [here ](https://github.com/Ki6an/fastT5).<|||||>Looks appropriate, Thanks!<|||||>I had the same issue. Has anyone solved the issue?
Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
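For anyone landing here, a hedged illustration of what the error is about: the conversion script only feeds encoder inputs, while a seq2seq forward pass also needs `decoder_input_ids`, so any traced or exported graph has to receive them explicitly (the prompt and model size below are just examples):
```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small").eval()

enc = tokenizer("translate English to German: Hello", return_tensors="pt")
decoder_input_ids = torch.full(
    (1, 1), model.config.decoder_start_token_id, dtype=torch.long
)

with torch.no_grad():
    outputs = model(
        input_ids=enc["input_ids"],
        attention_mask=enc["attention_mask"],
        decoder_input_ids=decoder_input_ids,  # without this, the same error appears
    )
```
Libraries such as fastT5 (linked above) build on this by exporting the encoder and decoder as separate ONNX graphs.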
transformers | 5,947 | closed | Test on a new string of gpt2 fine tuned | I fine-tuned GPT-2 on my own dataset; now that I have the fine-tuned model, how can I test it on a new string?
GPT-2/GPT and causal language modeling
The following example fine-tunes GPT-2 on WikiText-2. We're using the raw WikiText-2 (no tokens were replaced before the tokenization). The loss here is that of causal language modeling.
```
export TRAIN_FILE=/path/to/dataset/wiki.train.raw
export TEST_FILE=/path/to/dataset/wiki.test.raw
python run_language_modeling.py \
--output_dir=output \
--model_type=gpt2 \
--model_name_or_path=gpt2 \
--do_train \
--train_data_file=$TRAIN_FILE \
--do_eval \
--eval_data_file=$TEST_FILE
``` | 07-21-2020 18:59:16 | 07-21-2020 18:59:16 | I have these files generated
<img width="1389" alt="Screenshot 2020-07-22 at 12 39 19 AM" src="https://user-images.githubusercontent.com/33617789/88096141-d1f06a80-cbb3-11ea-8468-9284fb074bef.png">
<|||||>can you please provide a code snippet<|||||>Not sure what you mean by test, do you want to generate using the model or calculate loss and perplexity for the test data ?
For generation you can use the `.generate` method. This [blog post ](https://huggingface.co/blog/how-to-generate) explains generate very well.<|||||>got it thanks :)<|||||>Closing this as the issue is solved. Feel free to re-open if you still face issues. |
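For completeness, a hedged sketch of that suggestion: load the fine-tuned checkpoint from the `output` directory created by the script above and call `generate` on a new prompt (the sampling parameters are arbitrary):
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("output")
model = GPT2LMHeadModel.from_pretrained("output")

input_ids = tokenizer.encode("A new test string", return_tensors="pt")
generated = model.generate(input_ids, max_length=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```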
transformers | 5,946 | closed | Update doc to new model outputs | This is a follow-up from #5438, adapting doc pages and examples. | 07-21-2020 18:38:32 | 07-21-2020 18:38:32 | The other problem is that it downloads all models since it tests all examples. The PR fixes some docstrings so I know some of them were broken.
Side note, the correct command is:
```
RUN_SLOW=yes pytest tests/test_doc_samples.py
```
since they are all marked as slow.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5946?src=pr&el=h1) Report
> Merging [#5946](https://codecov.io/gh/huggingface/transformers/pull/5946?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/604a2355dc61f2888d68aab3adb0c5b648a4f42d&el=desc) will **increase** coverage by `0.03%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5946?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5946 +/- ##
==========================================
+ Coverage 78.47% 78.51% +0.03%
==========================================
Files 146 146
Lines 26214 26214
==========================================
+ Hits 20572 20582 +10
+ Misses 5642 5632 -10
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5946?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5946/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.19% <ø> (ø)` | |
| [src/transformers/modeling\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5946/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `82.04% <ø> (ø)` | |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5946/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.74% <ø> (ø)` | |
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5946/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.39% <ø> (ø)` | |
| [src/transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5946/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.82% <ø> (ø)` | |
| [src/transformers/modeling\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/5946/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kcHIucHk=) | `97.83% <ø> (ø)` | |
| [src/transformers/modeling\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5946/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `81.55% <ø> (ø)` | |
| [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5946/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `85.88% <ø> (ø)` | |
| [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/5946/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `89.21% <ø> (ø)` | |
| [src/transformers/modeling\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5946/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `89.45% <ø> (ø)` | |
| ... and [4 more](https://codecov.io/gh/huggingface/transformers/pull/5946/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5946?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5946?src=pr&el=footer). Last update [604a235...766911a](https://codecov.io/gh/huggingface/transformers/pull/5946?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Quite a few of those tests are failing but it appears TensorFlow outputs are unreliable: they are printed at full precision and my guess is that depending on your GPU, you get some different digits starting at 1e-6. Will discuss with @LysandreJik on how to make this more reliable when he's back, in the meantime, merging this one. |
transformers | 5,945 | closed | consistently use True/False for `return_tuple` | …
consistently use True/False for `return_tuple` (currently it's sometimes None, sometimes False) | 07-21-2020 18:22:00 | 07-21-2020 18:22:00 | Mmm, it should be `None` all the time except if the model doesn't have an inner config. The current behavior for all other flags (`output_hidden_states` and `output_attentions`) is that passing an argument supersedes the config. So for instance, passing `return_tuple=False` to the model even if `config.return_tuple = True` should end up with no `return_tuple`.<|||||>But why is there a need for a 3rd state, why not just default to `False`?<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5945?src=pr&el=h1) Report
> Merging [#5945](https://codecov.io/gh/huggingface/transformers/pull/5945?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/604a2355dc61f2888d68aab3adb0c5b648a4f42d&el=desc) will **decrease** coverage by `0.17%`.
> The diff coverage is `98.85%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5945?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5945 +/- ##
==========================================
- Coverage 78.47% 78.29% -0.18%
==========================================
Files 146 146
Lines 26214 26214
==========================================
- Hits 20572 20525 -47
- Misses 5642 5689 +47
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5945?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/5945/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jYW1lbWJlcnQucHk=) | `100.00% <ø> (ø)` | |
| [src/transformers/modeling\_mmbt.py](https://codecov.io/gh/huggingface/transformers/pull/5945/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tbWJ0LnB5) | `24.10% <0.00%> (ø)` | |
| [src/transformers/modeling\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5945/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG1fcm9iZXJ0YS5weQ==) | `100.00% <ø> (ø)` | |
| [src/transformers/modeling\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5945/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `82.04% <100.00%> (ø)` | |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5945/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.74% <100.00%> (ø)` | |
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5945/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.39% <100.00%> (ø)` | |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5945/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.37% <100.00%> (ø)` | |
| [src/transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5945/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.82% <100.00%> (ø)` | |
| [src/transformers/modeling\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/5945/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kcHIucHk=) | `97.83% <100.00%> (ø)` | |
| [src/transformers/modeling\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5945/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `81.55% <100.00%> (ø)` | |
| ... and [18 more](https://codecov.io/gh/huggingface/transformers/pull/5945/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5945?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5945?src=pr&el=footer). Last update [604a235...f5be8a4](https://codecov.io/gh/huggingface/transformers/pull/5945?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>That is the way those flags are set and consistency with the others is less likely to surprise users:
`None` -> use the value in config (defaulting to `False`)
`False` -> force-use `return_tuple=False` even if the config says `True`
`True` -> force-use `return_tuple=True` even if the config says `False`
One can imagine the user has set the config a certain way but needs to bypass it. Returning a tuple for a punctual conversion to ONNX for instance, or not returning one on a model set for jit-tracing/ONNX just to get the documented output while doing a punctual test.<|||||>I agree that nullable boolean are a good API design 💯 <|||||>Thank you for your explanation, @sgugger! |
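For reference, a hedged sketch of the three-state resolution described above (the exact config attribute name is illustrative, not necessarily what the models use):

```python
def resolve_return_tuple(return_tuple, config):
    if return_tuple is not None:          # an explicit True/False from the caller wins
        return return_tuple
    return getattr(config, "use_return_tuple", False)  # otherwise defer to the config
```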
transformers | 5,944 | closed | process stuck at LineByLineTextDataset. training not starting | # ❓ Questions & Help
I am using the Python code below:
```python
from tokenizers import ByteLevelBPETokenizer
from transformers import AutoModelWithLMHead, BertTokenizerFast, LineByLineTextDataset

# BASE_MODEL / CACHE_DIR / paths / DATA_FILE are user-provided values kept from the original report
BASE_MODEL = "/data/nlp/bert-base-uncased"
CACHE_DIR = "/data/nlp/cache"
model = AutoModelWithLMHead.from_pretrained(BASE_MODEL, cache_dir=CACHE_DIR)

# train a new byte-level BPE vocabulary on the raw text files
t_tokenizer = ByteLevelBPETokenizer()
t_tokenizer.train(files=paths, vocab_size=52_000, min_frequency=2, special_tokens=[
    "<s>",
    "<pad>",
    "</s>",
    "<unk>",
    "<mask>",
])
new_vocab = t_tokenizer.get_vocab()

# add the new tokens to the pretrained tokenizer and resize the model embeddings to match
tokenizer = BertTokenizerFast.from_pretrained(BASE_MODEL)
num_added_toks = tokenizer.add_tokens(list(new_vocab.keys()))
model.resize_token_embeddings(len(tokenizer))

dataset = LineByLineTextDataset(tokenizer=tokenizer, file_path=DATA_FILE, block_size=64)
```
It works fine for small files, but for a file > 600 MB the process gets stuck at `dataset = LineByLineTextDataset(tokenizer=tokenizer, file_path=DATA_FILE, block_size=64)`.
| 07-21-2020 17:06:31 | 07-21-2020 17:06:31 | Hi, this is because it's tokenizing the entire dataset in a single thread, so it's bound to be a bit slow. This class caches the result though, so you will only have to do this step once! |
transformers | 5,943 | closed | [Doc] explaining romanian postprocessing for MBART BLEU hacking | 07-21-2020 16:22:32 | 07-21-2020 16:22:32 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5943?src=pr&el=h1) Report
> Merging [#5943](https://codecov.io/gh/huggingface/transformers/pull/5943?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ccbf74a685ae24bd1a0ba1325e4e9a9d62bbb2fa&el=desc) will **decrease** coverage by `0.00%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5943?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5943 +/- ##
==========================================
- Coverage 77.31% 77.31% -0.01%
==========================================
Files 146 146
Lines 26214 26214
==========================================
- Hits 20268 20267 -1
- Misses 5946 5947 +1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5943?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5943/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.19% <0.00%> (-0.30%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5943?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5943?src=pr&el=footer). Last update [ccbf74a...163ba92](https://codecov.io/gh/huggingface/transformers/pull/5943?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>@sshleifer just a quick comment on this, from here: https://github.com/pytorch/fairseq/issues/1758#issuecomment-625214318
for EN to RO, there is no point of maximising BLEU by removing diacritics when in real world (WMT human evaluation + SMT Matrix) it is clearly compared with a reference which HAS diacritics. But lot of papers do not do the things right.<|||||>That makes sense @vince62s! This is mostly so I can have a link to paste into a github issue when people ask me why their BLEU score is 27 :)
|
|
transformers | 5,942 | closed | Converting GPT2 logits to token ids directly | ## System Info
Pop!_OS 20.04
Pytorch: 1.5.1
Transformers: 2.11.0
Python: 3.7.6
## Background Info
Here is the constructor and forward method for the model I am trying to build. Ultimately it will finetune GPT2 with a loss from a ngrams model I made. The loss isn't implemented yet because I want to test the GPT2 generation part first.
```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer


# GPT-2 LM head model that will eventually be fine-tuned with an n-gram loss
class GPT2FinetunedWithNgrams(GPT2LMHeadModel):
def __init__(self, config):
super().__init__(config)
self.tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
def forward(
self,
input_ids=None,
past=None,
attention_mask=None,
token_type_ids=None,
position_ids=None,
head_mask=None,
inputs_embeds=None,
labels=None,
use_cache=True,
):
temperature = 0.85
tmp_input_ids = input_ids
max_gen_length = 30
counter = 0
orig_input_str = self.tokenizer.decode(input_ids[0], skip_special_tokens=True)
strs_to_join = orig_input_str.split()
while counter < max_gen_length:
transformer_outputs = self.transformer(
tmp_input_ids,
past=past,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
position_ids=position_ids,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
use_cache=use_cache,
)
hidden_states = transformer_outputs[0]
lm_logits = self.lm_head(hidden_states) / (temperature)
last_token = lm_logits[:, -1]
last_token_softmax = torch.softmax(last_token, dim=-1).squeeze()
next_token = torch.argmax(last_token_softmax).tolist()
next_gen_token_str = self.tokenizer.decode(next_token, clean_up_tokenization_spaces=True).strip()
strs_to_join.append(next_gen_token_str)
new_str_input = ' '.join(strs_to_join)
tmp_input_ids = self.tokenizer.encode(new_str_input, return_tensors='pt')
counter += 1
return new_str_input
```
Right now the code will take the `lm_logits`, calculate the softmax, and then get the next token predicted by GPT2. I then add that next token to the original input sequence and feed that combination back into GPT2, until the `max_gen_length` is reached. Finally it returns the original input sequence with the generated sequence appended to it.
## Question 1
Is there a way to directly go from logits to token ids that I am missing in HF Transformers? Or better yet, is there a better, more efficient way of doing this?
## Question 2
Given that I am using things like a `max_length` and `temperature`, I feel like the `generate()` method would be useful here. However, I am not exactly sure how to implement that. Since ultimately I want my model to finetune GPT2 based off a ngrams loss, I don't know if calling `generate()` will call that method from the GPT2 model that is involved in the finetuning or some fixed GPT2 model coming elsewhere. Part of this is my nonfamiliarity with how pytorch trains (which I'm working on understanding better).
Said differently, if I can call `generate()` within the above forward method (I imagine something like `super().generate()`) will that generate a sequence using the GPT2 model whose weights are currently being modified based off my dataset and finetuning, or will it generate a sequence from some static version of GPT2 that is not having its weights modified?
I hope those questions aren't too convoluted. I can elaborate more if needed. Thanks in advance for your help!
| 07-21-2020 15:58:13 | 07-21-2020 15:58:13 | Hi! This is a very interesting question, I think you would have additional answers if you asked it over on the forums: https://discuss.huggingface.co |
transformers | 5,941 | closed | a transparent solution for DataParallel.gather not supporting ModelOutput (dataclass) | 1. Modify torch/nn/parallel/scatter_gather.gather function to support ModelOutput (dataclass) outputs. We override the torch.nn.DataParallel.gather method with this custom method.
2. remove previously committed workarounds
implementation: @sgugger
integration/testing: me
This should transparently solve https://github.com/huggingface/transformers/issues/5693
| 07-21-2020 15:54:25 | 07-21-2020 15:54:25 | This solves the problems encountered after the model outputs PR and doesn't break anything in existing PyTorch behavior. I don't know how the rest of the team feels about patching over libraries methods though.
Other possible fixes are:
- breaking changes and just use dicts everywhere (supported by DataParallel and supported by JIT in PyTorch 1.6.0)
- adding a hook at init of `PretrainedModel` to check at the first forward pass if the model has been wrapped in a DataParallel container and setting return_tuple in that case (but that's kind of ugly).
- manually testing in every model during the forward pass if it has been wrapped in a DataParallel container and setting return_tuple in that case (by changing the test that sets `return_tuple`).<|||||>One extra note to @sgugger's comments is that even if pytorch fixes `gather` to support `dataclasses` - converting them to dicts, it still won't be sufficient, since we have models where the output dataclass has optional members (w/o defaults) followed by required members, e.g. `MaskedLMOutput`, so such possibly modified by pytorch `gather` would fail here, unless the optional members are looked up and `None` is passed to the constructor - I guess this could be done in pytorch's `gather` as well.
Perhaps coming up with a pure `dataclasses` and not `ModelOutput`-specific implementation, that can be given to the pytorch team? and then this workaround will be needed only for older pytorch versions. <|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5941?src=pr&el=h1) Report
> Merging [#5941](https://codecov.io/gh/huggingface/transformers/pull/5941?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/32883b310ba30d72e67bb2ebb5847888f03a90a8&el=desc) will **decrease** coverage by `0.05%`.
> The diff coverage is `19.23%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5941?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5941 +/- ##
==========================================
- Coverage 78.51% 78.46% -0.06%
==========================================
Files 146 146
Lines 26214 26236 +22
==========================================
+ Hits 20583 20585 +2
- Misses 5631 5651 +20
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5941?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5941/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `38.58% <ø> (+0.12%)` | :arrow_up: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5941/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `76.73% <19.23%> (-4.77%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5941/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.26%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5941?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5941?src=pr&el=footer). Last update [32883b3...528758e](https://codecov.io/gh/huggingface/transformers/pull/5941?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thinking of it a little bit more, if PyTorch starts to support dataclasses in `gather`, we can then drop the patching by having `= None` for every attribute of every type of `ModelOuput` (as is done in #5740).<|||||>Since the model outputs had the other problem of not working with TensorFlow, we are going on a different road (see [the forum](https://discuss.huggingface.co/t/new-model-output-types/195/8)). |
transformers | 5,940 | closed | What is the difference between the function of add_tokens() and add_special_tokens() in tokenizer | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
When I read the code of tokenizer, I have a problem if I want to use a pretrained model in NMT task, I need to add some tag tokens, such as '2English' or '2French'. I think these tokens are special tokens, so which function should I use: add_tokens() or add_special_tokens(). What is the difference between them?
| 07-21-2020 15:29:14 | 07-21-2020 15:29:14 | For some reasons those functions do not appear in the documentation, will have a look at why. The docstrings state of `add_special_tokens` states:
> Add a dictionary of special tokens (eos, pad, cls...) to the encoder and link them to class attributes. If special tokens are NOT in the vocabulary, they are added to it (indexed starting from the last index of the current vocabulary).
> Using `add_special_tokens` will ensure your special tokens can be used in several ways:
> - special tokens are carefully handled by the tokenizer (they are never split)
> - you can easily refer to special tokens using tokenizer class attributes like `tokenizer.cls_token`. This makes it easy to develop model-agnostic training and fine-tuning scripts.
Though the second point would not apply in your case. The docstring of `add_tokens` states:
> Add a list of new tokens to the tokenizer class. If the new tokens are not in the vocabulary, they are added to it with indices starting from length of the current vocabulary.
When possible, special tokens are already registered for provided pretrained models (ex: BertTokenizer cls_token is already registered to be '[CLS]' and XLM's one is also registered to be '</s>')
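For illustration, a minimal sketch of both calls (the `2English`/`2French` tags come from the question above; the other names are just placeholders):

```python
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

# plain new tokens: added to the vocabulary, but otherwise treated like regular tokens
tokenizer.add_tokens(["2English", "2French"])

# special tokens: added to the vocabulary AND never split by the tokenizer
tokenizer.add_special_tokens({"additional_special_tokens": ["2English", "2French"]})

# in both cases the embedding matrix has to be resized afterwards
model.resize_token_embeddings(len(tokenizer))
```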
Hope that helps!<|||||>Thanks a lot for your help. Therefore, if I use add_tokens(), may the tokens still be split?<|||||>I would also appreciate some clarification on the difference between the two functions and when to use which one. |
transformers | 5,939 | closed | Can't use BatchEncoding in the fit function | I had a [few problems with the `transformers`](https://github.com/huggingface/transformers/issues/5555) and the related [`tensorflow` functionality](https://github.com/tensorflow/tensorflow/issues/41204), but now I've made some progress using trial and error method.
You can see the issue in this [gist](https://colab.research.google.com/drive/125jJ0qrXGIe6goNrH_Ja7XPZtYp7nMXU?usp=sharing), now I found out that the problem was due to `model.fit(...)` function.
Here is the first version which causes the error:
```
model.fit(
x=X_train, #transformers.tokenization_utils_base.BatchEncoding
y=targets,
epochs=3
)
```
Output:
```
ValueError: too many values to unpack (expected 2)
```
Second version which at least works:
```
model.fit(
x=X_train.values(), #dict_values
y=targets,
epochs=3
)
```
**So, now the main question is why does it work and is my code correct?** I'm asking because I have seen many code fragments which used `BatchEncoding` as `x` in the `fit`(e.g. [this](https://www.kaggle.com/definedennis/pretrained-bert-with-huggingface-tensorflow-2-1/) and [this](https://github.com/thakursc1/NLPKernels/blob/3bb1fcac60ab8cdc6f2f68a2d9b5b7a477873811/DisasterTweetClassification/Transformer.py)), but I just can't do the same thing. | 07-21-2020 15:20:49 | 07-21-2020 15:20:49 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
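A hedged sketch (not verified against the linked gist): since `BatchEncoding` behaves like a dict, converting it to a plain dict keyed by the input names is another common pattern, assuming the Keras model's inputs are named to match the tokenizer's keys (`input_ids`, `attention_mask`, ...):
```
model.fit(
    x=dict(X_train),  # plain dict instead of the BatchEncoding wrapper
    y=targets,
    epochs=3
)
```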
|
transformers | 5,938 | closed | How does AdamW weight_decay works for L2 regularization? | Hello I have some questions about weight regularization in Adam.
Apparently the `weight_decay` in the AdamW function https://huggingface.co/transformers/main_classes/optimizer_schedules.html#adamw-pytorch has the same impact as an `L2 regularization`.
My questions are: is that parameter the same as `lambda` that we have in the regularization term?

How does it exactly work? And what is its impact on the model complexity? | 07-21-2020 15:09:38 | 07-21-2020 15:09:38 | Hi there. General questions like this are probably better asked on the [forum](https://discuss.huggingface.co/). There is a research category that is exactly fitted for this. The `weight_decay` does correspond to the lambda you mention, though it's applied directly to the gradient, to avoid wasting compute with this huge some of all the weights scared.
You can also look at the [AdamW paper](https://arxiv.org/abs/1711.05101) for more information. <|||||>@sgugger thank you for your answer. I will check out the paper. |
transformers | 5,937 | closed | typos in seq2seq/readme | same-as-title. | 07-21-2020 13:01:33 | 07-21-2020 13:01:33 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5937?src=pr&el=h1) Report
> Merging [#5937](https://codecov.io/gh/huggingface/transformers/pull/5937?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d32279438a73e71961f53baa4fb47d0f08c2984d&el=desc) will **decrease** coverage by `0.94%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5937?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5937 +/- ##
==========================================
- Coverage 78.25% 77.31% -0.95%
==========================================
Files 146 146
Lines 26214 26214
==========================================
- Hits 20515 20268 -247
- Misses 5699 5946 +247
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5937?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5937/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5937/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5937/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+0.25%)` | :arrow_up: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5937/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.49% <0.00%> (+0.29%)` | :arrow_up: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5937/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.60% <0.00%> (+1.16%)` | :arrow_up: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5937/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.31% <0.00%> (+1.28%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5937/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5937/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5937?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5937?src=pr&el=footer). Last update [d322794...979055f](https://codecov.io/gh/huggingface/transformers/pull/5937?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks! 🎉 |
transformers | 5,936 | closed | Easy selection of a learning rate scheduler when initializing a Trainer | # 🚀 Feature request
Please consider adding a new argument to the Trainer constructor to specify which among the available learning rate schedulers should be used during training.
## Motivation
Even though Trainer already has the option to specify a given optimizer and learning rate scheduler, you need to explicitly initialize both (even when you only want to change the scheduler) with parameters already available to the Trainer itself via TrainingArguments. It would be nicer and smoother to just provide Trainer with a string specifying which scheduler to use (e.g. 'constant_schedule', 'cosine_schedule_with_warmup', ...) and have `get_optimizers` implement the choice.
| 07-21-2020 12:59:11 | 07-21-2020 12:59:11 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,935 | closed | Getting "AttributeError: 'Tensor' object has no attribute 'numpy'" while fine-tuning BERT for NER | As per https://github.com/huggingface/transformers/tree/master/examples/token-classification after doing the required setup and installing required libraries, when I run
```
python3 run_tf_ner.py --data_dir ./ \
--labels ./labels.txt \
--model_name_or_path $BERT_MODEL \
--output_dir $OUTPUT_DIR \
--max_seq_length $MAX_LENGTH \
--num_train_epochs $NUM_EPOCHS \
--per_device_train_batch_size $BATCH_SIZE \
--save_steps $SAVE_STEPS \
--seed $SEED \
--do_train \
--do_eval \
--do_predict
```
it stops at one point with error
```
/usr/local/lib/python3.6/dist-packages/transformers/trainer_tf.py:488 _accumulate_next *
return self._accumulate_gradients(per_replica_features, per_replica_labels)
/usr/local/lib/python3.6/dist-packages/transformers/trainer_tf.py:498 _accumulate_gradients *
per_replica_loss = self.args.strategy.experimental_run_v2(
/usr/local/lib/python3.6/dist-packages/transformers/trainer_tf.py:511 _forward *
per_example_loss, _ = self._run_model(features, labels, True)
/usr/local/lib/python3.6/dist-packages/transformers/trainer_tf.py:534 _run_model *
outputs = self.model(features, labels=labels, training=training)[:2]
/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_distilbert.py:879 call *
loss = self.compute_loss(labels, logits)
/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_utils.py:142 compute_loss *
if tf.math.reduce_any(labels == -1).numpy() is True:
AttributeError: 'Tensor' object has no attribute 'numpy'
```
Tensorflow version: 2.2.0
Numpy version: 1.19.0
CUDA version: 10.2
As per some possible solutions I have checked that tf.executing_eagerly() is True.
Tried on own computer and colab both and it ends up at the same point with same error. | 07-21-2020 12:55:19 | 07-21-2020 12:55:19 | I run into the same error, installing Transformers with pip fix this (not a preferred solution but it works)
`!pip install --upgrade --no-deps --force-reinstall transformers`
fine-tuning BERT for NER also fail using `run_ner.py`. The error is
```
Traceback (most recent call last):
File "transformers/examples/token-classification/run_ner.py", line 304, in <module>
main()
File "transformers/examples/token-classification/run_ner.py", line 266, in main
predictions, label_ids, metrics = trainer.predict(test_dataset)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 781, in predict
test_dataloader = self.get_test_dataloader(test_dataset)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 297, in get_test_dataloader
if isinstance(self.test_dataset, torch.utils.data.IterableDataset):
AttributeError: 'Trainer' object has no attribute 'test_dataset'
```<|||||>Thanks for the input @kevin-yauris
> I run into the same error, installing Transformers with pip fix this (not a preferred solution but it works)
> !pip install --upgrade --no-deps --force-reinstall transformers
This fixed the issue for me too. |
transformers | 5,934 | closed | InvalidArgumentError: Cannot convert a Tensor of dtype resource to a NumPy array. | # 🐛 Bug
## Information
Model I am using (Bert):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
```python
import tensorflow as tf
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Model
from transformers import BertConfig, TFBertMainLayer


def build_model(item_dim, num_layers, num_heads, max_len):
config = BertConfig(hidden_size=item_dim, num_hidden_layers=num_layers,
num_attention_heads=num_heads, intermediate_size=item_dim*4,
max_position_embeddings=max_len)
bert = TFBertMainLayer(config=config)
inputs = Input(shape=(max_len, item_dim), dtype=tf.float32, name='inputs')
# pre-training vectors to bert
seq_emb = bert(inputs=None, inputs_embeds=inputs)[0]
print(seq_emb)
print(seq_emb[:, -1, :])
last_token_emb = seq_emb[:, -1, :]
outputs = Dense(1, activation='sigmoid')(last_token_emb)
model = Model(inputs=inputs, outputs=outputs)
return model
```
Errors with
`InvalidArgumentError: Cannot convert a Tensor of dtype resource to a NumPy array.`
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
* Run the code successfully when feeding pre-trained embedding vectors to BERT.
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
https://colab.research.google.com/gist/Douboo/adf6e136e45f8406b1070d88f4041a49/untitled2.ipynb
- `transformers` version: 3.0.2
- Platform: google colab
- Python version: 3.7
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 2.2.0
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
| 07-21-2020 10:32:13 | 07-21-2020 10:32:13 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,933 | closed | How to get a language model score in BertModel? | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
Hi, thanks for your awesome work!
How can I get a language model score from BertModel that judges whether a sentence conforms to grammatical rules?
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
| 07-21-2020 09:22:46 | 07-21-2020 09:22:46 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,932 | closed | Expose padding_strategy on squad processor to fix QA pipeline performance regression | **Before this PR**:
The squad processor was padding the sequence up to the provided `max_length` parameter which results in a tensor with 512 tokens, mostly padding, making the QA pipeline very slow.
- QA Pipeline (`model="distilbert-base-uncased-distilled-squad"`) = 4.8secs
**After this PR**:
The processor will not be padding at all when coming from the QA pipeline.
- QA Pipeline (`model="distilbert-base-uncased-distilled-squad"`) = 1.29secs | 07-21-2020 09:17:12 | 07-21-2020 09:17:12 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5932?src=pr&el=h1) Report
> Merging [#5932](https://codecov.io/gh/huggingface/transformers/pull/5932?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d32279438a73e71961f53baa4fb47d0f08c2984d&el=desc) will **increase** coverage by `0.23%`.
> The diff coverage is `30.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5932?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5932 +/- ##
==========================================
+ Coverage 78.25% 78.49% +0.23%
==========================================
Files 146 146
Lines 26214 26223 +9
==========================================
+ Hits 20515 20584 +69
+ Misses 5699 5639 -60
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5932?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5932/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.13% <22.22%> (-0.40%)` | :arrow_down: |
| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/5932/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `77.00% <100.00%> (+0.03%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5932/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5932/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5932/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.50%)` | :arrow_up: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5932/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.60% <0.00%> (+1.16%)` | :arrow_up: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5932/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.31% <0.00%> (+1.28%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5932/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5932/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5932?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5932?src=pr&el=footer). Last update [d322794...30d41ea](https://codecov.io/gh/huggingface/transformers/pull/5932?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Would be awesome to introduce perf regression testing in this repo too at some point (we have some crude timeouts but maybe something more fine-grained)<|||||>Linked issue https://github.com/huggingface/transformers/issues/6144 |
transformers | 5,931 | closed | ALBERT tokenizer is not callable | # 🐛 Bug
## Information
when running the following given example:
```python
from transformers import AlbertTokenizer, AlbertModel
import torch
tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
model = AlbertModel.from_pretrained('albert-base-v2')
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
```
it raises `*** TypeError: 'AlbertTokenizer' object is not callable`
examples link: https://huggingface.co/transformers/model_doc/albert.html#alberttokenizer
Model I am using (Bert, XLNet ...):
ALBERT
Language I am using the model on (English, Chinese ...):
English
The problem arises when using:
* [X ] the official example scripts: (give details below)
```python
from transformers import AlbertTokenizer, AlbertModel
import torch
tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
model = AlbertModel.from_pretrained('albert-base-v2')
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
```
link: https://huggingface.co/transformers/model_doc/albert.html#alberttokenizer'
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
*** TypeError: 'AlbertTokenizer' object is not callable
## Expected behavior
The docs should give a workable example of using ALBERT.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.3.0
- Platform: Linux
- Python version: 3.7.4
- PyTorch version (GPU?): 1.4.0
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| 07-21-2020 06:54:37 | 07-21-2020 06:54:37 | Hi there, you should upgrade your transformers library to v3. The version you have does not have callable tokenizers.
Alternatively, the docs for v2.3.0 are [here](https://huggingface.co/transformers/v2.3.0/). You can use the dropdown menu on the left (just under the hugging face) to change the version of the documentation you are using.<|||||>Thanks very much<|||||>Alternatively, I found that using tokenizer.encode works.
But how can I view the old doc details for this version?
https://huggingface.co/transformers/v2.3.0/model_doc/albert.html#
This link seems to be archived documentation. No examples provided here.<|||||>This is the documentation of the version 2.3.0. For better documentation, you should really consider upgrading your library :-)<|||||>Understand. Thanks very much. |
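For completeness, a hedged sketch of the v2.3.0-compatible call mentioned above (using `encode_plus` instead of calling the tokenizer directly):

```python
inputs = tokenizer.encode_plus("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
```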
transformers | 5,930 | closed | code copy button on the website doesn't copy `...` lines | # 🐛 Bug
## Information
When the copy button is clicked on a code like [this](https://huggingface.co/transformers/quicktour.html#using-the-tokenizer):
```
>>> pt_batch = tokenizer(
... ["We are very happy to show you the 🤗 Transformers library.", "We hope you don't hate it."],
... padding=True,
... truncation=True,
... return_tensors="pt"
... )
```
only the first line is copied.
All lines starting with `>>>` get copied, but `...` lines are ignored. So for example, [this](https://huggingface.co/transformers/quicktour.html#customizing-the-model) gets fully copied:
```
>>> from transformers import DistilBertConfig, DistilBertTokenizer, DistilBertForSequenceClassification
>>> config = DistilBertConfig(n_heads=8, dim=512, hidden_dim=4*512)
>>> tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
>>> model = DistilBertForSequenceClassification(config)
```
I was told that this is the issue with `sphinx_copybutton` and I found an already opened issue there:
https://github.com/executablebooks/sphinx-copybutton/issues/65
So hopefully it gets fixed over there, and then the website can be updated to include the fix.
| 07-21-2020 04:56:02 | 07-21-2020 04:56:02 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>this seems to have been fixed in sphinx - I re-checked and it works now. |
transformers | 5,929 | closed | Add DeBERTa model | Add DeBERTa model to hf transformers. DeBERTa applies two techniques to improve RoBERTa, one is disentangled attention, the other is enhanced mask decoder. With 80GB training data, DeBERTa outperform RoBERTa on a majority of NLU tasks, e.g. SQUAD, MNLI and RACE. Paper link: https://arxiv.org/abs/2006.03654 | 07-21-2020 04:49:55 | 07-21-2020 04:49:55 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5929?src=pr&el=h1) Report
> Merging [#5929](https://codecov.io/gh/huggingface/transformers/pull/5929?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1fc4de69ed024e18b88cb6f040021630599de2f7?el=desc) will **decrease** coverage by `0.30%`.
> The diff coverage is `73.13%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5929?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5929 +/- ##
==========================================
- Coverage 79.35% 79.05% -0.31%
==========================================
Files 181 184 +3
Lines 35800 36660 +860
==========================================
+ Hits 28410 28982 +572
- Misses 7390 7678 +288
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5929?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/5929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `76.92% <50.00%> (-6.42%)` | :arrow_down: |
| [src/transformers/tokenization\_deberta.py](https://codecov.io/gh/huggingface/transformers/pull/5929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZGViZXJ0YS5weQ==) | `69.76% <69.76%> (ø)` | |
| [src/transformers/modeling\_deberta.py](https://codecov.io/gh/huggingface/transformers/pull/5929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kZWJlcnRhLnB5) | `73.26% <73.26%> (ø)` | |
| [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/5929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.39% <100.00%> (+0.01%)` | :arrow_up: |
| [src/transformers/configuration\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `96.34% <100.00%> (+0.04%)` | :arrow_up: |
| [src/transformers/configuration\_deberta.py](https://codecov.io/gh/huggingface/transformers/pull/5929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2RlYmVydGEucHk=) | `100.00% <100.00%> (ø)` | |
| [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `87.11% <100.00%> (+0.06%)` | :arrow_up: |
| [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `92.64% <100.00%> (+0.10%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/5929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.14% <0.00%> (-72.41%)` | :arrow_down: |
| [src/transformers/tokenization\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/5929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcGVnYXN1cy5weQ==) | `46.03% <0.00%> (-49.21%)` | :arrow_down: |
| ... and [17 more](https://codecov.io/gh/huggingface/transformers/pull/5929/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5929?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5929?src=pr&el=footer). Last update [1fc4de6...0a08565](https://codecov.io/gh/huggingface/transformers/pull/5929?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Related issue #4858 <|||||>Someone is waiting for fine-tuning a new model :)<|||||>Very cool, looking forward to that model!!<|||||>Hello, May I know when will the PR be merged?
<|||||>@BigBird01 it will probably take between 1 and 3 weeks to merge. 2,500 lines is a lot to review :)
I made some comments, and can take another pass on this when they're addressed.
<|||||>> @BigBird01 it will probably take between 1 and 3 weeks to merge. 2,500 lines is a lot to review :)
> I made some comments, and can take another pass on this when they're addressed.
Thanks!<|||||>Thanks for addressing the comments! Will take a look in a few days.<|||||>> Great, it's nearly done! Thanks a lot for your work on it.
>
> What's left to do is:
>
> * Ensure that the documentation is in the correct format
> * Enable the remaining tests
>
> If you don't have time to work on it right now, let me know and I'll finish the implementation and merge it. Thanks!
@LysandreJik Thanks for the comments. It will be great if you can work on the rest:) Feel free to let me know if you have any questions on the implementation.
<|||||>Okay @BigBird01, I have the PyTorch version ready, passing all tests and the docs cleaned up as well. Should I push directly on your branch, or do you want me to open a PR on your fork so that you can check my changes before applying them?<|||||>Before we merge there will be one final item we'll have to take care of, that's integration tests! It's necessary to ensure we don't diverge from the original implementation. [Here's an example with RoBERTa.](https://github.com/huggingface/transformers/blob/master/tests/test_modeling_roberta.py#L397-L448)
Given that you have the original implementation, could you implement such tests? Thanks a lot!<|||||>> Okay @BigBird01, I have the PyTorch version ready, passing all tests and the docs cleaned up as well. Should I push directly on your branch, or do you want me to open a PR on your fork so that you can check my changes before applying them?
@LysandreJik I just added you to the repo contributor. Please open a PR on it and I will merge your change into it then add integration tests. Thanks!
<|||||>Hi @BigBird01, I just opened the pull request [here](https://github.com/BigBird01/transformers/pull/1).<|||||>Hi @BigBird01, did you get a chance to take a look at the PR?<|||||>> Hi @BigBird01, did you get a chance to take a look at the PR?
Sorry for the late reply. I just merged your changes and will try to add final test today. <|||||>> Hi @BigBird01, did you get a chance to take a look at the PR?
@LysandreJik I just finished the final test. But I hit a isort error after I pushed the code to the repo, but the tests get passed on my local node. Could you help to take a look at it?<|||||>> Hi, left a last few comments and questions. Let me know if you do not have time to implement/answer these last changes and I'll do the last updates and merge `DeBERTa`.
@LysandreJik Thanks for the following up. I just replied most of your comments. Please feel free to make the last changes to merge the PR. Thanks in advance:)<|||||>I don't really understand the tracing changes, as the model does not pass the TorchScript tests. I'm removing this part, feel free to open a PR to add it back and set the `test_torchscript` flag to `True` in `test_modeling_deberta.py`. FYI, the error is the following:
```
def save(self, *args, **kwargs):
r"""
save(f, _extra_files=ExtraFilesMap{})
See :func:`torch.jit.save <torch.jit.save>` for details.
"""
> return self._c.save(*args, **kwargs)
E RuntimeError:
E Could not export Python function call 'XSoftmax'. Remove calls to Python functions before export. Did you forget add @script or @script_method annotation? If this is a nn.ModuleList, add it to __constants__:
```<|||||>There is an issue with the checkpoints uploaded on `huggingface.co`, as the base model identifier is `bert`, whereas this has been changed to `deberta`. This means that no weights are loaded on the model, as the base prefix doesn't fit.
Do you mind if I update the weights on S3 with so that the state dict changes from this:
```
[...]
'bert.embeddings.word_embeddings.weight',
'bert.embeddings.position_embeddings.weight',
'bert.embeddings.LayerNorm.weight',
'bert.embeddings.LayerNorm.bias',
'bert.encoder.layer.0.attention.self.q_bias',
'bert.encoder.layer.0.attention.self.v_bias'
[...]
```
to this?
```
[...]
'deberta.embeddings.word_embeddings.weight',
'deberta.embeddings.position_embeddings.weight',
'deberta.embeddings.LayerNorm.weight',
'deberta.embeddings.LayerNorm.bias',
'deberta.encoder.layer.0.attention.self.q_bias',
'deberta.encoder.layer.0.attention.self.v_bias'
[...]
```<|||||>> I don't really understand the tracing changes, as the model does not pass the TorchScript tests. I'm removing this part, feel free to open a PR to add it back and set the `test_torchscript` flag to `True` in `test_modeling_deberta.py`. FYI, the error is the following:
>
> ```
> def save(self, *args, **kwargs):
> r"""
> save(f, _extra_files=ExtraFilesMap{})
>
> See :func:`torch.jit.save <torch.jit.save>` for details.
> """
> > return self._c.save(*args, **kwargs)
> E RuntimeError:
> E Could not export Python function call 'XSoftmax'. Remove calls to Python functions before export. Did you forget add @script or @script_method annotation? If this is a nn.ModuleList, add it to __constants__:
> ```
Sure. Let's remove it first and I can try to fix it later in a separate PR.<|||||>> There is an issue with the checkpoints uploaded on `huggingface.co`, as the base model identifier is `bert`, whereas this has been changed to `deberta`. This means that no weights are loaded on the model, as the base prefix doesn't fit.
>
> Do you mind if I update the weights on S3 with so that the state dict changes from this:
>
> ```
> [...]
> 'bert.embeddings.word_embeddings.weight',
> 'bert.embeddings.position_embeddings.weight',
> 'bert.embeddings.LayerNorm.weight',
> 'bert.embeddings.LayerNorm.bias',
> 'bert.encoder.layer.0.attention.self.q_bias',
> 'bert.encoder.layer.0.attention.self.v_bias'
> [...]
> ```
>
> to this?
>
> ```
> [...]
> 'deberta.embeddings.word_embeddings.weight',
> 'deberta.embeddings.position_embeddings.weight',
> 'deberta.embeddings.LayerNorm.weight',
> 'deberta.embeddings.LayerNorm.bias',
> 'deberta.encoder.layer.0.attention.self.q_bias',
> 'deberta.encoder.layer.0.attention.self.v_bias'
> [...]
> ```
Sure.<|||||>Okay @BigBird01, everything seems good to go, this should be merged very soon :)
Just one last question, for your models on the hub (`microsoft/deberta-base` and `microsoft/deberta-large`) in your configuration there is `position_biased_input` set to `False`, which means that in the embedding layer, the `position_embeddings` will be set to `None`:
```py
if not self.position_biased_input:
self.position_embeddings = None
```
However, in the model state dicts in the `pytorch_model.bin`, there is a layer containing the `position_embeddings`. What is correct here? Should there be `position_biased_input = True` in the configurations, or should this layer be removed from the state dicts?
Thanks!<|||||>> Okay @BigBird01, everything seems good to go, this should be merged very soon :)
>
> Just one last question, for your models on the hub (`microsoft/deberta-base` and `microsoft/deberta-large`) in your configuration there is `position_biased_input` set to `False`, which means that in the embedding layer, the `position_embeddings` will be set to `None`:
>
> ```python
> if not self.position_biased_input:
> self.position_embeddings = None
> ```
>
> However, in the model state dicts in the `pytorch_model.bin`, there is a layer containing the `position_embeddings`. What is correct here? Should there be `position_biased_input = True` in the configurations, or should this layer be removed from the state dicts?
>
> Thanks!
Yes. It's used in the mask decoding part(EMD). We are still polishing that part and will release it once ready.<|||||>Thanks @patrickvonplaten, @sgugger for the reviews. Will implement the changes tonight.<|||||>Thanks for your work on this @BigBird01 :)<|||||>> Thanks for your work on this @BigBird01 :)
Thank you all to merge the code into master @LysandreJik @patrickvonplaten
One question, why after the merge we can't find the document of deberta model at https://huggingface.co/transformers/
Could you help to check that?<|||||>The documentation is online, you just have to click on `master` on the top left right under the hugging face logo as is done here: https://huggingface.co/transformers/master/. The next release will then show deberta docs as a default :-) <|||||>@BigBird01, two slow tests are failing with the DeBERTa models. Could you show how you implemented the integration tests so that I may investigate?<|||||>> @BigBird01, two slow tests are failing with the DeBERTa models. Could you show how you implemented the integration tests so that I may investigate?
@LysandreJik In the integration tests, I just feed the model with a fake input data and verify the output of the model. It's similar to RoBERTa tests. I may take a took at it today. <|||||>Thanks! The DeBERTa may not be working correctly right now, knowing the source of the issue would be great. <|||||>> Thanks! The DeBERTa may not be working correctly right now, knowing the source of the issue would be great.
The issue is due to the model failed to be loaded due to parameter name mismatch. It needs to update the model by changing the encoder name from 'bert' to 'deberta'.<|||||>> Thanks! The DeBERTa may not be working correctly right now, knowing the source of the issue would be great.
@LysandreJik I just made a fix to the test failure. https://github.com/huggingface/transformers/pull/7645
Could you take a look? |
transformers | 5,928 | closed | Feed forward chunking for all pretrained models | Based on this card: [Feed forward chunking](https://github.com/huggingface/transformers/projects/9#card-39483681)
@patrickvonplaten
I'd like to help contribute and implement this for all the other models if this is still pending? | 07-21-2020 01:27:10 | 07-21-2020 01:27:10 | Any opinions here? I will create a PR if there is interest and would like to get your ideas and suggestions. @patrickvonplaten @sshleifer <|||||>@patrickvonplaten would be the point person and he is on Vacation until August 3.
In the interim, if you want to start working on this go right ahead. Make sure it's actually faster/needed before you start though. I don't really know.<|||||>Hey @Pradhy729,
Yes it would be great to start a PR to add feed forward chunking to other models. Maybe you can start with BERT in your PR and ping us to get Feedback :-)
A couple of things to consider:
1) You should probably move the config param `config.chunk_size_feed_forward` to the general `configuration_utils.py` file.
2) As @sshleifer said it would be good to benchmark the gains in a very similar way to this Notebook:
https://github.com/patrickvonplaten/notebooks/blob/master/Reformer_2_4.ipynb
3) as said earlier we should start with BERT and `config.chunk_size_feed_forward`.<|||||>Awesome! I will start with BERT and share with you for feedback.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
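For context, a minimal hedged sketch of what feed-forward chunking does (names are illustrative, not the final API):

```python
import torch

def chunked_feed_forward(feed_forward, hidden_states, chunk_size, seq_dim=1):
    # trade compute for memory: run the FFN over slices of the sequence dimension
    if chunk_size == 0:
        return feed_forward(hidden_states)
    chunks = hidden_states.split(chunk_size, dim=seq_dim)
    return torch.cat([feed_forward(chunk) for chunk in chunks], dim=seq_dim)
```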
|
transformers | 5,927 | closed | [CI] self-scheduled runner tests examples/ | 07-21-2020 00:51:45 | 07-21-2020 00:51:45 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5927?src=pr&el=h1) Report
> Merging [#5927](https://codecov.io/gh/huggingface/transformers/pull/5927?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4781afd045b4722e7f28347f1c4f42a56a4550e8&el=desc) will **decrease** coverage by `0.09%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5927?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5927 +/- ##
==========================================
- Coverage 78.69% 78.59% -0.10%
==========================================
Files 146 146
Lines 26214 26214
==========================================
- Hits 20628 20603 -25
- Misses 5586 5611 +25
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5927?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5927/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5927/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.42% <0.00%> (-29.91%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5927/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.96% <0.00%> (-0.51%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5927/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5927/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5927?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5927?src=pr&el=footer). Last update [4781afd...a25be30](https://codecov.io/gh/huggingface/transformers/pull/5927?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Gunna merge this and make sure that it runs! |
|
transformers | 5,926 | closed | DataParallel fix: multi gpu evaluation | The DataParallel training was fixed in https://github.com/huggingface/transformers/pull/5733, this commit also fixes the evaluation. It's more convenient when the user enables both `do_train` and `do_eval`. | 07-20-2020 20:43:10 | 07-20-2020 20:43:10 | Yes, this was missing in #5733, thanks for adding it! |
transformers | 5,925 | closed | Allow user to see actual error if a download has failed | Fixes #5869. Adds new exception-handling logic (a rough sketch follows the list below):
* If there was a caught exception in `cached_path` during download, and `force_download` is True, then raise it
* If `force_download` is False, save this exception and raise it later if the file is not in the local cache
* If download exception in `cached_path` was raised, then re-raise it in calling function. If there's no exception, but resolved file is None, keep old message ("...is a correct model identifier...")
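Roughly, the three cases above amount to the following control flow (a simplified sketch; `download_fn` and `cache_lookup_fn` are hypothetical stand-ins for the real `file_utils` helpers, not the actual diff):
```python
def resolve_file(url, download_fn, cache_lookup_fn, force_download=False):
    # download_fn / cache_lookup_fn are hypothetical stand-ins for the real helpers.
    download_error = None
    try:
        return download_fn(url)
    except EnvironmentError as err:
        if force_download:
            raise  # case 1: surface the real network/HTTP error immediately
        download_error = err  # case 2: remember it and fall back to the local cache
    cached = cache_lookup_fn(url)
    if cached is None:
        raise download_error  # case 3: nothing cached either, re-raise the actual failure
    return cached
```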
It would be great to know if there's a better way to dealing with this issue. | 07-20-2020 19:11:49 | 07-20-2020 19:11:49 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5925?src=pr&el=h1) Report
> Merging [#5925](https://codecov.io/gh/huggingface/transformers/pull/5925?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/32883b310ba30d72e67bb2ebb5847888f03a90a8&el=desc) will **decrease** coverage by `0.00%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5925?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5925 +/- ##
==========================================
- Coverage 78.51% 78.51% -0.01%
==========================================
Files 146 146
Lines 26214 26214
==========================================
- Hits 20583 20582 -1
- Misses 5631 5632 +1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5925?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5925/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.52% <ø> (ø)` | |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5925/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.19% <ø> (-0.30%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5925/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.88% <ø> (ø)` | |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5925/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.14% <ø> (ø)` | |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5925/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (ø)` | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5925?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5925?src=pr&el=footer). Last update [32883b3...80220ae](https://codecov.io/gh/huggingface/transformers/pull/5925?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,924 | closed | Create README.md | 07-20-2020 18:36:47 | 07-20-2020 18:36:47 | ||
transformers | 5,923 | closed | how (if at all) are those models related... | https://github.com/huggingface/transformers/tree/master/examples/seq2seq
and...
https://github.com/nlpyang/PreSumm
are they the same? | 07-20-2020 18:27:14 | 07-20-2020 18:27:14 | I think they are not related at all.
Which models are you talking about?<|||||>Hi! :)
I am talking about the implementation that is used for the text summarization example under transformers/examples and a BERT-based implementation that is described here:
https://paperswithcode.com/paper/text-summarization-with-pretrained-encoders
I wondered if this might be the same models or even implementations.<|||||>That is at `examples/seq2seq/bertabs`, but not actively maintained.
More recent models can be finetuned with the code at `examples/seq2seq/finetune.py`.<|||||>Could you please describe what is meant by "finetuning more recent models"? Does it mean that it is a generic script that can work with arbitrary NLP models which are implemented with Pytorch?<|||||>it can work with the following classes of models in our model hub:
```
BartForConditionalGeneration
T5ForConditionalGeneration
MarianMTModel
```
which in total is over 1100 checkpoints!
<|||||>Oh, I see. The field is progressing so quickly. So I understand that BART is an alternative architecture to the one from the paper "Text Summarization with Pretrained Encoders". I will investigate this. Thank you! |
transformers | 5,922 | closed | Avoid unnecessary warnings when loading pretrained model | Currently, the following commands will produce a lot of warnings, because some weights are not saved with the model by design.
GPT-2 (see #5800, #5814)
```
from transformers import GPT2LMHeadModel
model = GPT2LMHeadModel.from_pretrained('gpt2')
```
T5 (see #3553, #5348)
```
from transformers import T5ForConditionalGeneration
model = T5ForConditionalGeneration.from_pretrained('t5-small')
```
Bart
```
from transformers import BartForConditionalGeneration
model = BartForConditionalGeneration.from_pretrained('facebook/bart-large-cnn')
```
This PR introduces a new class attribute that you can tune per model to ignore some keys during loading and avoid those warnings (which scare users into thinking something went wrong for no reason). It fixes the above issues and gives us an API to fix any similar issues on other models.
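Mechanically, the idea looks something like this (a sketch; the exact attribute name and the regex patterns used per model may differ from what is in the diff):
```python
import re

# Per-model allow-list of checkpoint keys that are legitimately absent,
# e.g. tied lm_head weights or buffers that are rebuilt on the fly.
authorized_missing_keys = [r"lm_head\.weight", r"h\.\d+\.attn\.masked_bias"]

def filter_expected_missing(missing_keys, patterns=authorized_missing_keys):
    # Drop keys matching an allowed pattern so no warning is printed for them.
    return [k for k in missing_keys if not any(re.search(p, k) for p in patterns)]

print(filter_expected_missing(["lm_head.weight", "h.0.attn.masked_bias", "transformer.wte.weight"]))
# -> ['transformer.wte.weight']
```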
Pinging @patrickvonplaten since you mentioned this recently. | 07-20-2020 18:08:47 | 07-20-2020 18:08:47 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5922?src=pr&el=h1) Report
> Merging [#5922](https://codecov.io/gh/huggingface/transformers/pull/5922?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9827d666ebdf959aa9dfe3627ccb80592b378b77&el=desc) will **decrease** coverage by `0.12%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5922?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5922 +/- ##
==========================================
- Coverage 78.64% 78.51% -0.13%
==========================================
Files 146 146
Lines 26244 26252 +8
==========================================
- Hits 20639 20612 -27
- Misses 5605 5640 +35
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5922?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5922/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.75% <100.00%> (+<0.01%)` | :arrow_up: |
| [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5922/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `85.92% <100.00%> (+0.04%)` | :arrow_up: |
| [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5922/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.14% <100.00%> (+0.03%)` | :arrow_up: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5922/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.30% <100.00%> (+0.12%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5922/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5922/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.19% <0.00%> (-0.30%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5922/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+3.00%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5922/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5922?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5922?src=pr&el=footer). Last update [9827d66...56688d2](https://codecov.io/gh/huggingface/transformers/pull/5922?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,921 | closed | Create README.md | - Maybe the result of this query answers the question You did some days ago @julien-c ;-) | 07-20-2020 18:08:07 | 07-20-2020 18:08:07 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5921?src=pr&el=h1) Report
> Merging [#5921](https://codecov.io/gh/huggingface/transformers/pull/5921?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/32883b310ba30d72e67bb2ebb5847888f03a90a8&el=desc) will **decrease** coverage by `1.20%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5921?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5921 +/- ##
==========================================
- Coverage 78.51% 77.31% -1.21%
==========================================
Files 146 146
Lines 26214 26214
==========================================
- Hits 20583 20268 -315
- Misses 5631 5946 +315
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5921?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5921/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5921/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.19% <0.00%> (-0.30%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5921/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (ø)` | |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5921/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5921?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5921?src=pr&el=footer). Last update [32883b3...f002de2](https://codecov.io/gh/huggingface/transformers/pull/5921?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,920 | closed | Create README.md | 07-20-2020 18:00:30 | 07-20-2020 18:00:30 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5920?src=pr&el=h1) Report
> Merging [#5920](https://codecov.io/gh/huggingface/transformers/pull/5920?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/32883b310ba30d72e67bb2ebb5847888f03a90a8&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5920?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5920 +/- ##
=======================================
Coverage 78.51% 78.51%
=======================================
Files 146 146
Lines 26214 26214
=======================================
Hits 20583 20583
Misses 5631 5631
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5920?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5920/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.19% <0.00%> (-0.30%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5920/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (ø)` | |
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5920/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.11% <0.00%> (+0.28%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5920?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5920?src=pr&el=footer). Last update [32883b3...2579206](https://codecov.io/gh/huggingface/transformers/pull/5920?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 5,919 | closed | [examples/seq2seq]: add --label_smoothing option | cc @patil-suraj
This seems to improve BLEU score by ~2pts!
- adds `--early_stopping_patience` command line arg for PL.
- fixes MBartDataset src,tgt flipping bug
- wandb now looks for shell variable `$WANDB_PROJECT_NAME` | 07-20-2020 17:55:22 | 07-20-2020 17:55:22 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5919?src=pr&el=h1) Report
> Merging [#5919](https://codecov.io/gh/huggingface/transformers/pull/5919?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4781afd045b4722e7f28347f1c4f42a56a4550e8&el=desc) will **decrease** coverage by `0.17%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5919?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5919 +/- ##
==========================================
- Coverage 78.69% 78.51% -0.18%
==========================================
Files 146 146
Lines 26214 26214
==========================================
- Hits 20628 20581 -47
- Misses 5586 5633 +47
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5919?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.26%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5919?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5919?src=pr&el=footer). Last update [4781afd...303f0ac](https://codecov.io/gh/huggingface/transformers/pull/5919?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Gunna merge this.
The packed dataset is definitely a win, label smoothing less clear.
TODO: figure out loss function mystery. |
transformers | 5,918 | closed | Add Fast Transformers - Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention | # 🌟 New model addition
## Model description
The Fast Transformers repo introduces a fast transformer model based on work to improve attention published in two papers:
- Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention (https://arxiv.org/abs/2006.16236)
- Fast Transformers with Clustered Attention (https://arxiv.org/abs/2007.04825)
## Open source status
* [X] the model implementation is available: (give details)
https://github.com/idiap/fast-transformers
* [x] the model weights are available: (give details)
* [X] who are the authors: (mention them, if possible by @gh-username)
@angeloskath | 07-20-2020 16:44:56 | 07-20-2020 16:44:56 | Hi guys, let us know how we can help and also kindly add @apoorv2904 to the author list.
Although the model weights are nothing particularly useful we do provide them for our colab so let us know if they are needed and how to provide them.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Don´t let it die. In my tests this is the best performing model so far!<|||||>@patrickvonplaten @sgugger
I could try to include on huggingface/transformers if there is an interest from the core team. But I would have to depend on https://github.com/idiap/fast-transformers as they created optimized cuda/cpu c++ versions of the proposed attention. A MR with this dependency would be accepted by Huggingface? <|||||>would love if this comes in!<|||||>Hey @bratao,
Yes, we would definitely be interested in this model and would also be fine with an optional dependency of `https://github.com/idiap/fast-transformers` Also pinging @joeddav @TevenLeScao here (in case you guys are interested in helping with the integration).
I would also be happy to help you with the model integration otherwise @bratao :-) <|||||>Great, I´m on it @patrickvonplaten
I will work on this on my free time, As soon as I have something, I put it here the fork.
If anyone else want to help or speed it up, just talk to me using the email in my profile!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>What happened to this model? It was not finally integrated right? :( @bratao @patrickvonplaten |
transformers | 5,917 | closed | convert_roberta: AttributeError when converting CamemBERT model.pt to pytorch_model.bin | Hi,
I trained a CamemBERT model with the fairseq library which gave me the following files:
- dict.txt: vocabulary coming from the sentencepiece model
- sentencepiece.bpe.model
- model.pt
Now I am trying to convert the model.pt into pytorch_model.bin and config.json as mentioned here ([fairseq/issues#1514](https://github.com/pytorch/fairseq/issues/1514)) and here ([transformers/issue#1850](https://github.com/huggingface/transformers/issues/1850)), by using the conversion script from the transformers library ([transformers/convert_roberta_original_pytorch_checkpoint_to_pytorch.py](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_roberta_original_pytorch_checkpoint_to_pytorch.py)). The goal is to use those files with fast-bert.
However, using this command line:
```shell
python convert_roberta_original_pytorch_checkpoint_to_pytorch.py --roberta_checkpoint_path ./ --pytorch_dump_folder_path ./ --classification_head
```
I get the following error:
```python
AttributeError Traceback (most recent call last)
<ipython-input-27-ea791887ff26> in <module>
----> 1 convert_roberta_original_pytorch_checkpoint_to_pytorch.convert_roberta_checkpoint_to_pytorch(CAMEMBERT_PATH, CAMEMBERT_PATH, True)
~/anaconda3/envs/NLP/lib/python3.7/site-packages/transformers/convert_roberta_original_pytorch_checkpoint_to_pytorch.py in convert_roberta_checkpoint_to_pytorch(roberta_checkpoint_path, pytorch_dump_folder_path, classification_head)
48 roberta = FairseqRobertaModel.from_pretrained(roberta_checkpoint_path)
49 roberta.eval() # disable dropout
---> 50 roberta_sent_encoder = roberta.model.decoder.sentence_encoder
51 config = RobertaConfig(
52 vocab_size=roberta_sent_encoder.embed_tokens.num_embeddings,
~/anaconda3/envs/NLP/lib/python3.7/site-packages/torch/nn/modules/module.py in __getattr__(self, name)
592 return modules[name]
593 raise AttributeError("'{}' object has no attribute '{}'".format(
--> 594 type(self).__name__, name))
595
596 def __setattr__(self, name, value):
AttributeError: 'RobertaModel' object has no attribute 'decoder'
```
And indeed when I check the fairseq/pytorch RobertaModel has no decoder attribute.
Am I doing this wrong? I see no other conversion script that fits my CamemBERT model, so I guess the RoBERTa one is the right one.
| 07-20-2020 16:43:35 | 07-20-2020 16:43:35 | maybe @sshleifer has an idea<|||||>I can help!
On fairseq master (as of 5/28/20), that class seems to no longer have a `decoder` attribute.
I think you want to change the `roberta.model.decoder` references to `self.model.encoder`, but hard to know without seeing the `state_dict`/handling the model interactively.
The best way to debug is to either instantiate the fairseq model in jupyter/ipython or set a breakpoint and see what the attributes are.
If you are stuck, feel free to upload your `model.pt` to some cloud storage and I can give it a shot!
<|||||>Hey @sshleifer!
I did what you suggested and it worked, thanks a lot. You have to replace all the references to `roberta.model.decoder` with `roberta.model.encoder` as the attributes were just renamed.
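For anyone landing here later, the edit to the conversion script is essentially the rename below (a sketch of the changed line, based on the discussion above rather than an upstream patch):
```python
# before (older fairseq layout, as in the traceback above)
# roberta_sent_encoder = roberta.model.decoder.sentence_encoder
# after (fairseq master at the time renamed `decoder` to `encoder`)
roberta_sent_encoder = roberta.model.encoder.sentence_encoder
```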
On the other hand, I can't figure out what happened to `roberta.args.num_classes` that is used for the classification_head flag, which makes it useless for now.
I would gladly commit the fix but I'm not a powergit user, so I'll leave it to the pros.
Thanks again!
***
Edit : the error that comes up with the flag.
```python
Traceback (most recent call last):
File "converter.py", line 179, in <module>
args.roberta_checkpoint_path, args.pytorch_dump_folder_path, args.classification_head
File "converter.py", line 62, in convert_roberta_checkpoint_to_pytorch
config.num_labels = roberta.args.num_classes
AttributeError: 'Namespace' object has no attribute 'num_classes'
```
<|||||>I think `num_classes` will be like something like
`roberta.model.classification_heads[some_key].out_proj.weight.shape[0]`
There is likely only one possible key.<|||||>I just checked with a model fine-tuned on MNLI and the key is classification_heads['mnli'], is this what you expected?<|||||>sounds right! Don't lose that head!<|||||>Hey @sshleifer,
Sorry if that's not the right place to ask but I couldn't find an answer to that question anywhere: is there a script like this one to convert a model.pt trained on gpu to a model.bin ? or should this script works both for cpu and gpu models ?
Thanks!<|||||>Did our library save the `model.pt`?
The filenames don't really matter if the contents of the file are a `state_dict`.
So it may be as simple as, from the terminal,
```bash
mv model.pt pytorch_model.bin
```
The library doesn't care if a `state_dict` was saved on gpu or cpu.
If that fails try to run `torch.load('model.pt', map_location='cpu')` in an interactive environment and see if it's a state dict.
<|||||>Thanks for the answer @sshleifer! HuggingFace was working well, it was my nvidia apex installation that was broken and returned errors in fast-bert that confused me. All works well now! |
transformers | 5,916 | closed | Clarify arg class | Just clarifying which dataset we're talking about. | 07-20-2020 16:35:26 | 07-20-2020 16:35:26 | 👍 |
transformers | 5,915 | closed | Incompatible tensor type when running BART on TPU | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): `facebook/bart-large`
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: XSUM
* [ ] my own task or dataset: (give details below)
## To reproduce
1. Setup a Google VM with the XLA image and configure it to use TPUs
2. Follow the instrcutions in `seq2seq` example for downloading XSUM
3. Then run
```
export PYTHONPATH="../":"${PYTHONPATH}"
python finetune.py \
--learning_rate=3e-5 \
--gpus 0 \
--n_tpu_cores 8 \
--do_train \
--do_predict \
--n_val 1000 \
--val_check_interval 0.1 \
--data_dir ${PWD}/xsum \
--train_batch_size=1 \
--eval_batch_size=1 \
--output_dir=xsum_results \
--num_train_epochs 1 \
--model_name_or_path facebook/bart-large
```
...and you get something like
```
Exception in device=TPU:5: Attempted to call `variable.set_data(tensor)`, but `variable` and `tensor` have incompatible tensor type.
Traceback (most recent call last):
File "/anaconda3/envs/myenv/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 119, in _start_fn
fn(gindex, *args)
File "/anaconda3/envs/myenv/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 222, in tpu_train
self.run_pretrain_routine(model)
File "/anaconda3/envs/myenv/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 1196, in run_pretrain_routine
False)
File "/anaconda3/envs/myenv/lib/python3.6/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 293, in _evaluate
output = self.evaluation_forward(model, batch, batch_idx, dataloader_idx, test_mode)
File "/anaconda3/envs/myenv/lib/python3.6/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 470, in evaluation_forward
output = model.validation_step(*args)
File "/home/martongyorgy/transformers/examples/seq2seq/finetune.py", line 145, in validation_step
return self._generative_step(batch)
File "/home/martongyorgy/transformers/examples/seq2seq/finetune.py", line 176, in _generative_step
decoder_start_token_id=self.decoder_start_token_id,
File "/anaconda3/envs/myenv/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 15, in decorate_context
return func(*args, **kwargs)
File "/anaconda3/envs/myenv/lib/python3.6/site-packages/transformers/generation_utils.py", line 248, in generate
if self.get_output_embeddings() is None:
File "/anaconda3/envs/myenv/lib/python3.6/site-packages/transformers/modeling_bart.py", line 1113, in get_output_embeddings
return _make_linear_from_emb(self.model.shared) # make it on the fly
File "/anaconda3/envs/myenv/lib/python3.6/site-packages/transformers/modeling_bart.py", line 190, in _make_linear_from_emb
lin_layer.weight.data = emb.weight.data
RuntimeError: Attempted to call `variable.set_data(tensor)`, but `variable` and `tensor` have incompatible tensor type.
```
## Environment info
```
- `transformers` version: 3.0.2
- Platform: Linux-4.9.0-12-amd64-x86_64-with-debian-9.12
- Python version: 3.6.10
- PyTorch version (GPU?): 1.5.0a0+ab660ae (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: TPU setup following Google Cloud tutorial for PyTorch
```
| 07-20-2020 16:07:32 | 07-20-2020 16:07:32 | Let's consolidate the discussion to #5895 .
Definitely an issue! |
transformers | 5,914 | closed | Add AlbertForPretraining to doc | Document models that were absent. | 07-20-2020 16:06:18 | 07-20-2020 16:06:18 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5914?src=pr&el=h1) Report
> Merging [#5914](https://codecov.io/gh/huggingface/transformers/pull/5914?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f19751117d54a4dd677c614f6e400a7ee49b3f24&el=desc) will **increase** coverage by `0.02%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5914?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5914 +/- ##
==========================================
+ Coverage 78.49% 78.51% +0.02%
==========================================
Files 146 146
Lines 26214 26214
==========================================
+ Hits 20577 20583 +6
+ Misses 5637 5631 -6
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5914?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.11% <0.00%> (+0.28%)` | :arrow_up: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.49% <0.00%> (+0.29%)` | :arrow_up: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+1.00%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5914?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5914?src=pr&el=footer). Last update [f197511...8901dff](https://codecov.io/gh/huggingface/transformers/pull/5914?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,913 | closed | [Fix] seq2seq pack_dataset.py actually packs | Added stronger test (that failed before small code fixes).
| 07-20-2020 15:52:37 | 07-20-2020 15:52:37 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5913?src=pr&el=h1) Report
> :exclamation: No coverage uploaded for pull request base (`master@f197511`). [Click here to learn what that means](https://docs.codecov.io/docs/error-reference#section-missing-base-commit).
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5913?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5913 +/- ##
=========================================
Coverage ? 78.69%
=========================================
Files ? 146
Lines ? 26214
Branches ? 0
=========================================
Hits ? 20629
Misses ? 5585
Partials ? 0
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5913?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5913?src=pr&el=footer). Last update [f197511...547c2ad](https://codecov.io/gh/huggingface/transformers/pull/5913?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,912 | closed | Improve doc of use_cache | Followup from #5883 | 07-20-2020 15:17:32 | 07-20-2020 15:17:32 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5912?src=pr&el=h1) Report
> :exclamation: No coverage uploaded for pull request base (`master@f197511`). [Click here to learn what that means](https://docs.codecov.io/docs/error-reference#section-missing-base-commit).
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5912?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5912 +/- ##
=========================================
Coverage ? 78.46%
=========================================
Files ? 146
Lines ? 26214
Branches ? 0
=========================================
Hits ? 20569
Misses ? 5645
Partials ? 0
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5912?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5912/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3hsbmV0LnB5) | `94.33% <ø> (ø)` | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5912?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5912?src=pr&el=footer). Last update [f197511...b3c3a0e](https://codecov.io/gh/huggingface/transformers/pull/5912?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,911 | closed | [WIP] Add Pegasus | * Add PEGASUS in TF2 | 07-20-2020 14:18:39 | 07-20-2020 14:18:39 | Hi @sshleifer, I will continue to push code to this PR and make it runnable asap.<|||||>@sshleifer The code has been uploaded. (1) The test is runnable in TF2 and loads AESLC checkpoints successfully with correct outputs. (2) Code of models and layers (including decoding, beam search) are all in a single file which may look messy (sorry). (3) Most code is simply copied and pasted (then converted to TF2) from the original PEGASUS repo so you may refer to the original repo for clearer code if necessary.
I think you can start from here. Please let me know if I can help further.<|||||>@JingqingZ I'm gunna add torch first in #6340 (you will be a PR co-author). And then come back here to finish TF. No action needed from you, just an update.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,910 | closed | QA Pipeline: Key Error due to predicting a token in question | # 🐛 Bug
## Information
Model: deepset/roberta-base-squad2
Language: English
The problem arises when using: QA inference via pipeline
This seems to be a very similar issue to #5711
The pipeline throws an exception when the model predicts a token that is not part of the document, in this case it seems to be in the question.
In the example below, the model predicts token 3 to be the start and end of the answer span. But these tokens are a part of the question I believe. Therefore, we get a key error when trying to access
feature.token_to_orig_map[3]) in here:
https://github.com/huggingface/transformers/blob/ce374ba87767d551f720242d5e64bfa976531079/src/transformers/pipelines.py#L1370-L1380
## To reproduce
```
nlp = pipeline("question-answering",model="deepset/roberta-base-squad2",
tokenizer="deepset/roberta-base-squad2",
device=-1)
nlp(question="Who is the father of Sansa Stark?", context="===''A Game of Thrones''===\
Sansa Stark begins the novel by being betrothed to Crown Prince Joffrey Baratheon, believing Joffrey to be a gallant prince. While Joffrey and Sansa are walking through the woods, Joffrey notices Arya sparring with the butcher's boy, Mycah. A fight breaks out and Joffrey is attacked by Nymeria (Arya's direwolf) after Joffrey threatens to hurt Arya. Sansa lies to King Robert about the circumstances of the fight in order to protect both Joffrey and her sister Arya. Since Arya ran off with her wolf to save it, Sansa's wolf is killed instead, estranging the Stark daughters.\
During the Tourney of the Hand to honour her father Lord Eddard Stark, Sansa Stark is enchanted by the knights performing in the event. At the request of his mother, Queen Cersei Lannister, Joffrey spends a portion of the tourney with Sansa, but near the end he commands his guard Sandor Clegane, better known as The Hound, to take her back to her quarters. Sandor explains how his older brother, Gregor, aka "Mountain that Rides" pushed his face into a brazier of hot coals, for playing with one of his wooden toys.\
After Eddard discovers the truth of Joffrey's paternity, he tells Sansa that they will be heading back to Winterfell. Sansa is devastated and wishes to stay in King's Landing, so she runs off to inform Queen Cersei of her father's plans, unwittingly providing Cersei with the information needed to arrest her father. After Robert dies, Sansa begs Joffrey to show mercy on her father and he agrees, if Ned will swear an oath of loyalty, but executes him anyway, in front of Sansa. Sansa is now effectively a hostage in King's Landing and finally sees Joffrey's true nature, after he forces her to look at the tarred head of her now-deceased father.")
```
results in
```
Traceback (most recent call last):
File "/Users/deepset/deepset/haystack/tutorials/Tutorial1_Basic_QA_Pipeline.py", line 145, in <module>
prediction = finder.get_answers(question="Who is the father of Sansa Stark?", top_k_retriever=1, top_k_reader=5)
File "/Users/deepset/deepset/haystack/haystack/finder.py", line 57, in get_answers
top_k=top_k_reader) # type: Dict[str, Any]
File "/Users/deepset/deepset/haystack/haystack/reader/transformers.py", line 80, in predict
predictions = self.model(query, topk=self.n_best_per_passage)
File "/Users/deepset/deepset/environments/haystack/lib/python3.7/site-packages/transformers/pipelines.py", line 1316, in __call__
for s, e, score in zip(starts, ends, scores)
File "/Users/deepset/deepset/environments/haystack/lib/python3.7/site-packages/transformers/pipelines.py", line 1316, in <listcomp>
for s, e, score in zip(starts, ends, scores)
KeyError: 3
```
## Expected behavior
Predictions that are pointing to tokens that are not part of the "context" (here: tokens in question) should be filtered out from possible answers.
## Environment info
transformers version: latest master (82dd96cae74797be0c1d330566df7f929214b278)
Platform: Mac OS Catalina
Python version: 3.7.5
PyTorch version (GPU?): 1.5.1, CPU
Using GPU in script?: No
Using distributed or parallel set-up in script?: No
| 07-20-2020 12:34:04 | 07-20-2020 12:34:04 | I also have this -- and related issues on v3.0.0, 3.0.2
on v3.0.0 I get Key Error for sequence terminal tokens -- I think this may be to do with removing special tokens but not masking their positions or due to a truncation error -- may be truncating one too few?
on v3.0.2 I get Key Error for tokens at the beginning of the sequence (Key Error : 0) -- may be an issue to do with the [CLS] token and/or tokens falling in the question span.
I think the trick is to use the attention_mask * token_type_ids * start/end_scores. This will set the logits for all tokens outside the answer to 0 and can be done easily on batch tensors/GPU. I will see if I can put together a pull request.
Platform: Mac OS Catalina, GCP linux cuda 10.1
Python version: 3.6.8
PyTorch version (GPU?): 1.5.1, GPU
Using GPU in script?: Yes
Using distributed or parallel set-up in script?: No<|||||>@brandenchan @jusjosgra thanks for reporting the issue and the steps to reproduce.
We did have an issue but it should have been fixed on master branch.
When running the snippet you provided on master I get the following:
`{'score': 0.05008925125002861, 'start': 679, 'end': 697, 'answer': 'Lord Eddard Stark,'}`
If you can checkout the master branch and give it a try to make sure it works on your side too.
_If everything work as expected: I need to checkout with the team when we can do a maintenance release._<|||||>I still have an error. It appears to be for an example where start and end are both predicted as 0 (a null answer).
In this case the valid results have been filtered to document characters only and so an index for 0 doesnt exist in feature.token_to_orig_map (the first index in my instance is 12, 0 doesnt exist).
So there needs to be a method to handle when the predicted span occurs outside the filtered feature dict. You could return a null object since these cases represent no answer or you could return the best answer found inside the valid span (i.e. mask the logits for non document tokens when getting the max value).<|||||>Here is one solution:
original code
```python
# Normalize logits and spans to retrieve the answer
start_ = np.exp(start_ - np.log(np.sum(np.exp(start_), axis=-1, keepdims=True)))
end_ = np.exp(end_ - np.log(np.sum(np.exp(end_), axis=-1, keepdims=True)))
if kwargs["handle_impossible_answer"]:
min_null_score = min(min_null_score, (start_[0] * end_[0]).item())
starts, ends, scores = self.decode(start_, end_, kwargs["topk"], kwargs["max_answer_len"])
char_to_word = np.array(example.char_to_word_offset)
# Convert the answer (tokens) back to the original text
answers += [
{
"score": score.item(),
"start": np.where(char_to_word == feature.token_to_orig_map[s])[0][0].item(),
"end": np.where(char_to_word == feature.token_to_orig_map[e])[0][-1].item(),
"answer": " ".join(
example.doc_tokens[feature.token_to_orig_map[s] : feature.token_to_orig_map[e] + 1]
),
}
for s, e, score in zip(starts, ends, scores)
```
my update
```python
# Normalize logits and spans to retrieve the answer
start_ = np.exp(start_ - np.log(np.sum(np.exp(start_), axis=-1, keepdims=True)))
end_ = np.exp(end_ - np.log(np.sum(np.exp(end_), axis=-1, keepdims=True)))
if kwargs["handle_impossible_answer"]:
min_null_score = min(min_null_score, (start_[0] * end_[0]).item())
starts, ends, scores = self.decode(start_, end_, kwargs["topk"], kwargs["max_answer_len"])
char_to_word = np.array(example.char_to_word_offset)
# Convert the answer (tokens) back to the original text
answers += [
{
"score": score.item(),
"start": np.where(char_to_word == feature.token_to_orig_map[s])[0][0].item(),
"end": np.where(char_to_word == feature.token_to_orig_map[e])[0][-1].item(),
"answer": " ".join(
example.doc_tokens[feature.token_to_orig_map[s] : feature.token_to_orig_map[e] + 1]
),
}
if s in feature.token_to_orig_map and e in feature.token_to_orig_map # this condition handles the case when answer spans are outside the valid token range.
else {"score": min_null_score, "start": 0, "end": 0, "answer": ""}
for s, e, score in zip(starts, ends, scores)
```
personally I would rather get the best valid span (max over the masked logits) rather than an error/null answer. This might be a more useful use of "handle impossible answer". Returning null answers might be the best default behaviour and "best valid span" might be a good alternative although this would involve a significant refactor of decode to mask the logits appropriately.<|||||>I think there is another bug in the decode function (although I may be misunderstanding).
You compute negative log likelihoods as probabilities but in order to mask items you set them to 0. These items need to be set to a high negative number (e.g. -99) as valid values span zero.
for example:
```python
def decode(...):
...
start_, end_ = (
start_ - np.abs(-99 * np.array(feature.p_mask)),
end_ - np.abs(-99 * np.array(feature.p_mask)),
)
# Mask CLS
start_[0] = end_[0] = -99.
```<|||||>@mfuntowicz
Got the same key error zero issue. Above code fixed it<|||||>@jusjosgra Thanks for the provided solution here, do you want to submit a PR with your fix? Tag myself and @LysandreJik as reviewers and we'll merge it into master.
Otherwise I'll do 😄.
Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,909 | closed | Make Tokenizers Faster When There Are Many Additional Special Tokens | `PreTrainedTokenizer.unique_no_split_tokens` used to be a list that contains all special tokens. During tokenization, the tokenizer will repeatedly check `if sub_text not in self.unique_no_split_tokens` or `if token not in self.unique_no_split_tokens`. List lookups will significantly slow down tokenization if the list is large, i.e., there are many additional special tokens added to `unique_no_split_tokens`. To resolve this issue, this commit will change `PreTrainedTokenizer.unique_no_split_tokens` to be an ordered dict (actually an ordered set, since all values are `None`), such that lookups can be done very efficiently while still keeping its original ordering. | 07-20-2020 11:42:01 | 07-20-2020 11:42:01 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5909?src=pr&el=h1) Report
> Merging [#5909](https://codecov.io/gh/huggingface/transformers/pull/5909?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/82dd96cae74797be0c1d330566df7f929214b278&el=desc) will **decrease** coverage by `0.13%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5909?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5909 +/- ##
==========================================
- Coverage 78.49% 78.35% -0.14%
==========================================
Files 146 146
Lines 26210 26211 +1
==========================================
- Hits 20573 20538 -35
- Misses 5637 5673 +36
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5909?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.00% <100.00%> (+0.04%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.27% <0.00%> (-74.92%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.02% <0.00%> (-1.29%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.96% <0.00%> (-0.26%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.53% <0.00%> (+69.51%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5909?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5909?src=pr&el=footer). Last update [82dd96c...c90f92b](https://codecov.io/gh/huggingface/transformers/pull/5909?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,908 | closed | ImportError: cannot import name 'DataCollatorForPermutationLanguageModeling' | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
Language I am using the model on: English
The problem arises when using:
* [ ] the official example scripts: [`run_language_modeling.py`](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py).
The tasks I am working on is:
* [ ] Continue Training XLNet on domain-specific dataset / finetuning XLNet LM
## To reproduce
Steps to reproduce the behavior:
1. Install transformers 3.0
2. run the following command as mentioned in the readme file from examples :
```
python run_language_modeling.py \
--output_dir=output \
--model_type=xlnet \
--model_name_or_path=xlnet-base-cased \
--do_train \
--train_data_file=$TRAIN_FILE
```
Error message :
```
2020-07-20 10:47:29.463663: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
Traceback (most recent call last):
File "run_language_modeling.py", line 29, in <module>
from transformers import (
ImportError: cannot import name 'DataCollatorForPermutationLanguageModeling'
```
## Expected behavior
I expect not to have this import error since I'm using the latest release of the library
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0
- Platform: Google Colab
- Python version: Python 3.6.9
- PyTorch version (GPU?): 1.5.1+cu101
- Tensorflow version (GPU?): 2.2.0
- Using GPU in script?: Tesla K80
- Using distributed or parallel set-up in script?: No
| 07-20-2020 11:22:20 | 07-20-2020 11:22:20 | Hi @krannnn , `DataCollatorForPermutationLanguageModeling` is added after 3.0, you will need to install from source if you want to run examples<|||||>Hi @patil-suraj , out of curiosity, how do you install it? What do you mean by you will need to install from source? |
transformers | 5,907 | closed | ModuleNotFoundError: No module named 'torch_xla' | from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-wikiSQL-sql-to-en")
model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-base-finetuned-wikiSQL-sql-to-en") | 07-20-2020 10:32:26 | 07-20-2020 10:32:26 | Hi @vyaslkv , what is your transformers version ? I tried this from master branch and it worked.<|||||>'3.0.2' Thanks for the quick reply, which version worked for you<|||||>I also tried with the master branch by uninstalling the transformers and then using the repo
<|||||>worked!! Thanks @patil-suraj :)<|||||>can you give me an example how to use this a short one<|||||><img width="1138" alt="Screenshot 2020-07-20 at 7 07 12 PM" src="https://user-images.githubusercontent.com/33617789/87943880-46e07900-cabc-11ea-85e1-c98e8462ac0f.png">
<|||||>ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds
<|||||>You should use `.generate` method for generation.
```python3
model.generate(input_ids=inputs['input_ids'], attention_mask=inputs['attention_mask'])
```
pinging @mrm8488 for exact `generate` arguments.<|||||><img width="1166" alt="Screenshot 2020-07-20 at 8 55 41 PM" src="https://user-images.githubusercontent.com/33617789/87955457-68952c80-cacb-11ea-889d-95de35eb4cc0.png">
<|||||>is this correct? generating the text out of it the tokenizer decode part?<|||||>I will write a in the model card the exact arguments to use it ASAP and post it here.<|||||>Also @vyaslkv it would nice if you post code instead of screenshot so we can copy paste and try the code faster ;)<|||||>```sh
git clone https://github.com/huggingface/transformers.git
pip install ./transformers
```
```python
from transformers import AutoModelWithLMHead, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-wikiSQL-sql-to-en")
model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-base-finetuned-wikiSQL-sql-to-en")
def get_explanation(query):
input_text = "translante Sql to English: %s </s>" % query
features = tokenizer([input_text], return_tensors='pt')
output = model.generate(input_ids=features['input_ids'],
attention_mask=features['attention_mask'])
return tokenizer.decode(output[0])
query = "SELECT COUNT Params form model where location=HF-Hub"
get_explanation(query)
```<|||||>@mrm8488 can you also make something like nlp to sql<|||||>@mrm8488 it doesn't work for longer queries or is there any particular format I should give<|||||>> @mrm8488 can you also make something like nlp to sql
I already did it. <|||||>> @mrm8488 it doesn't work for longer queries or is there any particular format I should give
The max number of f tokens is 128 but I am currently working on the 256 version<|||||>@mrm8488 can you send me the link of nlp to sql<|||||>https://huggingface.co/mrm8488/t5-base-finetuned-wikiSQL-sql-to-en<|||||>Model card: https://github.com/huggingface/transformers/commit/61e8be9940096ce763872c8d1479965511d0b472<|||||>@mrm8488 I think this is sql to English not English to SQL correct me If I am wrong<|||||>English to SQL is t5-base-finetuned-wikiSQL or English to SQL is t5-small-finetuned-wikiSQL<|||||>https://github.com/mrm8488/shared_colab_notebooks/blob/master/T5_finetuned_wikiSQL_demo.ipynb<|||||>The main issue is solved, closing this for now. Feel free to re-open if the problem persists. |
transformers | 5,906 | closed | Word frequencies in TransfoXLTokenizer | I was wondering if it is still possible to access word frequencies through the populated counter of TransfoXLTokenizer? For example, `tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103')` seems to have an empty counter. This is referring to:
"So p_M(S) is just the output of the model right?
For p_u(S), I think the easiest is probably to use the empirical probabilities.
`TransfoXLTokenizer` has a counter to store words frequencies [here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/tokenization_transfo_xl.py#L98) which should be populated in the "pretrained" tokenizer so I would use and normalize this to get unconditional probabilities for each word and then compute SLOR."
_Originally posted by @thomwolf in https://github.com/huggingface/transformers/issues/477#issuecomment-483973033_ | 07-20-2020 10:28:33 | 07-20-2020 10:28:33 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,905 | closed | Retrain/reuse fine-tuned models on a different set of labels | # ❓ Questions & Help
## Details
Hello,
I am wondering whether it is possible to reuse or retrain a fine-tuned model with a new set of labels (the new set may contain new labels, or be a subset of the labels used to fine-tune the model)?
What I am trying to do is fine-tune pre-trained models for a task (e.g. NER) on a domain-free dataset, then reuse/retrain this fine-tuned model for a similar task in a more specific domain (e.g. NER for healthcare), where the set of labels may not be the same.
I already tried to fine-tune a BERT model to do NER on WNUT17 data based on the token classification example in the Transformers GitHub. After that, I tried to retrain the fine-tuned model by adding a new label and providing training data that has this label, and the training failed with this error:
```
RuntimeError: Error(s) in loading state_dict for BertForTokenClassification:
size mismatch for classifier.weight: copying a param with shape torch.Size([13, 1024]) from checkpoint, the shape in current model is torch.Size([15, 1024]).
size mismatch for classifier.bias: copying a param with shape torch.Size([13]) from checkpoint, the shape in current model is torch.Size([15]).
```
Is it possible to do this with Transformers and if so how? Maybe there is a method that can do something like [this](https://spacy.io/api/entityrecognizer#add_label)(the method is from spaCy). Thank you in advance!
I already posted this in the forum:
[Retrain/reuse fine-tuned models on a different set of labels](https://discuss.huggingface.co/t/retrain-reuse-fine-tuned-models-on-different-set-of-labels/346) | 07-20-2020 09:44:24 | 07-20-2020 09:44:24 | @kevin-yauris I had a similar problem with retraining fine-tuned model. Here is what I have done.
Do not pass config parameter when creating your model with `from_pretrained()`. Just initialize it with something like this:
```
model = AutoModelForTokenClassification.from_pretrained(
model_name,
from_tf=bool(".ckpt" in model_name),
cache_dir=cache_dir,
)
```
Then, you will need to change the last layer in the model. I was using PyTorch to fine-tune a blank model initially, therefore these steps will work for PyTorch models.
The last layer in the `TokenClassification` model is called `classifier`. It is simply a linear layer, so you can create a new one with the correct shape and randomized weights and assign it to the model's `classifier` attribute. Say my layer was (768, 5) with the initial 5 classes, and now I want 9, so I make a final layer with shape (768, 9).
```
# reinitialize the final classification layer to match the new number of labels
model.classifier = torch.nn.Linear(in_features=model.classifier.in_features, out_features=config.num_labels, bias=True)
model.config = config
model.num_labels = config.num_labels
```
Since you will be loading the config file from the fine-tuned model when initializing, you also want to set the model's config to your current one with the new classes, so that the correct config gets exported after your model is trained. You will also want to modify the model's `num_labels`, since it was initialized with the old number of classes from the old config.
<|||||>Hi @TarasPriadka thank you for answering
I also did the same thing that you did, but with TensorFlow: https://discuss.huggingface.co/t/retrain-reuse-fine-tuned-models-on-different-set-of-labels/346/5?u=kevinyauris.
I forgot about `model.num_labels` though, thank you for the catch.
I wonder if there is another way to do it, since if we replace the last layer with randomized weights we cannot reuse the learned weights for the labels that are the same as the previous labels/classes.
Let's say there are 3 classes in the initial model and now I want to add 1 more class but the other classes are the same. If we use this method all weights for the last layer are randomized and we need to fine-tune the model with all the data again instead of just give train data for the new class.<|||||>@kevin-yauris I've seen your forum post since I've been looking for a solution. My idea is that you already have an `id2label` and `label2id` in the model, so you could find if the incoming labels are already trained in the fine-tuned model. You find those which are not and you add randomized layers for them. However I am not sure how you can take a layer, and then just add randomized rows to it.<|||||>Hi @TarasPriadka ,
Thanks for sharing the solution. I followed the same steps which solved this error
```
RuntimeError: Error(s) in loading state_dict for BertForTokenClassification:
size mismatch for classifier.weight: copying a param with shape torch.Size([17, 1024]) from checkpoint, the shape in current model is torch.Size([13, 1024]).
size mismatch for classifier.bias: copying a param with shape torch.Size([17]) from checkpoint, the shape in current model is torch.Size([13]).
```
but now it throws another error:
```
model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None
File "/usr/local/lib/python3.7/site-packages/transformers/trainer.py", line 514, in train
optimizer.step()
File "/usr/local/lib/python3.7/site-packages/torch/optim/lr_scheduler.py", line 67, in wrapper
return wrapped(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/transformers/optimization.py", line 244, in step
exp_avg.mul_(beta1).add_(grad, alpha=1.0 - beta1)
RuntimeError: The size of tensor a (17) must match the size of tensor b (13) at non-singleton dimension 0
```
I had first trained the model on a dataset having 17 classes and now I want to transfer this model to the 2nd dataset which has 13 labels.
Do we have to change the num_labels for any other layer ?
Thanks,<|||||>@vikas95 I am not sure, but just changing the model's `num_labels` seemed to be working for me. However, I was scaling up labels, not reducing them. I would assume that it should have the same solution. Maybe you can share your model's layers before and after applying my fix with `print(model)`, and we can take a look into a possible solution.<|||||>Hi @TarasPriadka ,
Thanks for the suggestion, I printed the model after loading the checkpoint and after updating the classification layer.
The classification layer output dimension is changing from your mentioned solution i.e.,
initially, after loading the checkpoint with
```
model = AutoModelForTokenClassification.from_pretrained(
    model_args.model_name_or_path,
    from_tf=bool(".ckpt" in model_args.model_name_or_path),
    cache_dir=model_args.cache_dir,
)
```
the classification layer size is `(classifier): Linear(in_features=1024, out_features=17, bias=True)`, and after updating the classification layer the size is `(classifier): Linear(in_features=1024, out_features=13, bias=True)`.
The rest of the layers look similar, but I am still not sure why it's throwing the previously mentioned error.
-Vikas<|||||>@vikas95, so the shape of the model is fine. The issue is in `exp_avg.mul_(beta1).add_(grad, alpha=1.0 - beta1)`. The exception highlights that `exp_avg` is of size 17 and its trying to add `grad` which is 13. So the problem is in `exp_avg`, since it wasn't updated along with everything else. Can you share the whole chunk of code where you initialize the model, trainer, etc?<|||||>Hi @TarasPriadka ,
Here is the part where I initialize the model (which is from run_ner.py (https://github.com/huggingface/transformers/blob/6e8a38568eb874f31eb49c42285c3a634fca12e7/examples/token-classification/run_ner.py#L158)) -
```
labels = get_labels(data_args.labels)
label_map = {i: label for i, label in enumerate(labels)}
num_labels = len(labels)
config = AutoConfig.from_pretrained(
    model_args.config_name if model_args.config_name else model_args.model_name_or_path,
    num_labels=num_labels,
    id2label=label_map,
    label2id={label: i for i, label in enumerate(labels)},
    cache_dir=model_args.cache_dir,
)
tokenizer = AutoTokenizer.from_pretrained(
    model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,
    cache_dir=model_args.cache_dir,
    use_fast=model_args.use_fast,
)
model = AutoModelForTokenClassification.from_pretrained(
    model_args.model_name_or_path,
    from_tf=bool(".ckpt" in model_args.model_name_or_path),
    cache_dir=model_args.cache_dir,
)

model.classifier = torch.nn.Linear(in_features=model.classifier.in_features, out_features=config.num_labels, bias=True)
model.config = config
model.num_labels = config.num_labels
```
<|||||>@vikas95 Can you also share the trainer code<|||||>@TarasPriadka - Its the same as in run_ner.py
I haven't changed any other part of the code.
<|||||>@vikas95 can you check if in your model folder you have this file `optimizer.pt` and `scheduler.pt`
<|||||>@TarasPriadka ,
Thanks for the help, I was giving a specific checkpoint directory as the model path i.e., "datasetA_model/checkpoint-6000/" which had both optimizer.pt and scheduler.pt
but then I changed the model path to just "datasetA_model/" and it works fine with no errors.
I am guessing that if I just give the "datasetA_model/" as model path then it would select the highest checkpoint ?
Anyway, thanks a lot for looking at the problem and for all the quick responses and help 😬
<|||||>@vikas95 This was a great deal of fun. When you are running
```
trainer.train(
model_path=model_name if os.path.isdir(model_name) else None
)
```
trainer loads in those files, and initializes Adam optimizer with them. Optimizer breaks since you are changing the shape of the output layer, but optimizer was initialized with the other shape. What you can do, is either delete that file, or just run `trainer.train()` without parameters.<|||||>Cool, this makes sense.
Thanks again for the explanation. I was actually trying with just trainer.train() for last 30 minutes and it works fine.
Thanks again for all the help and explanations. <|||||>Does anyone know what is the alternative method in Pytorch?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Thank you all for this solution, it worked for me but I encountered another problem while training getting this error:
`ValueError: Expected input batch_size (3200) to match target batch_size (32).`
My batch size is indeed 32. If I change it to other value e.g. 16 the error will be:
`ValueError: Expected input batch_size (1600) to match target batch_size (16).` it always multiplies by 100 which is a weird behavior because when trying to run the exact same code but on an original pre-trained model (in my case is `xlm-roberta-base`), to fine-tune it on classification task, it works just fine.
Here is my code:
```
config = XLMRobertaConfig.from_pretrained("../xlm-roberta_domains_classifier/model", output_hidden_states=True,
num_labels=len(train_df.label.unique()),
id2label=id2label, label2id=label2id)
model = XLMRobertaForSequenceClassification.from_pretrained('../xlm-roberta_domains_classifier/model')
model.cuda()
model.classifier = torch.nn.Linear(in_features=model.classifier.out_proj.in_features, out_features=config.num_labels, bias=True)
model.config = config
model.num_labels = config.num_labels
tokenizer = XLMRobertaTokenizer.from_pretrained('../xlm-roberta_domains_classifier/model')
model.cuda()
```
Model summary:
```
XLMRobertaForSequenceClassification(
(roberta): RobertaModel(
(embeddings): RobertaEmbeddings(
(word_embeddings): Embedding(250002, 768, padding_idx=1)
(position_embeddings): Embedding(514, 768, padding_idx=1)
(token_type_embeddings): Embedding(1, 768)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(encoder): RobertaEncoder(
(layer): ModuleList(
(0): RobertaLayer(
(attention): RobertaAttention(
(self): RobertaSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): RobertaSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): RobertaIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): RobertaOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(1): RobertaLayer(
(attention): RobertaAttention(
(self): RobertaSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): RobertaSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): RobertaIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): RobertaOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(2): RobertaLayer(
(attention): RobertaAttention(
(self): RobertaSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): RobertaSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): RobertaIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): RobertaOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(3): RobertaLayer(
(attention): RobertaAttention(
(self): RobertaSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): RobertaSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): RobertaIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): RobertaOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(4): RobertaLayer(
(attention): RobertaAttention(
(self): RobertaSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): RobertaSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): RobertaIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): RobertaOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(5): RobertaLayer(
(attention): RobertaAttention(
(self): RobertaSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): RobertaSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): RobertaIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): RobertaOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(6): RobertaLayer(
(attention): RobertaAttention(
(self): RobertaSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): RobertaSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): RobertaIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): RobertaOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(7): RobertaLayer(
(attention): RobertaAttention(
(self): RobertaSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): RobertaSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): RobertaIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): RobertaOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(8): RobertaLayer(
(attention): RobertaAttention(
(self): RobertaSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): RobertaSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): RobertaIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): RobertaOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(9): RobertaLayer(
(attention): RobertaAttention(
(self): RobertaSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): RobertaSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): RobertaIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): RobertaOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(10): RobertaLayer(
(attention): RobertaAttention(
(self): RobertaSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): RobertaSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): RobertaIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): RobertaOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(11): RobertaLayer(
(attention): RobertaAttention(
(self): RobertaSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): RobertaSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): RobertaIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): RobertaOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
)
(classifier): Linear(in_features=768, out_features=12, bias=False)
)
```
Data preparation:
```
train_encoding = tokenizer(train_df.text.to_list(), return_tensors='pt', padding=True, truncation=True).to(device)
train_input_ids = train_encoding['input_ids'].to(device)
train_attention_mask = train_encoding['attention_mask'].to(device)
train_labels = torch.tensor(train_df.label.to_list()).unsqueeze(0).to(device)[0]
val_encoding = tokenizer(val_df.text.to_list(), return_tensors='pt', padding=True, truncation=True).to(device)
val_input_ids = val_encoding['input_ids'].to(device)
val_attention_mask = val_encoding['attention_mask'].to(device)
val_labels = torch.tensor(val_df.label.to_list()).unsqueeze(0).to(device)[0]
test_encoding = tokenizer(test_df.text.to_list(), return_tensors='pt', padding=True, truncation=True).to(device)
test_input_ids = test_encoding['input_ids'].to(device)
test_attention_mask = test_encoding['attention_mask'].to(device)
test_labels = torch.tensor(test_df.label.to_list()).unsqueeze(0).to(device)[0]
batch_size = 32
train_data = TensorDataset(train_input_ids, train_attention_mask, train_labels)
train_sampler = RandomSampler(train_data)
train_dataloader = DataLoader(train_data, sampler=train_sampler, batch_size=batch_size)
validation_data = TensorDataset(val_input_ids, val_attention_mask, val_labels)
validation_sampler = SequentialSampler(validation_data)
validation_dataloader = DataLoader(validation_data, sampler=validation_sampler, batch_size=batch_size)
test_data = TensorDataset(test_input_ids, test_attention_mask, test_labels)
test_sampler = SequentialSampler(test_data)
test_dataloader = DataLoader(test_data, sampler=test_sampler, batch_size=batch_size)
```
Training logic:
```
optimizer = AdamW(model.parameters(),
lr = 4e-5,
eps = 1e-8 # args.adam_epsilon - default is 1e-8.
)
from transformers import get_linear_schedule_with_warmup
epochs = 3
total_steps = len(train_dataloader) * epochs
scheduler = get_linear_schedule_with_warmup(optimizer,
num_warmup_steps = 0, # Default value in run_glue.py
num_training_steps = total_steps)
seed_val = 42
random.seed(seed_val)
np.random.seed(seed_val)
torch.manual_seed(seed_val)
torch.cuda.manual_seed_all(seed_val)
# Store the average loss after each epoch so we can plot them.
loss_values = []
# For each epoch...
for epoch_i in range(0, epochs):
# ========================================
# Training
# ========================================
print("")
print('======== Epoch {:} / {:} ========'.format(epoch_i + 1, epochs))
print('Training...')
t0 = time.time()
total_loss = 0
model.train()
for step, batch in enumerate(train_dataloader):
if step % 50 == 0 and not step == 0:
elapsed = format_time(time.time() - t0)
print(' Batch {:>5,} of {:>5,}. Elapsed: {:}.'.format(step, len(train_dataloader), elapsed))
b_input_ids = batch[0].to(device)
b_input_mask = batch[1].to(device)
b_labels = batch[2].to(device)
model.zero_grad()
outputs = model(b_input_ids,
token_type_ids=None,
attention_mask=b_input_mask,
labels=b_labels)
loss = outputs[0]
total_loss += loss.item()
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
optimizer.step()
scheduler.step()
avg_train_loss = total_loss / len(train_dataloader)
loss_values.append(avg_train_loss)
print("")
print(" Average training loss: {0:.2f}".format(avg_train_loss))
print(" Training epcoh took: {:}".format(format_time(time.time() - t0)))
# ========================================
# Validation
# ========================================
print("")
print("Running Validation...")
t0 = time.time()
model.eval()
eval_loss, eval_accuracy = 0, 0
nb_eval_steps, nb_eval_examples = 0, 0
for batch in validation_dataloader:
batch = tuple(t.to(device) for t in batch)
b_input_ids, b_input_mask, b_labels = batch
with torch.no_grad():
outputs = model(b_input_ids,
token_type_ids=None,
attention_mask=b_input_mask)
logits = outputs[0]
logits = logits.detach().cpu().numpy()
label_ids = b_labels.to('cpu').numpy()
tmp_eval_accuracy = flat_accuracy(logits, label_ids)
eval_accuracy += tmp_eval_accuracy
nb_eval_steps += 1
print(" Accuracy: {0:.2f}".format(eval_accuracy/nb_eval_steps))
print(" Validation took: {:}".format(format_time(time.time() - t0)))
print("")
print("Training complete!")
```
The error occurs when reaching this line:
```
outputs = model(b_input_ids,
token_type_ids=None,
attention_mask=b_input_mask,
labels=b_labels)
``` |
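A likely cause, assuming the stock `XLMRobertaForSequenceClassification`: its original `classifier` is a classification head that pools the first (`<s>`) token before projecting, so replacing the whole head with a bare `nn.Linear` makes the model emit one logit vector per token. The flattened logits then have `batch_size * seq_len` rows (32 * 100 = 3200, with an apparent padded length of 100) while the labels only have `batch_size` rows. A hedged sketch of resizing only the final projection instead of replacing the whole head (meant to replace the `model.classifier = torch.nn.Linear(...)` line, reusing `model` and `config` from the snippet above):
```python
import torch

# Keep the existing classification head (it pools the <s> token internally)
# and only resize its final projection to the new number of labels.
head = model.classifier                      # the original classification head
head.out_proj = torch.nn.Linear(
    in_features=head.out_proj.in_features,
    out_features=config.num_labels,
    bias=True,
)
model.config = config
model.num_labels = config.num_labels
```
This keeps the logits shaped `(batch_size, num_labels)`, matching the targets.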
transformers | 5,904 | closed | RobertaTokenizerFast unexpectedly quits when creating a TextDataset | # 🐛 Bug
When creating a `TextDataset` using `RobertaTokenizerFast` my program unexpectedly dies. (Not so with `RobertaTokenizer`).
## Information
Model I am using: RoBERTa
Language I am using the model on: English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [X] an official GLUE/SQUaD task: [language modelling](https://github.com/huggingface/transformers/blob/33d3072e1c54bcd235447b98c6dea1b4cb71234c/examples/run_lm_finetuning.py)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
from transformers import AutoTokenizer, TextDataset
tokenizer = AutoTokenizer.from_pretrained("roberta-base", use_fast=True)
train_dataset = TextDataset(
tokenizer=tokenizer,
file_path="/home/ubuntu/data/wikitext-103-raw/wiki.train.raw",
block_size=-1,
overwrite_cache=False,
)
print(train_dataset)
```
## Expected behavior
Creation of the training dataset, not having the process killed. eg:
```
<transformers.data.datasets.language_modeling.TextDataset object at 0x7f138a1fd2b0>
```
## Environment info
- `transformers` version: 2.11.0
- Platform: Linux-5.3.0-1030-aws-x86_64-with-debian-buster-sid
- Python version: 3.7.7
- PyTorch version (GPU?): 1.5.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 07-20-2020 07:36:22 | 07-20-2020 07:36:22 | @n1t0 <|||||>This seems to work for me, I guess it crashes because you don't have enough memory. Unfortunately `TextDataset` has not been optimized for fast tokenizers yet, so it does a lot more work than needed when using them. It's probably better to use python tokenizers for now with `TextDataset`.
Also, maybe the [huggingface/nlp](https://github.com/huggingface/nlp) library might be better suited here. cc @lhoestq <|||||>You could try
```python
from transformers import AutoTokenizer
from nlp import load_dataset
tokenizer = AutoTokenizer.from_pretrained("roberta-base", use_fast=True)
dataset = load_dataset("text", data_files="path/to/wiki.train.raw", split="train")
tokenized_dataset = dataset.map(lambda ex: tokenizer(ex["text"]), batched=True)
print(tokenized_dataset[0]["input_ids"])
```
We're still working on making it as fast as we can, but at least you won't have any memory issues.<|||||>Re @n1t0 comment: "I guess it crashes because you don't have enough memory" this is correct. (I was hoping I could get away with 61.0 GiB, the standard for an AWS `p3.2xlarge`.)
Re @lhoestq your code ran without errors for me. Thanks!
I did get a lot of the [`Token indices sequence length is longer than the specified maximum sequence length for this model (522 > 512). Running this sequence through the model will result in indexing errors`](https://github.com/huggingface/transformers/issues/1791) warnings which I wasn't getting before.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
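Regarding the length warning: it likely just means that some raw lines exceed the model's 512-token limit. If truncation is acceptable for the use case (for LM pretraining you would usually rather chunk long texts into blocks), a hedged tweak to the `map` call above would be:
```python
tokenized_dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    batched=True,
)
```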
|
transformers | 5,903 | closed | [WIP] Add Theseus Compression | `transformers.theseus` provides the implementation for BERT-of-Theseus, LayerDrop and Mixout.
Original BERT-of-Theseus authors: @JetRunner @MichaelZhouwang
| 07-20-2020 07:22:25 | 07-20-2020 07:22:25 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5903?src=pr&el=h1) Report
> Merging [#5903](https://codecov.io/gh/huggingface/transformers/pull/5903?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/492bb6aa486856f8243dfeb533ed1b23e996e403?el=desc) will **decrease** coverage by `2.80%`.
> The diff coverage is `83.75%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5903?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5903 +/- ##
==========================================
- Coverage 80.12% 77.31% -2.81%
==========================================
Files 169 152 -17
Lines 32317 26290 -6027
==========================================
- Hits 25893 20326 -5567
+ Misses 6424 5964 -460
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5903?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/theseus/theseus\_list.py](https://codecov.io/gh/huggingface/transformers/pull/5903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90aGVzZXVzL3RoZXNldXNfbGlzdC5weQ==) | `67.64% <67.64%> (ø)` | |
| [src/transformers/theseus/theseus\_module.py](https://codecov.io/gh/huggingface/transformers/pull/5903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90aGVzZXVzL3RoZXNldXNfbW9kdWxlLnB5) | `88.88% <88.88%> (ø)` | |
| [src/transformers/theseus/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/5903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90aGVzZXVzL19faW5pdF9fLnB5) | `100.00% <100.00%> (ø)` | |
| [src/transformers/theseus/layerdrop\_list.py](https://codecov.io/gh/huggingface/transformers/pull/5903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90aGVzZXVzL2xheWVyZHJvcF9saXN0LnB5) | `100.00% <100.00%> (ø)` | |
| [src/transformers/theseus/mixout\_list.py](https://codecov.io/gh/huggingface/transformers/pull/5903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90aGVzZXVzL21peG91dF9saXN0LnB5) | `100.00% <100.00%> (ø)` | |
| [src/transformers/theseus/theseus\_errors.py](https://codecov.io/gh/huggingface/transformers/pull/5903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90aGVzZXVzL3RoZXNldXNfZXJyb3JzLnB5) | `100.00% <100.00%> (ø)` | |
| [src/transformers/commands/env.py](https://codecov.io/gh/huggingface/transformers/pull/5903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9lbnYucHk=) | `0.00% <0.00%> (-100.00%)` | :arrow_down: |
| [src/transformers/commands/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/5903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9fX2luaXRfXy5weQ==) | `0.00% <0.00%> (-100.00%)` | :arrow_down: |
| [src/transformers/commands/transformers\_cli.py](https://codecov.io/gh/huggingface/transformers/pull/5903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy90cmFuc2Zvcm1lcnNfY2xpLnB5) | `0.00% <0.00%> (-87.50%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.68%)` | :arrow_down: |
| ... and [166 more](https://codecov.io/gh/huggingface/transformers/pull/5903/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5903?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5903?src=pr&el=footer). Last update [492bb6a...20dd34f](https://codecov.io/gh/huggingface/transformers/pull/5903?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>which examples have you tested this with?<|||||>> which examples have you tested this with?
run_glue and run_ner but more to come!<|||||>Want an early review for this? Would be wonderful! And it can save me some time before I do the docs. @sshleifer <|||||>> run_glue and run_ner but more to come!
:heart_eyes: can't wait to re-fine-tune my NER models :hugs: <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Well I've been really busy recently but don't close it for me stalebot!<|||||>Thanks for reopening it @LysandreJik! I was stuck with some details and I'll probably get it done soon.<|||||>Sounds great, looking forward to it!<|||||>Will move to an independent package. Closing this. |
transformers | 5,902 | closed | 🐛 BART : Same representations for different `<s>` tokens | # 🐛 Bug
## Context
I'm trying to use BART for a sentence classification task. So I encode the input with the following format :
```
<s> Sen1 </s> <s> Sen2 </s> <s> Sen3 </s> ...
```
And use `<s>` as sentence representation (from the encoder, not decoder). Then I classify these representations.
---
I trained my model but the classification give random choice. After debugging, I noticed that the encoder produce always the same representation for `<s>` token.
## Bug
**The representation of `<s>` token is always the same, no matter where they appear in the input.**
Here is a notebook reproducing the issue : [Colab](https://colab.research.google.com/drive/1mqKKFAEGEwa5XbkJtm7_VrmQ8L3_0Bnt?usp=sharing)
In this notebook I simply encode an input, modify the input to add an additional `<s>` token, forward it through BART and compare the encoder representation of `<s>`. It gives me :
```
tensor([-0.0097, 0.0075, 0.0086, ..., 0.0041, -0.0085, -0.0011],
grad_fn=<SelectBackward>)
tensor([-0.0097, 0.0075, 0.0086, ..., 0.0041, -0.0085, -0.0011],
grad_fn=<SelectBackward>)
```
---
Now if I do the same with `</s>` token, their representation is different :
```
tensor([ 0.0658, 0.0161, -0.0062, ..., -0.0536, -0.0515, 0.1837],
grad_fn=<SelectBackward>)
tensor([ 0.0627, 0.0576, 0.0408, ..., -0.0406, -0.0765, 0.1689],
grad_fn=<SelectBackward>)
```
Which makes sense, because they represent different things...
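For reference, a rough standalone version of the notebook's check (the model name, the direct call to `model.encoder`, and the output indexing are assumptions that may need adjusting across library versions):
```python
import torch
from transformers import BartTokenizer, BartModel

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartModel.from_pretrained("facebook/bart-large").eval()

text = "<s> First sentence. </s> <s> Second sentence. </s>"
ids = tokenizer(text, add_special_tokens=False, return_tensors="pt")["input_ids"]

with torch.no_grad():
    # Assumes the first element of the encoder output is the last hidden state,
    # shaped (batch, seq_len, hidden).
    enc = model.encoder(input_ids=ids)[0]

bos_positions = (ids[0] == tokenizer.bos_token_id).nonzero(as_tuple=True)[0]
for pos in bos_positions:
    print(enc[0, pos, :5])  # identical vectors at every <s> position reproduce the issue
```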
## Note 1
I think this is a bug because I remember a few months ago, I tried to do the same (use `<s>` representation for classification) with fairseq version of BART, and it worked...
## Note 2
I'm aware the example in the Notebook is using just the pre-trained version of BART, so the `<s>` representation does not represent sentence. But even after training the model, the exact same behavior arise.
@sshleifer | 07-20-2020 05:30:12 | 07-20-2020 05:30:12 | Very strange indeed. Nothing comes to mind, but if you can show the fairseq discrepancies I can take a look.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>@astariul Hi I also encounter this problem. The finetuned BART just gives the same output for `<s>` under eval mode, have you fixed it? <|||||>@turing-yfqiu I'm sorry it's long time ago, I don't remember how I did.... |
transformers | 5,901 | closed | How can I check the loss during pretraing huggingface/transformers |
thanks in advance.
I trained a RoBERTa model from scratch.
But I can't check the training loss during pretraining.
I did it by referring to the link below:
https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb
In the above link, the loss is printed every 500 steps,
but when I ran it, there was no loss printed.
Iteration: 100%|█████████▉| 20703/20711 [4:42:54<00:07, 1.14it/s][A
Iteration: 100%|█████████▉| 20704/20711 [4:42:54<00:05, 1.24it/s][A
Iteration: 100%|█████████▉| 20705/20711 [4:42:55<00:05, 1.20it/s][A
Iteration: 100%|█████████▉| 20706/20711 [4:42:56<00:04, 1.18it/s][A
Iteration: 100%|█████████▉| 20707/20711 [4:42:57<00:03, 1.19it/s][A
Iteration: 100%|█████████▉| 20708/20711 [4:42:58<00:02, 1.16it/s][A
Iteration: 100%|█████████▉| 20709/20711 [4:42:59<00:01, 1.14it/s][A
Iteration: 100%|█████████▉| 20710/20711 [4:43:00<00:00, 1.13it/s][A
Iteration: 100%|██████████| 20711/20711 [4:43:00<00:00, 1.45it/s][A
Iteration: 100%|██████████| 20711/20711 [4:43:00<00:00, 1.22it/s]
Epoch: 100%|██████████| 13/13 [61:14:16<00:00, 16952.06s/it]
Epoch: 100%|██████████| 13/13 [61:14:16<00:00, 16958.16s/it]
compress roberta.20200717.zip on ./pretrained
save roberta.20200717.zip on minio(petcharts)
stackoverflow link
https://stackoverflow.com/questions/62988081/checking-pretraining-loss-in-huggingface-transformers | 07-20-2020 03:09:13 | 07-20-2020 03:09:13 | training loss should be printed every 500 iterations, but there is no log during pretraining.
`parser.add_argument("--logsteps", help="logging steps", type=int, default=500)`<|||||>Hi! This wasn't intentional, so we've fixed it with #6097. If you rerun the script, you should see the losses now. |
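For anyone configuring this by hand rather than through the example script: with the `Trainer` API, periodic loss logging is controlled through `TrainingArguments`. A hedged sketch (argument names follow the 3.x API; `model`, `data_collator`, and `train_dataset` are assumed to exist already):
```python
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="./pretrained",
    num_train_epochs=13,
    per_device_train_batch_size=16,
    logging_steps=500,    # report the training loss every 500 optimization steps
)
trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=train_dataset,
)
trainer.train()
```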