repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 5,200 | closed | tokenizer.convert_ids_to_tokens(tokenizer.convert_tokens_to_ids(x)) returning a different result | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
Language I am using the model on (English, Chinese ...):
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
I have a trained Esperanto tokenizer; the training code was taken from the how_to_train example. I'm using `RobertaTokenizer` because `RobertaTokenizerFast` doesn't work with trainer.py yet (or didn't the last time I checked).
```python
from transformers import RobertaTokenizer
alt_tokenizer = RobertaTokenizer.from_pretrained("./drive/My Drive/EsperBERTo", max_len=512)
alt_tokenizer.convert_ids_to_tokens(alt_tokenizer.convert_tokens_to_ids(alt_tokenizer.tokenize('Neniu atendas la Hispanan Inkvizicion.')))
```
output is: ['Neniu', 'Ġatendas', 'Ġla', 'ĠHispan', 'an', 'ĠIn', 'kvizi', 'cion', '.']
## Expected behavior
The same output as the input.
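For context, a minimal sketch of the round trip with the stock `roberta-base` tokenizer (not the reporter's EsperBERTo files): the `Ġ` prefix is byte-level BPE's marker for a leading space, so the ids/tokens round trip above is lossless at the token level, and the original string is recovered with `convert_tokens_to_string` or `decode` rather than by comparing against the raw input.
```python
from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
tokens = tokenizer.tokenize("Neniu atendas la Hispanan Inkvizicion.")
ids = tokenizer.convert_tokens_to_ids(tokens)

# The ids <-> tokens round trip gives back exactly the tokens produced by tokenize().
assert tokenizer.convert_ids_to_tokens(ids) == tokens
# Joining the Ġ-marked tokens restores the original text.
print(tokenizer.convert_tokens_to_string(tokens))
```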
## Environment info
- `transformers` version: 2.11.0
- tokenizers version: 0.8.0rc1
- Platform: colab
- Python version: 3.6
- Using GPU in script?: No
| 06-22-2020 23:30:39 | 06-22-2020 23:30:39 | |
transformers | 5,199 | closed | Update README.md | Fix/add information in README.md | 06-22-2020 22:24:51 | 06-22-2020 22:24:51 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5199?src=pr&el=h1) Report
> Merging [#5199](https://codecov.io/gh/huggingface/transformers/pull/5199?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a34fb91d541bdb235a6c9fa96ecf11d51426ac84&el=desc) will **decrease** coverage by `0.89%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5199?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5199 +/- ##
==========================================
- Coverage 77.99% 77.10% -0.90%
==========================================
Files 138 138
Lines 23786 23786
==========================================
- Hits 18553 18340 -213
- Misses 5233 5446 +213
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5199?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5199/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `19.92% <0.00%> (-75.00%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5199/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.51% <0.00%> (-1.39%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5199/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.71% <0.00%> (-0.30%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5199?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5199?src=pr&el=footer). Last update [a34fb91...e374d82](https://codecov.io/gh/huggingface/transformers/pull/5199?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,198 | closed | [Reformer classification head] Implement the reformer model classification head for text classification | This PR is an effort to implement the classification head on top of the plain vanilla reformer model.
References have been taken from the Roberta and XLNet classification head to implement the reformer classification head.
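For readers who want a picture of what such a head typically looks like, here is a rough sketch modeled on the RoBERTa head; the `2 * config.hidden_size` input width (Reformer concatenates two hidden streams) and all names are assumptions rather than the PR's actual code.
```python
import torch
import torch.nn as nn

class ReformerClassificationHeadSketch(nn.Module):
    """Illustrative classification head on top of Reformer hidden states."""

    def __init__(self, config):
        super().__init__()
        self.dense = nn.Linear(2 * config.hidden_size, config.hidden_size)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.out_proj = nn.Linear(config.hidden_size, config.num_labels)

    def forward(self, hidden_states):
        x = hidden_states[:, 0, :]        # representation of the first token
        x = torch.tanh(self.dense(self.dropout(x)))
        return self.out_proj(self.dropout(x))
```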
Tried testing the implementation changes, but it fails with an error; link to the Google Colab: https://colab.research.google.com/drive/1KFsQxLqsMB6vBF4_bRmTFGhdGwkgx0zI?usp=sharing
Test scenario details:
- IMDB movie review dataset.
- The cell3 has the reformer model classification head code.
- The reviews are not longer than 512 words (assuming a few outliers, which can be skipped).
- Initializing the model with `config.axial_pos_shape` set to (16, 32), but the setting does not take effect and throws a runtime error (see the sketch after this list).
Please help.
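Regarding the `axial_pos_shape` point above, a hedged sketch of the constraint involved (an assumption-laden illustration, not a diagnosis of the Colab error): the product of the axial shape factors has to equal the padded input sequence length, and a checkpoint pretrained with a different shape cannot simply be reloaded with the new one.
```python
from transformers import ReformerConfig, ReformerModelWithLMHead

config = ReformerConfig.from_pretrained("google/reformer-crime-and-punishment")
config.axial_pos_shape = (16, 32)         # 16 * 32 must equal the padded sequence length
config.max_position_embeddings = 512

# Building the model from the modified config gives randomly initialized axial position
# embeddings; loading pretrained weights with a mismatched shape raises a size error,
# which is one plausible source of the runtime error mentioned above.
model = ReformerModelWithLMHead(config)
```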
| 06-22-2020 21:54:40 | 06-22-2020 21:54:40 | Looks cool :-) Will take a look soon!<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5198?src=pr&el=h1) Report
> Merging [#5198](https://codecov.io/gh/huggingface/transformers/pull/5198?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0267668c3d648c6e41afda97f5df8671ee880ac3&el=desc) will **increase** coverage by `0.35%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5198?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5198 +/- ##
==========================================
+ Coverage 77.01% 77.36% +0.35%
==========================================
Files 128 146 +18
Lines 21615 25991 +4376
==========================================
+ Hits 16646 20109 +3463
- Misses 4969 5882 +913
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5198?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/5198/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.24% <ø> (+0.11%)` | :arrow_up: |
| [src/transformers/benchmark/benchmark.py](https://codecov.io/gh/huggingface/transformers/pull/5198/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrLnB5) | `74.01% <ø> (+5.16%)` | :arrow_up: |
| [src/transformers/benchmark/benchmark\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5198/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3MucHk=) | `86.04% <ø> (+0.68%)` | :arrow_up: |
| [src/transformers/benchmark/benchmark\_args\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5198/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3NfdGYucHk=) | `87.50% <ø> (ø)` | |
| [src/transformers/benchmark/benchmark\_args\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5198/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3NfdXRpbHMucHk=) | `89.13% <ø> (-7.75%)` | :arrow_down: |
| [src/transformers/benchmark/benchmark\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5198/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3RmLnB5) | `61.53% <ø> (ø)` | |
| [src/transformers/benchmark/benchmark\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5198/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `69.50% <ø> (-3.60%)` | :arrow_down: |
| [src/transformers/configuration\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5198/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2FsYmVydC5weQ==) | `100.00% <ø> (ø)` | |
| [src/transformers/configuration\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5198/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `93.18% <ø> (+0.32%)` | :arrow_up: |
| [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5198/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `93.75% <ø> (+0.41%)` | :arrow_up: |
| ... and [113 more](https://codecov.io/gh/huggingface/transformers/pull/5198/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5198?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5198?src=pr&el=footer). Last update [223084e...d2f3839](https://codecov.io/gh/huggingface/transformers/pull/5198?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>I left a couple of comments that colud help you fix the PR :-) Also it would be great to add tests here<|||||>> I left a couple of comments that colud help you fix the PR :-) Also it would be great to add tests here
Could you please point to some reference code base, that I can look at. This will give me a good idea of the testing standard in the application.<|||||>Sure, here we go: https://github.com/huggingface/transformers/blob/1ae132a07d7f294cf58cd50f7db8723d00e282de/tests/test_modeling_bert.py#L361
and
https://github.com/huggingface/transformers/blob/1ae132a07d7f294cf58cd50f7db8723d00e282de/tests/test_modeling_bert.py#L520
This class should also be added to `all_model_classes` in the Reformer test.<|||||>@patrickvonplaten I tried to fix the test case for the reformer classification head, but it is failing in one of the common test methods,
`test_modeling_common.py` -> `test_hidden_states_output`,
where it tries to run the hidden-states test for the Reformer models; it fails only for the classification head.
Since this is a common workflow, I am wondering where I am going wrong. Also, the newly added QA head on the Reformer is not failing, which is quite surprising. It looks like it bypasses the QA head and skips the test.
Please let me know your thoughts/suggestions.
Thanks
Amit<|||||>I think we are very close to merging this PR :-) After the minor changes as described above, I think we are good to merge<|||||>@patrickvonplaten Implemented the changes as suggested :)<|||||>Waiting for another approval - LGTM<|||||>Awesome PR @as-stevens, do you have a new example notebook? If not I take it that it's possible to just import the class you created in your notebook and have added to the codebase.<|||||>@jstremme the link to the notebook, where I tried to fine-tune the model for IMDB text classification. Since there are not many pre-trained models for Reformer, I used the Crime and Punishment for classification.
https://colab.research.google.com/drive/1l61NccWTGMfNFPj1T8kvMjnik2iEQWfe
The model did not perform well, as it is not a bidirectionally trained model. The link https://github.com/huggingface/transformers/issues/5023 contains more information.
Thanks
Amit<|||||>Thanks very much @as-stevens. Great work, and I'm hopeful I'll be able to pretrain and use my own bi-directional Reformer for text classification by following a similar approach. Happy to share high-level details with the transformers community as I go.
Unfortunately, the links you shared don't work for me. Can you try updating? |
transformers | 5,197 | closed | batch_encode_plus() causes OOM, while encode_plus does not | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
I am running a sequence classification task using `DistilBertForSequenceClassification`. I follow `examples/text_classification/run_glue.py` and `src/transformers/data/processors/glue.py` to implement my data loading process. My dataset is a rather large one (~2.5 GB with 7M+ examples), compared to those of the GLUE tasks.
In the current `glue.py`, `_glue_convert_examples_to_features()` reads all the examples into a list and then calls `batch_encode_plus()` on that list. On my large dataset, this implementation caused an out-of-memory (OOM) error. Therefore, I switched to `encode_plus()` and called it on individual examples while looping through the dataset. `encode_plus()` did not cause OOM.
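A minimal sketch of that kind of incremental encoding loop (the helper name and chunk size are invented here, not taken from the issue):
```python
def encode_in_chunks(tokenizer, texts, chunk_size=10_000, **kwargs):
    # Tokenize a large corpus chunk by chunk so that peak memory stays bounded.
    for start in range(0, len(texts), chunk_size):
        yield tokenizer.batch_encode_plus(texts[start:start + chunk_size], **kwargs)
```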
I wonder if there is something wrong with `batch_encode_plus()` so that it cannot handle all the examples in a dataset at once? If that is the case, it might be a good idea to add a corresponding note to the documentation.
| 06-22-2020 21:51:48 | 06-22-2020 21:51:48 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>I'm also running out of memory using `BertTokenizerFast.batch_encode_plus()`. I'm using `BertTokenizer.batch_encode_plus()` now and it seems to be working, but it's very slow and single-threaded! I have 220 GB RAM and the dataset is under 2 GB 😞.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Any new developments?<|||||>any solutions to this? facing the same issue |
transformers | 5,196 | closed | [HANS] Fix label_list for RoBERTa/BART (class flipping) | The MNLI checkpoints ported from fairseq (more specifically RoBERTa and BART) have flipped classes. @sshleifer fixed it for `run_glue.py` last week (see https://github.com/huggingface/transformers/pull/5141); this PR fixes the same problem for `run_hans.py`.
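Schematically, the class flipping amounts to permuting the label list for the fairseq-ported checkpoints; the concrete orderings below are assumptions for illustration, not the actual patch:
```python
# Illustration only: fairseq-ported MNLI heads order their classes differently
# from the default GLUE processor, so the label list is permuted for them.
model_type = "roberta"                                           # hypothetical input
default_label_list = ["contradiction", "entailment", "neutral"]
fairseq_label_list = ["contradiction", "neutral", "entailment"]  # assumed ordering
label_list = fairseq_label_list if model_type in ("roberta", "bart") else default_label_list
```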
Now the HANS evaluation for [bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) gives:
```
Heuristic entailed results:
lexical_overlap: 0.9932
subsequence: 0.9996
constituent: 0.9982
Heuristic non-entailed results:
lexical_overlap: 0.8002
subsequence: 0.263
constituent: 0.2074
```
For [roberta-large-mnli](https://huggingface.co/roberta-large-mnli):
```
Heuristic entailed results:
lexical_overlap: 1.0
subsequence: 1.0
constituent: 1.0
Heuristic non-entailed results:
lexical_overlap: 0.8746
subsequence: 0.2804
constituent: 0.1056
``` | 06-22-2020 21:32:07 | 06-22-2020 21:32:07 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5196?src=pr&el=h1) Report
> Merging [#5196](https://codecov.io/gh/huggingface/transformers/pull/5196?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c439752482759c94784e11a87dcbf08ce69dccf3&el=desc) will **decrease** coverage by `0.08%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5196?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5196 +/- ##
==========================================
- Coverage 78.07% 77.99% -0.09%
==========================================
Files 138 138
Lines 23786 23786
==========================================
- Hits 18572 18552 -20
- Misses 5214 5234 +20
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5196?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5196/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-5.42%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5196/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `76.42% <0.00%> (-0.39%)` | :arrow_down: |
| [src/transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5196/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `78.61% <0.00%> (-0.20%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5196/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.86% <0.00%> (-0.15%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5196?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5196?src=pr&el=footer). Last update [c439752...9f1bf7e](https://codecov.io/gh/huggingface/transformers/pull/5196?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>LGTM, though if we could do it again we would just flip the final layer weights when converting models... (discussed recently with @LysandreJik and @thomwolf) |
transformers | 5,195 | closed | Add link to new community notebook (optimization) | Related to https://github.com/huggingface/transformers/issues/4842#event-3469184635
This notebook is about benchmarking model training with/without dynamic padding optimization.
https://github.com/ELS-RD/transformers-notebook
Using dynamic padding on MNLI provides a **4.7x training time reduction**, with the max pad length set to 512. The effect is strong because few examples in this dataset are >> 400 tokens. In practice it will depend on the dataset, but it always brings an improvement and, after more than 20 experiments listed in this [article](https://towardsdatascience.com/divide-hugging-face-transformers-training-time-by-2-or-more-21bf7129db9q-21bf7129db9e?source=friends_link&sk=10a45a0ace94b3255643d81b6475f409), it seems not to hurt performance.
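As a rough sketch of the batch-level ("dynamic") padding idea, a collate function along these lines pads each batch only to its longest member; the function name and the 512 cap are assumptions, not taken from the notebook:
```python
import torch

def dynamic_padding_collate(batch, tokenizer):
    texts, labels = zip(*batch)
    enc = tokenizer(list(texts), padding="longest", truncation=True,
                    max_length=512, return_tensors="pt")
    enc["labels"] = torch.tensor(labels)   # pad only to the longest sequence in this batch
    return enc
```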
Following advice from @patrickvonplaten I do the PR myself :-) | 06-22-2020 21:29:02 | 06-22-2020 21:29:02 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5195?src=pr&el=h1) Report
> Merging [#5195](https://codecov.io/gh/huggingface/transformers/pull/5195?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1c5cd8e5f59154905c5ae0f47a8c8905618a12ff&el=desc) will **increase** coverage by `0.01%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5195?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5195 +/- ##
==========================================
+ Coverage 77.98% 78.00% +0.01%
==========================================
Files 138 138
Lines 23786 23786
==========================================
+ Hits 18550 18554 +4
+ Misses 5236 5232 -4
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5195?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5195/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.18% <0.00%> (-0.13%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5195/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.00% <0.00%> (ø)` | |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5195/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.57% <0.00%> (+1.18%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5195?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5195?src=pr&el=footer). Last update [1c5cd8e...438d62f](https://codecov.io/gh/huggingface/transformers/pull/5195?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>That's really a great notebook and fits very well with what we are currently working on I think @mfuntowicz |
transformers | 5,194 | closed | [Use cache] Align logic of `use_cache` with output_attentions and output_hidden_states | This PR cleans the usage for `use_cache` the same way it is done for `output_hidden_states` and `output_attentions`.
The logic, as for all non-tensor function arguments, is:
1) if the function argument `use_cache` is specified, use it;
2) if not, fall back to `config.use_cache` (see the small sketch below).
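The same resolution order, as a tiny standalone sketch (names assumed, not the literal diff):
```python
def resolve_use_cache(use_cache, config):
    # An explicitly passed argument wins; otherwise fall back to the config value.
    return use_cache if use_cache is not None else config.use_cache
```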
**Break in Backward Compatibility**
There is a small break in backward compatibility, in that Bart now uses "use_cache=True" as a default value *if* both `input_ids` and `decoder_input_ids` are passed as an argument. Bart always used `use_cache=True` as a default for generation, so this should not concern many people IMO. | 06-22-2020 20:42:00 | 06-22-2020 20:42:00 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5194?src=pr&el=h1) Report
> Merging [#5194](https://codecov.io/gh/huggingface/transformers/pull/5194?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e9ef21175eed7121a3f785708f2264c61215cdc3&el=desc) will **increase** coverage by `0.05%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5194?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5194 +/- ##
==========================================
+ Coverage 77.95% 78.01% +0.05%
==========================================
Files 138 138
Lines 23772 23798 +26
==========================================
+ Hits 18531 18565 +34
+ Misses 5241 5233 -8
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5194?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.29% <100.00%> (+0.05%)` | :arrow_up: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `99.14% <100.00%> (+<0.01%)` | :arrow_up: |
| [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `85.80% <100.00%> (+0.36%)` | :arrow_up: |
| [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `84.32% <100.00%> (+0.15%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `97.66% <100.00%> (+0.01%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `95.09% <100.00%> (+0.36%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `91.41% <100.00%> (+0.10%)` | :arrow_up: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.51% <0.00%> (-1.39%)` | :arrow_down: |
| ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/5194/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5194?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5194?src=pr&el=footer). Last update [e9ef211...d02506e](https://codecov.io/gh/huggingface/transformers/pull/5194?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>I'm confused what you mean by backwards compatibility. As long as the tests pass this looks fine to me.
The reason we need to be able to set `use_cache` outside of config is that in the validation_step of the `SummarizationModule` we want to call generate, and in the training_step we want to call `forward(use_cache=False)`.
<|||||>Please check bart slow tests :)<|||||>> Please check bart slow tests :)
On CPU all slow tests pass. On GPU via `USE_CUDA=1` the test: `tests/test_modeling_bart.py::MBartIntegrationTests::test_enro_forward` fails, but it also uses half-precision on GPU and expects the same numbers as when not using half-precision. Do you think this is due to this PR @sshleifer ?<|||||>Sounds completely unrelated, unless any of them are super slow. I think the only risk is that you accidentally disable the cache.<|||||>> Great! Very cool to have added some tests as well. No tests were added for CTRL?
I think I was too lazy back when I added the tests for GPT2 and T5 -> will add tests in a separate PR for CTRL |
transformers | 5,193 | closed | Switch master/stable doc and add older releases | Switch the doc base url to the latest stable release and add master docs as well as some intermediate docs that were missing. | 06-22-2020 20:31:10 | 06-22-2020 20:31:10 | |
transformers | 5,192 | closed | Using segments ids in encoder-decoder model in generate function | # 🐛 Bug
## Information
I have implemented an encoder-decoder model where both the encoder and the decoder are BERT. In the encoder module I'm using segment ids (token_type_ids) and can pass them to the model easily in this way, and it works correctly:
```
kwargs = {'token_type_ids':segments_tensors}
outputs = model(input_ids=encoder_input, decoder_input_ids=decoder_input, **kwargs)[0]
```
but in the generate function I use the code below to pass the segment ids:
```
kwargs = {'token_type_ids':segments_tensors}
generated = model.generate(encoder_input, decoder_start_token_id=101,
num_return_sequences=16,num_beams=32, **kwargs)
```
I thought this should work but commenting or uncommenting kwargs shows me that no changes in output have happened. Is there another correct approach to give arguments of encoder module and token_type_ids through generate function or not? | 06-22-2020 20:07:12 | 06-22-2020 20:07:12 | `segments_tensors` are currently not implemented for `generate`. Why do you want to pass them to `generate`?<|||||>I'm working on a conditional dialogue task (knowledge grounded dialogue) in which I give the conversation history and fact sentence as inputs to the model. I want to show the model that fact sentence is different from conversation history, so I decided to use segment embedding to do this and learn different segment embeddings for history and fact sentence. in training I have no problem and pass token_type_ids as kwargs, but in inferencing phase, I have this mentioned problem. <|||||>To use `token_type_ids`, you would actually need to change the line in the generate function where the encoder is called to something like this:
`encoder_outputs: tuple = encoder(input_ids, attention_mask=attention_mask, token_type_ids=model_specific_kwargs['token_type_ids'])`
or maybe better:
`encoder_outputs: tuple = encoder(input_ids, attention_mask=attention_mask, token_type_ids=model_specific_kwargs.pop('token_type_ids', None))`
In case no `token_type_ids` are passed.
https://github.com/huggingface/transformers/blob/master/src/transformers/generation_utils.py#L385
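Assuming that one-line change is applied, the call from the original snippet would then look roughly like this (continuing the variables defined in the issue above; a hedged usage sketch, not tested code):
```python
generated = model.generate(
    input_ids=encoder_input,
    decoder_start_token_id=101,
    num_beams=32,
    num_return_sequences=16,
    token_type_ids=segments_tensors,   # forwarded to the encoder via model-specific kwargs
)
```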
<|||||>> To use `token_type_ids`, you would actually need to change the line in the generate function where the encoder is called to something like this:
> `encoder_outputs: tuple = encoder(input_ids, attention_mask=attention_mask, token_type_ids=model_specific_kwargs['token_type_ids'])`
>
> or maybe better:
> `encoder_outputs: tuple = encoder(input_ids, attention_mask=attention_mask, token_type_ids=model_specific_kwargs.pop('token_type_ids', None))`
> In case no `token_type_ids` are passed.
>
> https://github.com/huggingface/transformers/blob/master/src/transformers/generation_utils.py#L385
Thank you very much for your guidance! 🙏 |
transformers | 5,191 | closed | [Benchmark] Jetson Nano DistillBERT SQuAD benchmark | # 🖥 Benchmarking `transformers`
## Benchmark
DistilBERT SQuAD
## Set-up
- Jetson Nano
- Jetpack 4.4
- onnxruntime 1.13.1
Benchmark script https://gist.github.com/arijitx/1400d3d4e07fc517d6c5bfea506c2353
## Results
tokens/sec
| | Pytorch-GPU | Pytorch-CPU(4 cores) | onnx-CPU (4 core) | onnx-CUDA | onnx-TRT |
|--------------------|-------------|----------------------|-------------------|-----------|----------|
| DistilBERT SQuAD | 570 | 61 | 107 | 605 | fail |
| 06-22-2020 19:43:52 | 06-22-2020 19:43:52 | For tensorrt you need to manage dynamic axis with a script in the nuphar folder of onnx runtime repo (`symbolic_shape_infer.py`).
It gives best result for small batch (like single example).
It requires some customization in parameters because default one are not that good (check the Nvidia doc for the workspace size, etc.)
Of course, it requires a specific compilation but I suppose you did it.
For CPU you may want to install the onnx runtime without gpu support so you get openmp.<|||||>Ya I did `symbolic_shape_infer.py` but still for some reason, the model is not running with tensorrtProvider, it starts consuming memory and then crashes after the memory is loaded, I did try with workspace_size , iteration as mentioned in the [Onnx TensorRT Provider doc](https://github.com/microsoft/onnxruntime/blob/master/docs/execution_providers/TensorRT-ExecutionProvider.md) anything `ORT_TENSORRT_MAX_PARTITION_ITERATIONS > 1 ` crashes.
Still Jetpack 4.4 is not officially supported by onnx, and I couldn't find a way to export pytorch models with LongTensors directly to TensorRT engines :(
I am able to run successfully with the CUDAProvider; the CPU path can be optimized further with an OpenMP-specific build.<|||||>My own XP with onnx has been painful too :-)
Maybe you will be interested in this article: https://medium.com/@fanzongshaoxing/accelerate-pytorch-model-with-tensorrt-via-onnx-d5b5164b369 ?
It's about TensorRT in Python without ONNX Runtime (didn't try it yet but was curious).<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,190 | closed | [bart] add config.extra_pos_embeddings to facilitate reuse | Previously bart's LearnedPositionalEmbedding basically assumed pad_token_id>=1, and added 2 extra spaces of empty embeddings at the beginning. These positions were never used.
Since the blenderbot state dict has only 128 entries (exactly as many as it needs, whereas bart has 1026, 2 more than it needs), it cannot use `LearnedPositionalEmbedding` unless we either allow this offset to be set to 0, or add empty indices.
I prefer the former since the offset's motivation is not super clear to me.
I also move `create_position_ids_from_input_ids` into modeling_roberta.py, the only place it is used.
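A hedged sketch of the configurable-offset idea described above (class and argument names are assumptions, not the merged code):
```python
import torch
import torch.nn as nn

class OffsetPositionalEmbedding(nn.Embedding):
    """Learned positional embedding with a configurable number of unused leading slots."""

    def __init__(self, num_positions, embedding_dim, offset=2):
        # offset=2 mimics the fairseq/bart layout; offset=0 fits blenderbot's 128-entry table.
        super().__init__(num_positions + offset, embedding_dim)
        self.offset = offset

    def forward(self, input_ids):
        positions = torch.arange(input_ids.size(1), device=input_ids.device)
        return super().forward(positions + self.offset)
```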
| 06-22-2020 18:50:26 | 06-22-2020 18:50:26 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5190?src=pr&el=h1) Report
> Merging [#5190](https://codecov.io/gh/huggingface/transformers/pull/5190?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b28b53713161a6299c757c32f7179a2cb2d8cbd7&el=desc) will **decrease** coverage by `0.87%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5190?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5190 +/- ##
==========================================
- Coverage 77.96% 77.09% -0.88%
==========================================
Files 138 138
Lines 23838 23839 +1
==========================================
- Hits 18585 18378 -207
- Misses 5253 5461 +208
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5190?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5190/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.14% <ø> (-0.05%)` | :arrow_down: |
| [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5190/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `93.75% <100.00%> (+0.13%)` | :arrow_up: |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5190/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.24% <100.00%> (+<0.01%)` | :arrow_up: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5190/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `94.85% <100.00%> (+0.07%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5190/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `19.92% <0.00%> (-75.00%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5190/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.71% <0.00%> (-0.30%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5190/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `76.80% <0.00%> (+0.38%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5190?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5190?src=pr&el=footer). Last update [b28b537...00d97e5](https://codecov.io/gh/huggingface/transformers/pull/5190?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,189 | closed | Have documentation fail on warning | 06-22-2020 18:43:37 | 06-22-2020 18:43:37 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5189?src=pr&el=h1) Report
> Merging [#5189](https://codecov.io/gh/huggingface/transformers/pull/5189?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1262495a912b9cd97e2ae174fd627a9d8a502341&el=desc) will **increase** coverage by `36.63%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5189?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5189 +/- ##
===========================================
+ Coverage 41.35% 77.99% +36.63%
===========================================
Files 138 138
Lines 23772 23772
===========================================
+ Hits 9831 18540 +8709
+ Misses 13941 5232 -8709
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5189?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5189/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (+0.63%)` | :arrow_up: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5189/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `91.07% <0.00%> (+0.71%)` | :arrow_up: |
| [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/5189/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `27.63% <0.00%> (+1.31%)` | :arrow_up: |
| [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5189/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.37% <0.00%> (+1.44%)` | :arrow_up: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5189/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `76.42% <0.00%> (+2.66%)` | :arrow_up: |
| [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5189/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `94.52% <0.00%> (+2.73%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5189/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.81% <0.00%> (+3.54%)` | :arrow_up: |
| [src/transformers/benchmark/benchmark\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5189/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `69.77% <0.00%> (+5.54%)` | :arrow_up: |
| [src/transformers/configuration\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5189/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `96.42% <0.00%> (+10.71%)` | :arrow_up: |
| [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5189/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `76.84% <0.00%> (+12.63%)` | :arrow_up: |
| ... and [47 more](https://codecov.io/gh/huggingface/transformers/pull/5189/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5189?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5189?src=pr&el=footer). Last update [1262495...47a9e15](https://codecov.io/gh/huggingface/transformers/pull/5189?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>The `build_doc` test now fails when there's a warning, e.g. on the second commit:
```
reading sources... [ 85%] multilingual
reading sources... [ 87%] notebooks
reading sources... [ 89%] pretrained_models
reading sources... [ 91%] quickstart
reading sources... [ 93%] serialization
reading sources... [ 95%] summary
reading sources... [ 97%] torchscript
reading sources... [100%] usage
Warning, treated as error:
/home/circleci/transformers/src/transformers/modeling_albert.py:docstring of transformers.AlbertModel.forward:47:Unexpected indentation.
make: *** [Makefile:19: html] Error 2
Exited with code exit status 2
CircleCI received exit code 2
``` |
|
transformers | 5,188 | closed | Simplify LearnedPositionalEmbedding | `create_position_ids_from_input_ids` and `LearnedPositionalEmbedding`
were both copied from fairseq and need either much more documentation or much simpler logic. They were originally copied for Roberta and now are also used for bart. | 06-22-2020 18:34:15 | 06-22-2020 18:34:15 | |
transformers | 5,187 | closed | Add TF auto model to the docs + fix sphinx warnings (again) | The goal of this PR was just to add the automodel to the docs (as pointed out in #5145) but then I had 100 sphinx warnings that made me grumpy so I had to fix them (some of them from the auto model docs, but most of them from #4978).
As usual a few of them were harmless, others had a real impact on the docs, so I think we should make sure in the CI that there are no sphinx warnings to avoid making the docs bad by mistake. | 06-22-2020 18:03:05 | 06-22-2020 18:03:05 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5187?src=pr&el=h1) Report
> Merging [#5187](https://codecov.io/gh/huggingface/transformers/pull/5187?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ebc36108dc1c20985905c79f7d6a00f57f3cd3ae&el=desc) will **decrease** coverage by `0.89%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5187?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5187 +/- ##
==========================================
- Coverage 77.99% 77.10% -0.90%
==========================================
Files 138 138
Lines 23772 23772
==========================================
- Hits 18541 18329 -212
- Misses 5231 5443 +212
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5187?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5187/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `80.43% <ø> (ø)` | |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5187/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.23% <ø> (ø)` | |
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5187/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `87.50% <ø> (ø)` | |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5187/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `99.14% <ø> (ø)` | |
| [src/transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5187/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.70% <ø> (ø)` | |
| [src/transformers/modeling\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5187/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `80.19% <ø> (ø)` | |
| [src/transformers/modeling\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/5187/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mbGF1YmVydC5weQ==) | `84.12% <ø> (ø)` | |
| [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5187/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `85.43% <ø> (ø)` | |
| [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/5187/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `93.04% <ø> (ø)` | |
| [src/transformers/modeling\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5187/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `88.78% <ø> (ø)` | |
| ... and [24 more](https://codecov.io/gh/huggingface/transformers/pull/5187/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5187?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5187?src=pr&el=footer). Last update [ebc3610...cca9865](https://codecov.io/gh/huggingface/transformers/pull/5187?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,186 | closed | Trouble with PL Checkpoint loading after finetuning bart-large | Hi there,
I recently ran "finetune_bart.sh" with data from cnn_dm/train.source and saved checkpoints. I only have a GPU with 12 GB of memory, so I trained the model with batch_size 1 and without distributed training.
I predicted test summaries results on cnn_dm/test.source with:
```python
tokenizer = AutoTokenizer.from_pretrained(args.model_name_or_path)
checkpoints = list(sorted(glob.glob(os.path.join(args.output_dir, "checkpointepoch=*.ckpt"), recursive=True)))
model = model.load_from_checkpoint(checkpoints[-1])
model.eval()
model.freeze()
fout = Path("test_generation.txt").open("w", encoding="utf-8")
for batch in tqdm(list(chunks(examples, 1))):
inputs = tokenizer.batch_encode_plus([batch[0]], max_length=1024, return_tensors='pt')['input_ids']
outputs = model.text_predictions(inputs)
fout.write(outputs[0] + "\n")
fout.flush()
output_lns = [x.rstrip() for x in open("test_generation.txt").readlines()]
reference_lns = [x.rstrip() for x in open("cnn_dm/test.target").readlines()]
calculate_rouge(output_lns, reference_lns, "rouge_scores.txt")
```
After that I received results:
ROUGE_1:
AggregateScore(low=Score(precision=0.428813534233686, recall=0.38720984475275855, fmeasure=0.3951340300593005), mid=Score(precision=0.4311467253592325, recall=0.38942524434531234, fmeasure=0.3970368964972586), high=Score(precision=0.4336709710273236, recall=0.3917389347240298, fmeasure=0.39884900115657695))
ROUGE_2:
AggregateScore(low=Score(precision=0.166422532946031, recall=0.14864999757241987, fmeasure=0.15229217101061118), mid=Score(precision=0.16833011753361882, recall=0.1504195716038362, fmeasure=0.15396139813339343), high=Score(precision=0.17058789969308358, recall=0.15232853455768658, fmeasure=0.15590467051587376))
ROUGE_L:
AggregateScore(low=Score(precision=0.2791458979499142, recall=0.2529897770384392, fmeasure=0.25757140506162507), mid=Score(precision=0.28120564591223207, recall=0.25488090117768447, fmeasure=0.2593799995678622), high=Score(precision=0.2834430254354239, recall=0.25683467237590635, fmeasure=0.2611850779055858))
This result is worse than the reported cnn_dm results in the paper. I haven't changed any parameters yet. What is wrong with my training?
| 06-22-2020 17:28:19 | 06-22-2020 17:28:19 | How long did you train for?
Our `finetune.py` is not identical to the authors' finetuning script and might produce different results, but rouge2=15.3 suggests that either you didn't train for very long, or there is a bug in your/our code.
What is the `model.text_predictions` method?
<|||||>I trained about 3 days on single 12G Titan V GPU.
```
def text_predictions(self, input_ids):
generated_ids = self.model.generate(
input_ids=input_ids,
num_beams=1,
max_length=80,
repetition_penalty=2.5,
length_penalty=1.0,
early_stopping=True,
)
preds = [
self.tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True)
for g in generated_ids
]
return preds
```
This is my text_predictions function.<|||||>try
```python
self.model.generate(input_ids=input_ids, attention_mask=attention_mask)
```
This will use the beam search kwargs from model.config, which are much better than those.
<|||||>I changed my code:
```
if args.do_predict:
examples = [" " + x.rstrip() if "t5" in args.model_name_or_path else x.rstrip() for x in
open("cnn_dm/test.source").readlines()]
tokenizer = AutoTokenizer.from_pretrained(args.model_name_or_path)
model = model.load_from_checkpoint('bart_sum/checkpointepoch=2.ckpt').to(device)
torch.save(model.state_dict(), args.output_dir + '/pytorch_model.bin')
model.config.to_json_file(args.output_dir + '/config.json')
model = BartForConditionalGeneration.from_pretrained('bart_sum').to(device)
task_specific_params = model.config.task_specific_params
if task_specific_params is not None:
model.config.update(task_specific_params.get("summarization", {}))
model.eval()
fout = Path("test_generation.txt").open("w", encoding="utf-8")
for batch in tqdm(list(chunks(examples, 1))):
dct = tokenizer.batch_encode_plus(batch, max_length=1024, return_tensors="pt", pad_to_max_length=True).to(device)
summaries = model.generate(**dct)
dec = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summaries]
for hypothesis in dec:
fout.write(hypothesis + "\n")
fout.flush()
```
After that, I found generated sentences in dec is a random sentence. For example:
Everybody Everybody Everybody SUM SUM SUM sergeant sergeant sergeant noisy noisy noisy Myster Myster MysterJewishJewishJewish sergeant sergeant talks talks talks noisy noisy imitate imitate imitate sergeant sergeantestonesestonesestones noisy noisy overhe overhe overhe Palmer Palmer Palmer noisy noisyJewishJewish spawned spawned spawned sergeant sergeant Height Height Heightnornornor For For For manif manif manif onboard onboard onboardsharingsharingestonesestones selects selects selects electors electors electors noisy noisy sergeant sergeantWAY Height Height selects selects sergeant sergeantJewishJewish Appearance Appearance Appearance Myster Myster selects selects fearing fearing fearing framed framed framed summ summ summ sergeant sergeant Pistol Pistol Pistol sergeant sergeant Appearance Appearance sergeant sergeant bees bees beesestonesestones Islamists Islamists Islamists sergeant sergeantselectselectselect sergeant sergeant spawned spawned manif manifestonesestonesirdirdird
I think my finetuned model has totally failed, but I don't know what is wrong with it. I simply ran `finetune_bart.sh`. The only change is setting the training batch size to 1.
<|||||>Interesting. A Rouge2 score of ~15, which you had before, is a lot better than random, was that the same checkpoint?
Also, could you tell me the output for the following commands?
```bash
transformers-cli env
pip freeze | grep torch
```
<|||||>This is the same checkpoint. I think my previous code has some problems. I wrote a function "text_predictions" in the class SummarizationTrainer. This text_predictions function called `self.model.generate`. I think this `self.model` comes from the class BaseTransformer via `self.model = MODEL_MODES[mode].from_pretrained("facebook/bart-large")` instead of from my saved checkpoint. Am I right?
with running: `transformers-cli env`
```
- `transformers` version: 2.11.0
- Platform: Linux-4.15.0-106-generic-x86_64-with-debian-stretch-sid
- Python version: 3.6.10
- PyTorch version (GPU?): 1.5.0 (True)
- Tensorflow version (GPU?): 2.2.0 (True)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
with running `pip freeze | grep torch`:
```
pytorch-lightning==0.7.6
torch==1.5.0
torchvision==0.6.0a0+82fd1c8
```
<|||||>Yes that is correct. To reload a checkpoint, I have been using `resume_from_checkpoint` with good results. Also this should be fixed in the latest code, for future readers.
@williamFalcon in pl=0.7.6 is it possible to load a checkpoint without instantiating a trainer?
does `model = model.load_from_checkpoint('bart_sum/checkpointepoch=2.ckpt').to(device)` not work?<|||||>Then how could I generate samples from checkpoints?
```
trainer.resume_from_checkpoint = checkpoints[-1]
trainer.logger.log_hyperparams(model.hparams)
trainer.test(model)
```
This only provides me with a loss value without generating samples from a checkpoint.<|||||>You can use `model.model.generate(input_ids)` if `model` is a `pl.Module`.
You can also save the transformers model, and then use `run_eval.py`
Something like:
`$ mkdir output_dir`
```python
from pathlib import Path
save_dir = Path('output_dir')
save_dir.mkdir(exist_ok=True)
model.model.save_pretrained(save_dir)
```
Then follow instructions [here](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/README.md#evaluation-commands) for how to invoke `run_eval.py`
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,185 | closed | [T5] add missing docstring for some configurations | Add missing docs for T5 config. | 06-22-2020 16:49:11 | 06-22-2020 16:49:11 | |
transformers | 5,184 | closed | More clear error message in the use-case of #5169 | Supplying `is_pretokenized=True` to an encoding method means that the sequences are given as a list of words (words being strings).
Make the error message more clear in this case.
Fix #5169 | 06-22-2020 15:22:28 | 06-22-2020 15:22:28 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5184?src=pr&el=h1) Report
> Merging [#5184](https://codecov.io/gh/huggingface/transformers/pull/5184?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d2a7c86dc33d6def6dba44f6ed2b71e8a1644130&el=desc) will **decrease** coverage by `0.03%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5184?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5184 +/- ##
==========================================
- Coverage 78.04% 78.00% -0.04%
==========================================
Files 138 138
Lines 23766 23767 +1
==========================================
- Hits 18548 18540 -8
- Misses 5218 5227 +9
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5184?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `94.83% <100.00%> (+0.01%)` | :arrow_up: |
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-5.42%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `76.42% <0.00%> (-0.39%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.96% <0.00%> (ø)` | |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.57% <0.00%> (+1.18%)` | :arrow_up: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.90% <0.00%> (+1.38%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5184?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5184?src=pr&el=footer). Last update [d2a7c86...3c842db](https://codecov.io/gh/huggingface/transformers/pull/5184?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,183 | closed | Unable to load the reformer pre-trained model, connection broken after X% | # ❓ Questions & Help
I am trying to load the Reformer pre-trained model, but I am not able to do so. The load fails after some random X%. The error stack trace is:
ChunkedEncodingError: ('Connection broken: OSError("(10054, \'WSAECONNRESET\')",)', OSError("(10054, 'WSAECONNRESET')",))
Code to load the model (a simple cell with imports and just the line below):
modelWiki = ReformerModelWithLMHead.from_pretrained('google/reformer-enwik8')
The notebook is running behind the proxy, could that be an issue?
The SO - https://stackoverflow.com/questions/27333671/how-to-solve-the-10054-error
But it talks about connection refused by server.
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
| 06-22-2020 15:14:42 | 06-22-2020 15:14:42 | Not sure how to help you here. Can you load other models from pretrained? Like
```python
model = BertModel.from_pretrained("bert-base-uncased")
```
?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,182 | closed | MarianTokenizer.prepare_translation_batch uses new tokenizer API | new params:
```python
truncation_strategy="only_first",
padding="longest",
```
The old behavior was to pad to model_max_len then call `trim_batch`.
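A hedged usage sketch of the new behavior (the checkpoint name is just an example):
```python
from transformers import MarianMTModel, MarianTokenizer

tok = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-de")

# Batches are now padded only to the longest sentence in the batch.
batch = tok.prepare_translation_batch(["I am small.", "A much longer source sentence."])
translated = model.generate(**batch)
print([tok.decode(t, skip_special_tokens=True) for t in translated])
```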
I also added two tests that ensure that it works. | 06-22-2020 14:50:06 | 06-22-2020 14:50:06 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5182?src=pr&el=h1) Report
> Merging [#5182](https://codecov.io/gh/huggingface/transformers/pull/5182?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c4d4e8bdbd25d9463d41de6398940329c89b7fb6&el=desc) will **decrease** coverage by `0.01%`.
> The diff coverage is `66.66%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5182?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5182 +/- ##
==========================================
- Coverage 77.90% 77.88% -0.02%
==========================================
Files 140 140
Lines 24334 24335 +1
==========================================
- Hits 18957 18953 -4
- Misses 5377 5382 +5
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5182?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/5182/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `93.80% <66.66%> (+0.05%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5182/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5182/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.59% <0.00%> (+0.33%)` | :arrow_up: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5182/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.21% <0.00%> (+0.49%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5182/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.37% <0.00%> (+25.00%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5182?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5182?src=pr&el=footer). Last update [c4d4e8b...5195adc](https://codecov.io/gh/huggingface/transformers/pull/5182?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,181 | closed | Is it possible to mimic trim_batch using new tokenizer strategies? | I am trying to replace the old workflow of
calling `batch_encode_plus` to make tensors of shape `(n_examples, model_max_length)` and then calling `trim_batch` to reduce padding computation, with the new tokenizers kwargs.
Is this possible?
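For context, `trim_batch` is roughly the following helper; it just drops the columns that are entirely padding:
```python
def trim_batch(input_ids, pad_token_id, attention_mask=None):
    """Remove columns that are populated exclusively by pad_token_id."""
    keep_column_mask = input_ids.ne(pad_token_id).any(dim=0)
    if attention_mask is None:
        return input_ids[:, keep_column_mask]
    return input_ids[:, keep_column_mask], attention_mask[:, keep_column_mask]
```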
The following code does not seem to truncate inputs longer than 512 (the second assert breaks).
Attempt:
```python
from transformers import BartTokenizer
tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')
kw = dict(max_length=512,
pad_to_max_length=True, padding=True, return_tensors='pt', truncation='only_first')
batch = tokenizer(['tiny sentence 1', 'tiny_sentence2'],**kw)
assert batch.input_ids.shape[1] == 7, batch.input_ids.shape[1]
input_ids, mask = trim_batch(**batch, pad_token_id=tokenizer.pad_token_id)
assert input_ids.shape[1] == 7, batch.input_ids.shape[1]
batch_overflow = tokenizer(['tiny sentence 1'*1000, 'tiny_sentence2'], **kw)
assert batch_overflow.input_ids.shape[1] == 512, batch_overflow.input_ids.shape[1]
```
Traceback:
```python
assert batch_overflow.input_ids.shape[1] == 512, batch_overflow.input_ids.shape[1]
AssertionError: 3002
```
Help much appreciated, @mfuntowicz @thomwolf | 06-22-2020 14:46:02 | 06-22-2020 14:46:02 | Hi @sshleifer you should read the detailed description on the tokenizers refactoring PR https://github.com/huggingface/transformers/pull/4510#issue-421650163
Until it's added in the doc (will be soon), it's required reading for all core contributors of `transformers`.<|||||>Thanks. I read that, and am still somewhat confused about why I pass `truncation=True` and get entries that are longer than `tokenizer.max_model_length`. The PR description says:

Here is a simplified example:
```python
from transformers import BartTokenizer
tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')
assert tokenizer.model_max_length == 1024
# tokenizer.batch_encode_plus returns ids shaped (2, 1024)
batch_sentences = ['tiny sentence 1'*1000, 'tiny_sentence2']
ids = tokenizer.batch_encode_plus(batch_sentences, pad_to_max_length=True, max_length=tokenizer.model_max_length,
truncation=True, return_tensors='pt').input_ids
assert ids.shape[1] <= tokenizer.model_max_length, ids.shape[1]
# tokenizer.__call__ returns ids shaped (2, 3002)
ids = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors='pt',
max_length=tokenizer.model_max_length, ).input_ids
assert ids.shape[1] <= tokenizer.model_max_length, ids.shape[1]
```<|||||>I'll take a look |
transformers | 5,180 | closed | Which Marian version was used to train the Helsinki-NLP/* checkpoints? | Trying to find the **Marian version used to train Marian MT transformers**. This would help me understand the **benchmarks for translation times**. I see that multiple papers mention various benchmarks for CPU translation :
- https://www.aclweb.org/anthology/D19-5632.pdf - has the benchmarks for the latest update Marian v1.9
- https://www.aclweb.org/anthology/P18-4020.pdf - does not mention the version but I presume it uses versions between Marian v1.7 and Marian v1.5
**Marian v1.9 suggests considerably faster translation times than Marian v1.7 and Marian v1.5.**
Finding the version used for the huggingface transformers both in the huggingface and Helsinki NLP documentation hasn't been fruitful. Would be really helpful to have this answered. | 06-22-2020 14:04:10 | 06-22-2020 14:04:10 | Those models were trained by @jorgtied as part of the OPUS project. He might know the answer.<|||||>@sshleifer Do you know how can I fine tune the models with my specific data?<|||||>I just started a PR to support that, but it's still a week away. At the moment, your best bet is to modify `summarization/finetune.py`.
<|||||>> Trying to find the **Marian version used to train Marian MT transformers**. This would help me understand the **benchmarks for translation times**. I see that multiple papers mention various benchmarks for CPU translation :
>
> * https://www.aclweb.org/anthology/D19-5632.pdf - has the benchmarks for the latest update Marian v1.9
>
> * https://www.aclweb.org/anthology/P18-4020.pdf - does not mention the version but I presume it uses versions between Marian v1.7 and Marian v1.5
>
>
> **Marian v1.9 suggests considerably faster translation times than Marian v1.7 and Marian v1.5.**
>
> Finding the version used for the huggingface transformers both in the huggingface and Helsinki NLP documentation hasn't been fruitful. Would be really helpful to have this answered.
The version should be marked in the original model files distributed at https://github.com/Helsinki-NLP/Opus-MT. Most of them will be v1.7. I just started recently to use v1.9 for upcoming models.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,179 | closed | Model card for t5-base-finetuned-emotion (recognition) | 06-22-2020 12:48:27 | 06-22-2020 12:48:27 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5179?src=pr&el=h1) Report
> Merging [#5179](https://codecov.io/gh/huggingface/transformers/pull/5179?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/eb0ca71ef6772a1dd16ac152e1f7a07a9e1e6fda&el=desc) will **increase** coverage by `0.08%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5179?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5179 +/- ##
==========================================
+ Coverage 77.98% 78.06% +0.08%
==========================================
Files 138 138
Lines 23710 23710
==========================================
+ Hits 18490 18510 +20
+ Misses 5220 5200 -20
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5179?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5179/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `35.03% <0.00%> (+6.36%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5179?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5179?src=pr&el=footer). Last update [eb0ca71...a3d99bd](https://codecov.io/gh/huggingface/transformers/pull/5179?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 5,178 | closed | Add model cards for Microsoft's MiniLM | 06-22-2020 12:42:22 | 06-22-2020 12:42:22 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5178?src=pr&el=h1) Report
> Merging [#5178](https://codecov.io/gh/huggingface/transformers/pull/5178?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fa0be6d76187e0639851f6d762b9ffae7fbd9202&el=desc) will **increase** coverage by `0.61%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5178?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5178 +/- ##
==========================================
+ Coverage 77.75% 78.36% +0.61%
==========================================
Files 138 138
Lines 23710 23710
==========================================
+ Hits 18435 18581 +146
+ Misses 5275 5129 -146
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5178?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5178/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.81% <0.00%> (-0.15%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5178/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.28% <0.00%> (+0.12%)` | :arrow_up: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5178/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `95.18% <0.00%> (+0.37%)` | :arrow_up: |
| [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5178/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.28% <0.00%> (+0.82%)` | :arrow_up: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5178/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.90% <0.00%> (+1.38%)` | :arrow_up: |
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5178/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `56.68% <0.00%> (+28.02%)` | :arrow_up: |
| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/5178/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `96.00% <0.00%> (+68.00%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5178?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5178?src=pr&el=footer). Last update [fa0be6d...e38cb39](https://codecov.io/gh/huggingface/transformers/pull/5178?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 5,177 | closed | When is 2.12 coming out? | Hi when is 2.12 coming out? Cheers.
(Interested in AutoModelForCausalLM) | 06-22-2020 10:52:14 | 06-22-2020 10:52:14 | For now you can install from source to use `AutoModelForCausalLM`<|||||>Yes but you can't push out libraries with packages installed from master<|||||>Probably this week or the next!<|||||>+1<|||||>Version v3.0.0 was released this morning! |
transformers | 5,176 | closed | object returned by RobertaTokenizerFast() class are not serializable, | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): RobertaTokenizerFast()
Language I am using the model on (English, Chinese ...):
The problem arises when using:
* [x] the official example scripts: (give details below)
yes
## To reproduce
[Steps to reproduce the behavior: https://github.com/huggingface/tokenizers/issues/313#issue-642829111 ](https://github.com/huggingface/tokenizers/issues/313#issue-642829111)
| 06-22-2020 10:31:31 | 06-22-2020 10:31:31 | cc @mfuntowicz <|||||>This is fixed on master and should be in the next release |
transformers | 5,175 | closed | Update model card for COVID-QA model | Specify exact dataset that was used for crossvalidation. | 06-22-2020 08:57:20 | 06-22-2020 08:57:20 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5175?src=pr&el=h1) Report
> Merging [#5175](https://codecov.io/gh/huggingface/transformers/pull/5175?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/bc3a0c06075050d3de586c543e4ad6a7efc9260e&el=desc) will **decrease** coverage by `0.37%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5175?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5175 +/- ##
==========================================
- Coverage 78.31% 77.93% -0.38%
==========================================
Files 137 137
Lines 23475 23475
==========================================
- Hits 18384 18295 -89
- Misses 5091 5180 +89
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5175?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5175/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-28.03%)` | :arrow_down: |
| [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5175/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `90.45% <0.00%> (-0.83%)` | :arrow_down: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5175/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `94.81% <0.00%> (-0.38%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5175/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.16% <0.00%> (-0.13%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5175/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.11% <0.00%> (+0.29%)` | :arrow_up: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5175/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.40% <0.00%> (+0.40%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5175?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5175?src=pr&el=footer). Last update [bc3a0c0...2d35d71](https://codecov.io/gh/huggingface/transformers/pull/5175?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,174 | closed | Add README.md (nyu-mll) | Add README.md on organization nyu-mll. | 06-22-2020 06:33:31 | 06-22-2020 06:33:31 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5174?src=pr&el=h1) Report
> Merging [#5174](https://codecov.io/gh/huggingface/transformers/pull/5174?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/bc3a0c06075050d3de586c543e4ad6a7efc9260e&el=desc) will **decrease** coverage by `0.37%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5174?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5174 +/- ##
==========================================
- Coverage 78.31% 77.93% -0.38%
==========================================
Files 137 137
Lines 23475 23475
==========================================
- Hits 18384 18295 -89
- Misses 5091 5180 +89
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5174?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5174/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-28.03%)` | :arrow_down: |
| [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5174/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `90.45% <0.00%> (-0.83%)` | :arrow_down: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5174/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `94.81% <0.00%> (-0.38%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5174/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.16% <0.00%> (-0.13%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5174/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.96% <0.00%> (+0.14%)` | :arrow_up: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5174/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.80% <0.00%> (+0.80%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5174?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5174?src=pr&el=footer). Last update [bc3a0c0...7af96db](https://codecov.io/gh/huggingface/transformers/pull/5174?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks! model cards: https://huggingface.co/nyu-mll |
transformers | 5,173 | closed | Trying to made a keras model with transformer layers defined in hf-transformers, keep running into `AttributeError: Tensor.op is meaningless when eager execution is enabled, when trying to make a keras model` | # ❓ Questions & Help
## Details
I am trying to build models using huggingface transformer layers. However, I keep running into `AttributeError: Tensor.op is meaningless when eager execution is enabled.`
I don't want to disable eager execution, as I heard it interferes with some of Keras' other functionality (feel free to correct me if this is wrong).
Here's the abridged code for one attempt (full code here https://colab.research.google.com/drive/1pnFDEQB4EuxNM1pSgbWJNKD2208dIIN0?usp=sharing )
```
import tensorflow as tf
from transformers.modeling_tf_bert import TFBertLayer
# cast_bool_to_primitive is used below; assuming it comes from modeling_tf_utils in this transformers version
from transformers.modeling_tf_utils import cast_bool_to_primitive
class TFBertEncoderAlter(tf.keras.layers.Layer):
def __init__(self, config, **kwargs):
super().__init__(**kwargs)
self.output_hidden_states = config.output_hidden_states
self.layer = [TFBertLayer(config, name="layer_._{}".format(i)) for i in range(config.num_hidden_layers)]
def call(self, inputs, training=False):
hidden_states, attention_mask, output_attentions = inputs
all_hidden_states = ()
all_attentions = ()
for i, layer_module in enumerate(self.layer):
if self.output_hidden_states:
all_hidden_states = all_hidden_states + (hidden_states,)
layer_outputs = layer_module(
[hidden_states, attention_mask, output_attentions], training=training
)
hidden_states = layer_outputs[0]
if cast_bool_to_primitive(output_attentions) is True:
all_attentions = all_attentions + (layer_outputs[1],)
# Add last layer
if self.output_hidden_states:
all_hidden_states = all_hidden_states + (hidden_states,)
outputs = (hidden_states,)
if self.output_hidden_states:
outputs = outputs + (all_hidden_states,)
if cast_bool_to_primitive(output_attentions) is True:
outputs = outputs + (all_attentions,)
return outputs # outputs, (hidden states), (attentions)
P_trans11 = TFBertEncoderAlter(config, name='Encoder')
inputHiddenVals = tf.keras.Input(shape=[None, None], dtype=tf.float32, name='input_Q',
batch_size=None)
P_outputs = P_trans11((outt, None, None))
modelNew = tf.keras.Model(inputHiddenVals,P_outputs)
```
Here is the output
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-35-8cd9393cb573> in <module>()
5
6 P_outputs = P_trans11((outt, None, None))
----> 7 modelNew = tf.keras.Model(inputHiddenVals,P_outputs)
6 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in __init__(self, *args, **kwargs)
165
166 def __init__(self, *args, **kwargs):
--> 167 super(Model, self).__init__(*args, **kwargs)
168 _keras_api_gauge.get_cell('model').set(True)
169 # Model must be created under scope of DistStrat it will be trained with.
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/network.py in __init__(self, *args, **kwargs)
171 'inputs' in kwargs and 'outputs' in kwargs):
172 # Graph network
--> 173 self._init_graph_network(*args, **kwargs)
174 else:
175 # Subclassed network
/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs)
454 self._self_setattr_tracking = False # pylint: disable=protected-access
455 try:
--> 456 result = method(self, *args, **kwargs)
457 finally:
458 self._self_setattr_tracking = previous_value # pylint: disable=protected-access
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/network.py in _init_graph_network(self, inputs, outputs, name, **kwargs)
252
253 if any(not hasattr(tensor, '_keras_history') for tensor in self.outputs):
--> 254 base_layer_utils.create_keras_history(self._nested_outputs)
255
256 self._base_init(name=name, **kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer_utils.py in create_keras_history(tensors)
184 keras_tensors: The Tensors found that came from a Keras Layer.
185 """
--> 186 _, created_layers = _create_keras_history_helper(tensors, set(), [])
187 return created_layers
188
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer_utils.py in _create_keras_history_helper(tensors, processed_ops, created_layers)
210 if getattr(tensor, '_keras_history', None) is not None:
211 continue
--> 212 op = tensor.op # The Op that created this Tensor.
213 if op not in processed_ops:
214 if op.type.startswith('Sparse'):
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in op(self)
1111 def op(self):
1112 raise AttributeError(
-> 1113 "Tensor.op is meaningless when eager execution is enabled.")
1114
1115 @property
AttributeError: Tensor.op is meaningless when eager execution is enabled.
```
Here is a different approach ( full code here https://colab.research.google.com/drive/1bieigPh98l9POzT3Tdz8DWDN_nh18YG1?usp=sharing )
```
from transformers.modeling_tf_bert import TFBertEncoder, TFBertMainLayer, TFBertLayer
from transformers.modeling_tf_utils import (
get_initializer,
keras_serializable,
shape_list,
)
from transformers.configuration_bert import BertConfig
class TFBertEncoder0(tf.keras.layers.Layer):
def __init__(self, config, **kwargs):
super().__init__(**kwargs)
self.output_attentions = config.output_attentions
self.output_hidden_states = config.output_hidden_states
self.layer = [TFBertLayer(config, name="layer_._{}".format(i)) for i in range(config.num_hidden_layers)]
def call(self, inputs, training=False):
hidden_states, attention_mask, head_mask = inputs
all_hidden_states = ()
all_attentions = ()
for i, layer_module in enumerate(self.layer):
if self.output_hidden_states:
all_hidden_states = all_hidden_states + (hidden_states,)
layer_outputs = layer_module([hidden_states, attention_mask, head_mask], training=training)
hidden_states = layer_outputs[0]
if self.output_attentions:
all_attentions = all_attentions + (layer_outputs[1],)
# Add last layer
if self.output_hidden_states:
all_hidden_states = all_hidden_states + (hidden_states,)
outputs = (hidden_states,)
if self.output_hidden_states:
outputs = outputs + (all_hidden_states,)
if self.output_attentions:
outputs = outputs + (all_attentions,)
return outputs # outputs, (hidden states), (attentions)
@keras_serializable
class TFBertMainLayerAlter4(tf.keras.layers.Layer):
config_class = BertConfig
def __init__(self, config, **kwargs):
super().__init__(**kwargs)
self.num_hidden_layers = config.num_hidden_layers
self.initializer_range = config.initializer_range
self.output_attentions = config.output_attentions
self.encoder = TFBertEncoder0(config, name="encoder")
def _prune_heads(self, heads_to_prune):
""" Prunes heads of the model.
heads_to_prune: dict of {layer_num: list of heads to prune in this layer}
See base class PreTrainedModel
"""
raise NotImplementedError
def call(
self,
inputs,
training=False,
):
encoder_outputs = self.encoder(
[inputs, None, None], training=training
)
sequence_output = encoder_outputs[0]
outputs = (sequence_output,) + encoder_outputs[
1:
] # add hidden_states and attentions if they are here
return outputs # sequence_output, pooled_output, (hidden_states), (attentions)
P_trans11 = TFBertMainLayerAlter4(config3, name="roberta")
inputHiddenVals = tf.keras.Input(shape=[None, None], dtype=tf.float32, name='input_Q',
batch_size=None)
P_outputs = P_trans11(outt)
modelNew = tf.keras.Model(inputHiddenVals,P_outputs)
```
Again, same result
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-43-79b3b73e5f5f> in <module>()
5
6 P_outputs = P_trans11(outt)
----> 7 modelNew = tf.keras.Model(inputHiddenVals,P_outputs)
6 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in __init__(self, *args, **kwargs)
165
166 def __init__(self, *args, **kwargs):
--> 167 super(Model, self).__init__(*args, **kwargs)
168 _keras_api_gauge.get_cell('model').set(True)
169 # Model must be created under scope of DistStrat it will be trained with.
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/network.py in __init__(self, *args, **kwargs)
171 'inputs' in kwargs and 'outputs' in kwargs):
172 # Graph network
--> 173 self._init_graph_network(*args, **kwargs)
174 else:
175 # Subclassed network
/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs)
454 self._self_setattr_tracking = False # pylint: disable=protected-access
455 try:
--> 456 result = method(self, *args, **kwargs)
457 finally:
458 self._self_setattr_tracking = previous_value # pylint: disable=protected-access
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/network.py in _init_graph_network(self, inputs, outputs, name, **kwargs)
252
253 if any(not hasattr(tensor, '_keras_history') for tensor in self.outputs):
--> 254 base_layer_utils.create_keras_history(self._nested_outputs)
255
256 self._base_init(name=name, **kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer_utils.py in create_keras_history(tensors)
184 keras_tensors: The Tensors found that came from a Keras Layer.
185 """
--> 186 _, created_layers = _create_keras_history_helper(tensors, set(), [])
187 return created_layers
188
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer_utils.py in _create_keras_history_helper(tensors, processed_ops, created_layers)
210 if getattr(tensor, '_keras_history', None) is not None:
211 continue
--> 212 op = tensor.op # The Op that created this Tensor.
213 if op not in processed_ops:
214 if op.type.startswith('Sparse'):
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in op(self)
1111 def op(self):
1112 raise AttributeError(
-> 1113 "Tensor.op is meaningless when eager execution is enabled.")
1114
1115 @property
AttributeError: Tensor.op is meaningless when eager execution is enabled.
```
Here's another attempt (full code here https://colab.research.google.com/drive/1UVJ7XSx0vXpgApe6E7ECVb9LNSJN_D9-?usp=sharing )
```
from transformers.modeling_tf_bert import TFBertLayer
l1 = TFBertLayer(config)
l2 = TFBertLayer(config)
inputHiddenVals = tf.keras.Input(shape=[None, None], dtype=tf.float32, name='input_Q',
batch_size=None)
P_outputs = l1((outt, None, None))[0]
P_outputs2 = l2((outt, None, None))[0]
modelNew = tf.keras.Model(inputHiddenVals,P_outputs2)
```
Again, same results.
**A link to original question on Stack Overflow**:
https://stackoverflow.com/questions/62504903/attributeerror-tensor-op-is-meaningless-when-eager-execution-is-enabled-when-t | 06-21-2020 22:32:02 | 06-21-2020 22:32:02 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,172 | closed | Load a T5ForConditionalGeneration's encoder into a T5Model | Hi,
I know that T5ForConditionalGeneration is a T5Model with decoding. I've got a T5ForConditionalGeneration that I've fine-tuned on a seq2seq task, and now I want to use its T5 encoder to initialize the parameters in a T5Model (to further train it on some other task). I read the code and I didn't understand what I should do. Can you please help me? | 06-21-2020 21:20:20 | 06-21-2020 21:20:20 | Hi @Palipoor, this might do the trick:
```python
first_model = T5ForConditionalGeneration.from_pretrained('your_finetuned_model')
second_model = T5Model.from_pretrained('t5-small') # assuming 'your_finetuned_model' is t5-small
# get first model's encoder weights
first_model_encoder_state_dict = first_model.encoder.state_dict()
# load first model's encoder weights into second_model's encoder
second_model.encoder.load_state_dict(first_model_encoder_state_dict)
```
@patrickvonplaten can you take a look?<|||||>@patil-suraj has the correct idea, I think. You can even make it easier by just doing
```python
t5_model_no_lm_head = T5Model.from_pretrained("<path_to_t5_for_cond_generation>")  # this will load all weights that are present in both models, so it will just skip the lm head weights
```
You can verify that this works by doing the following:
```python
t5_model_with_lm_head = T5ForConditionalGeneration.from_pretrained('t5-small')
t5_model_with_lm_head.save_pretrained("./")
t5_model_no_lm_head = T5Model.from_pretrained("./")
```<|||||>@patrickvonplaten @patil-suraj Thank you both for your responses! <|||||>Hi All,
So I have been trying to load a fine-tuned T5 model and then predict a sentence. I have been following this tutorial:
https://www.geeksforgeeks.org/text-to-text-transfer-transformer-in-data-augmentation/
I have been able to fine-tune and save the model in a folder, but now I can't figure out how to load it back and predict a given sentence. I have done the steps suggested by @patil-suraj, but how do I use this configuration to do a prediction?
Grateful if you could help. Thanks :)
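For anyone landing here, loading a model saved with `save_pretrained` and generating from it looks roughly like this (a sketch; the folder name and input text are placeholders, and it assumes the tokenizer and model were both saved to that folder):
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# "my_t5_folder" is a placeholder for the directory the fine-tuned model was saved to
tokenizer = T5Tokenizer.from_pretrained("my_t5_folder")
model = T5ForConditionalGeneration.from_pretrained("my_t5_folder")

input_ids = tokenizer.encode("your input sentence here", return_tensors="pt")
generated = model.generate(input_ids, max_length=64)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```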
|
transformers | 5,171 | closed | Fixing docs for Encoder Decoder Config | This is a small contribution to the EncoderDecoderConfig documentation.
While trying to use this class, I noticed the documentation said that we could pass 2 "encoder" parameters to the class constructor.
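For reference, the usage I have in mind looks roughly like this, sketched with the `from_encoder_decoder_configs` helper (the BERT configs are only examples, and the helper name is my reading of the current source):
```python
from transformers import BertConfig, EncoderDecoderConfig

encoder_config = BertConfig()
decoder_config = BertConfig(is_decoder=True)
config = EncoderDecoderConfig.from_encoder_decoder_configs(encoder_config, decoder_config)
```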
I am almost 99.99% certain it meant we could pass an encoder and a decoder parameter to the class constructor. | 06-21-2020 21:15:31 | 06-21-2020 21:15:31 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5171?src=pr&el=h1) Report
> Merging [#5171](https://codecov.io/gh/huggingface/transformers/pull/5171?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/bc3a0c06075050d3de586c543e4ad6a7efc9260e&el=desc) will **increase** coverage by `0.00%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5171?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5171 +/- ##
=======================================
Coverage 78.31% 78.31%
=======================================
Files 137 137
Lines 23475 23475
=======================================
+ Hits 18384 18385 +1
+ Misses 5091 5090 -1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5171?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/5171/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VuY29kZXJfZGVjb2Rlci5weQ==) | `100.00% <ø> (ø)` | |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5171/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.16% <0.00%> (-0.13%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5171/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.11% <0.00%> (+0.29%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5171?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5171?src=pr&el=footer). Last update [bc3a0c0...06ccbb9](https://codecov.io/gh/huggingface/transformers/pull/5171?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Oh, you are 100% correct here @mikaelsouza . This must be pretty confusing in the docs. Thanks for the fix! |
transformers | 5,170 | closed | Add support for `encoder_hidden_states` and `encoder_attention_mask` in modeling_longformer | # 🚀 Feature request
In order to use the longformer with the `EncoderDecoderModel` structure, it needs to accept `head_mask`, `encoder_hidden_states`, and `encoder_attention_mask` in the `forward()` function. Currently, `LongformerSelfAttention` asserts that `encoder_hidden_states` and `encoder_attention_mask` are `None`. I attempted to implement this functionality by comparing the bert implementation, which accepts these values, to the longformer implementation.
I can create an `EncoderDecoderModel` like so: `model = EncoderDecoderModel.from_encoder_decoder_pretrained("allenai/longformer-base-4096", "allenai/longformer-base-4096")` and the model does train using my changes. Even though the model trains, I am not confident that the changes I made are correct. I think the cross attention mechanism already exists within the `LongformerModel` since it is a subclass of `RobertaModel` which is a subclass of `BertModel`. `BertModel` uses `BertLayer`s which use cross attention if `self.is_decoder` is True.
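For reference, the cross-attention gating mentioned above can be checked on a plain BERT decoder (a small inspection sketch, no Longformer changes assumed):
```python
from transformers import BertConfig, BertModel

decoder_config = BertConfig(is_decoder=True)
decoder = BertModel(decoder_config)
# each BertLayer carries a `crossattention` sub-module in addition to self-attention when is_decoder=True
print(hasattr(decoder.encoder.layer[0], "crossattention"))  # True
```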
Furthermore, I get the below error messages, which have something to do with the inheritance hierarchy. It fails to initialize the cross attention layers, which makes sense, but I am not sure about the other ones.
```
2020-06-21 18:00:53,533|transformers.modeling_encoder_decoder|INFO> Initializing allenai/longformer-base-4096 as a decoder model. Cross attention layers are added to allenai/longformer-base-4096 and randomly initialized if allenai/longformer-base-4096's architecture allows for cross attention layers.
2020-06-21 18:00:53,609|transformers.modeling_utils|INFO> loading weights file https://cdn.huggingface.co/allenai/longformer-base-4096/pytorch_model.bin from cache at /root/.cache/torch/transformers/dfc92dbbf5c555abf807425ebdb22b55de7a17e21fe1c48cbaa5764982c1d9c0.cd65234711d2e83d420aa696eb9186cdec6ab79ef8bf090b442cf249443dfa92
2020-06-21 18:00:58,792|transformers.modeling_utils|INFO> Weights of BertLMHeadModel not initialized from pretrained model: ['embeddings.word_embeddings.weight', 'embeddings.position_embeddings.weight', 'embeddings.token_type_embeddings.weight', 'embeddings.LayerNorm.weight', 'embeddings.LayerNorm.bias', 'encoder.layer.0.attention.self.query.weight', 'encoder.layer.0.attention.self.query.bias', 'encoder.layer.0.attention.self.key.weight', 'encoder.layer.0.attention.self.key.bias', 'encoder.layer.0.attention.self.value.weight', 'encoder.layer.0.attention.self.value.bias', 'encoder.layer.0.attention.output.dense.weight', 'encoder.layer.0.attention.output.dense.bias', 'encoder.layer.0.attention.output.LayerNorm.weight', 'encoder.layer.0.attention.output.LayerNorm.bias', 'encoder.layer.0.crossattention.self.query.weight', 'encoder.layer.0.crossattention.self.query.bias', 'encoder.layer.0.crossattention.self.key.weight', 'encoder.layer.0.crossattention.self.key.bias', 'encoder.layer.0.crossattention.self.value.weight', 'encoder.layer.0.crossattention.self.value.bias', 'encoder.layer.0.crossattention.output.dense.weight', 'encoder.layer.0.crossattention.output.dense.bias', 'encoder.layer.0.crossattention.output.LayerNorm.weight', 'encoder.layer.0.crossattention.output.LayerNorm.bias', 'encoder.layer.0.intermediate.dense.weight', 'encoder.layer.0.intermediate.dense.bias', 'encoder.layer.0.output.dense.weight', 'encoder.layer.0.output.dense.bias', 'encoder.layer.0.output.LayerNorm.weight', 'encoder.layer.0.output.LayerNorm.bias', 'encoder.layer.1.attention.self.query.weight', 'encoder.layer.1.attention.self.query.bias', 'encoder.layer.1.attention.self.key.weight', 'encoder.layer.1.attention.self.key.bias', 'encoder.layer.1.attention.self.value.weight', 'encoder.layer.1.attention.self.value.bias', 'encoder.layer.1.attention.output.dense.weight', 'encoder.layer.1.attention.output.dense.bias', 'encoder.layer.1.attention.output.LayerNorm.weight', 'encoder.layer.1.attention.output.LayerNorm.bias', 'encoder.layer.1.crossattention.self.query.weight', 'encoder.layer.1.crossattention.self.query.bias', 'encoder.layer.1.crossattention.self.key.weight', 'encoder.layer.1.crossattention.self.key.bias', 'encoder.layer.1.crossattention.self.value.weight', 'encoder.layer.1.crossattention.self.value.bias', 'encoder.layer.1.crossattention.output.dense.weight', 'encoder.layer.1.crossattention.output.dense.bias', 'encoder.layer.1.crossattention.output.LayerNorm.weight', 'encoder.layer.1.crossattention.output.LayerNorm.bias', 'encoder.layer.1.intermediate.dense.weight', 'encoder.layer.1.intermediate.dense.bias', 'encoder.layer.1.output.dense.weight', 'encoder.layer.1.output.dense.bias', 'encoder.layer.1.output.LayerNorm.weight', 'encoder.layer.1.output.LayerNorm.bias', 'encoder.layer.2.attention.self.query.weight', 'encoder.layer.2.attention.self.query.bias', 'encoder.layer.2.attention.self.key.weight', 'encoder.layer.2.attention.self.key.bias', 'encoder.layer.2.attention.self.value.weight', 'encoder.layer.2.attention.self.value.bias', 'encoder.layer.2.attention.output.dense.weight', 'encoder.layer.2.attention.output.dense.bias', 'encoder.layer.2.attention.output.LayerNorm.weight', 'encoder.layer.2.attention.output.LayerNorm.bias', 'encoder.layer.2.crossattention.self.query.weight', 'encoder.layer.2.crossattention.self.query.bias', 'encoder.layer.2.crossattention.self.key.weight', 'encoder.layer.2.crossattention.self.key.bias', 'encoder.layer.2.crossattention.self.value.weight', 
'encoder.layer.2.crossattention.self.value.bias', 'encoder.layer.2.crossattention.output.dense.weight', 'encoder.layer.2.crossattention.output.dense.bias', 'encoder.layer.2.crossattention.output.LayerNorm.weight', 'encoder.layer.2.crossattention.output.LayerNorm.bias', 'encoder.layer.2.intermediate.dense.weight', 'encoder.layer.2.intermediate.dense.bias', 'encoder.layer.2.output.dense.weight', 'encoder.layer.2.output.dense.bias', 'encoder.layer.2.output.LayerNorm.weight', 'encoder.layer.2.output.LayerNorm.bias', 'encoder.layer.3.attention.self.query.weight', 'encoder.layer.3.attention.self.query.bias', 'encoder.layer.3.attention.self.key.weight', 'encoder.layer.3.attention.self.key.bias', 'encoder.layer.3.attention.self.value.weight', 'encoder.layer.3.attention.self.value.bias', 'encoder.layer.3.attention.output.dense.weight', 'encoder.layer.3.attention.output.dense.bias', 'encoder.layer.3.attention.output.LayerNorm.weight', 'encoder.layer.3.attention.output.LayerNorm.bias', 'encoder.layer.3.crossattention.self.query.weight', 'encoder.layer.3.crossattention.self.query.bias', 'encoder.layer.3.crossattention.self.key.weight', 'encoder.layer.3.crossattention.self.key.bias', 'encoder.layer.3.crossattention.self.value.weight', 'encoder.layer.3.crossattention.self.value.bias', 'encoder.layer.3.crossattention.output.dense.weight', 'encoder.layer.3.crossattention.output.dense.bias', 'encoder.layer.3.crossattention.output.LayerNorm.weight', 'encoder.layer.3.crossattention.output.LayerNorm.bias', 'encoder.layer.3.intermediate.dense.weight', 'encoder.layer.3.intermediate.dense.bias', 'encoder.layer.3.output.dense.weight', 'encoder.layer.3.output.dense.bias', 'encoder.layer.3.output.LayerNorm.weight', 'encoder.layer.3.output.LayerNorm.bias', 'encoder.layer.4.attention.self.query.weight', 'encoder.layer.4.attention.self.query.bias', 'encoder.layer.4.attention.self.key.weight', 'encoder.layer.4.attention.self.key.bias', 'encoder.layer.4.attention.self.value.weight', 'encoder.layer.4.attention.self.value.bias', 'encoder.layer.4.attention.output.dense.weight', 'encoder.layer.4.attention.output.dense.bias', 'encoder.layer.4.attention.output.LayerNorm.weight', 'encoder.layer.4.attention.output.LayerNorm.bias', 'encoder.layer.4.crossattention.self.query.weight', 'encoder.layer.4.crossattention.self.query.bias', 'encoder.layer.4.crossattention.self.key.weight', 'encoder.layer.4.crossattention.self.key.bias', 'encoder.layer.4.crossattention.self.value.weight', 'encoder.layer.4.crossattention.self.value.bias', 'encoder.layer.4.crossattention.output.dense.weight', 'encoder.layer.4.crossattention.output.dense.bias', 'encoder.layer.4.crossattention.output.LayerNorm.weight', 'encoder.layer.4.crossattention.output.LayerNorm.bias', 'encoder.layer.4.intermediate.dense.weight', 'encoder.layer.4.intermediate.dense.bias', 'encoder.layer.4.output.dense.weight', 'encoder.layer.4.output.dense.bias', 'encoder.layer.4.output.LayerNorm.weight', 'encoder.layer.4.output.LayerNorm.bias', 'encoder.layer.5.attention.self.query.weight', 'encoder.layer.5.attention.self.query.bias', 'encoder.layer.5.attention.self.key.weight', 'encoder.layer.5.attention.self.key.bias', 'encoder.layer.5.attention.self.value.weight', 'encoder.layer.5.attention.self.value.bias', 'encoder.layer.5.attention.output.dense.weight', 'encoder.layer.5.attention.output.dense.bias', 'encoder.layer.5.attention.output.LayerNorm.weight', 'encoder.layer.5.attention.output.LayerNorm.bias', 'encoder.layer.5.crossattention.self.query.weight', 
'encoder.layer.5.crossattention.self.query.bias', 'encoder.layer.5.crossattention.self.key.weight', 'encoder.layer.5.crossattention.self.key.bias', 'encoder.layer.5.crossattention.self.value.weight', 'encoder.layer.5.crossattention.self.value.bias', 'encoder.layer.5.crossattention.output.dense.weight', 'encoder.layer.5.crossattention.output.dense.bias', 'encoder.layer.5.crossattention.output.LayerNorm.weight', 'encoder.layer.5.crossattention.output.LayerNorm.bias', 'encoder.layer.5.intermediate.dense.weight', 'encoder.layer.5.intermediate.dense.bias', 'encoder.layer.5.output.dense.weight', 'encoder.layer.5.output.dense.bias', 'encoder.layer.5.output.LayerNorm.weight', 'encoder.layer.5.output.LayerNorm.bias', 'encoder.layer.6.attention.self.query.weight', 'encoder.layer.6.attention.self.query.bias', 'encoder.layer.6.attention.self.key.weight', 'encoder.layer.6.attention.self.key.bias', 'encoder.layer.6.attention.self.value.weight', 'encoder.layer.6.attention.self.value.bias', 'encoder.layer.6.attention.output.dense.weight', 'encoder.layer.6.attention.output.dense.bias', 'encoder.layer.6.attention.output.LayerNorm.weight', 'encoder.layer.6.attention.output.LayerNorm.bias', 'encoder.layer.6.crossattention.self.query.weight', 'encoder.layer.6.crossattention.self.query.bias', 'encoder.layer.6.crossattention.self.key.weight', 'encoder.layer.6.crossattention.self.key.bias', 'encoder.layer.6.crossattention.self.value.weight', 'encoder.layer.6.crossattention.self.value.bias', 'encoder.layer.6.crossattention.output.dense.weight', 'encoder.layer.6.crossattention.output.dense.bias', 'encoder.layer.6.crossattention.output.LayerNorm.weight', 'encoder.layer.6.crossattention.output.LayerNorm.bias', 'encoder.layer.6.intermediate.dense.weight', 'encoder.layer.6.intermediate.dense.bias', 'encoder.layer.6.output.dense.weight', 'encoder.layer.6.output.dense.bias', 'encoder.layer.6.output.LayerNorm.weight', 'encoder.layer.6.output.LayerNorm.bias', 'encoder.layer.7.attention.self.query.weight', 'encoder.layer.7.attention.self.query.bias', 'encoder.layer.7.attention.self.key.weight', 'encoder.layer.7.attention.self.key.bias', 'encoder.layer.7.attention.self.value.weight', 'encoder.layer.7.attention.self.value.bias', 'encoder.layer.7.attention.output.dense.weight', 'encoder.layer.7.attention.output.dense.bias', 'encoder.layer.7.attention.output.LayerNorm.weight', 'encoder.layer.7.attention.output.LayerNorm.bias', 'encoder.layer.7.crossattention.self.query.weight', 'encoder.layer.7.crossattention.self.query.bias', 'encoder.layer.7.crossattention.self.key.weight', 'encoder.layer.7.crossattention.self.key.bias', 'encoder.layer.7.crossattention.self.value.weight', 'encoder.layer.7.crossattention.self.value.bias', 'encoder.layer.7.crossattention.output.dense.weight', 'encoder.layer.7.crossattention.output.dense.bias', 'encoder.layer.7.crossattention.output.LayerNorm.weight', 'encoder.layer.7.crossattention.output.LayerNorm.bias', 'encoder.layer.7.intermediate.dense.weight', 'encoder.layer.7.intermediate.dense.bias', 'encoder.layer.7.output.dense.weight', 'encoder.layer.7.output.dense.bias', 'encoder.layer.7.output.LayerNorm.weight', 'encoder.layer.7.output.LayerNorm.bias', 'encoder.layer.8.attention.self.query.weight', 'encoder.layer.8.attention.self.query.bias', 'encoder.layer.8.attention.self.key.weight', 'encoder.layer.8.attention.self.key.bias', 'encoder.layer.8.attention.self.value.weight', 'encoder.layer.8.attention.self.value.bias', 'encoder.layer.8.attention.output.dense.weight', 
'encoder.layer.8.attention.output.dense.bias', 'encoder.layer.8.attention.output.LayerNorm.weight', 'encoder.layer.8.attention.output.LayerNorm.bias', 'encoder.layer.8.crossattention.self.query.weight', 'encoder.layer.8.crossattention.self.query.bias', 'encoder.layer.8.crossattention.self.key.weight', 'encoder.layer.8.crossattention.self.key.bias', 'encoder.layer.8.crossattention.self.value.weight', 'encoder.layer.8.crossattention.self.value.bias', 'encoder.layer.8.crossattention.output.dense.weight', 'encoder.layer.8.crossattention.output.dense.bias', 'encoder.layer.8.crossattention.output.LayerNorm.weight', 'encoder.layer.8.crossattention.output.LayerNorm.bias', 'encoder.layer.8.intermediate.dense.weight', 'encoder.layer.8.intermediate.dense.bias', 'encoder.layer.8.output.dense.weight', 'encoder.layer.8.output.dense.bias', 'encoder.layer.8.output.LayerNorm.weight', 'encoder.layer.8.output.LayerNorm.bias', 'encoder.layer.9.attention.self.query.weight', 'encoder.layer.9.attention.self.query.bias', 'encoder.layer.9.attention.self.key.weight', 'encoder.layer.9.attention.self.key.bias', 'encoder.layer.9.attention.self.value.weight', 'encoder.layer.9.attention.self.value.bias', 'encoder.layer.9.attention.output.dense.weight', 'encoder.layer.9.attention.output.dense.bias', 'encoder.layer.9.attention.output.LayerNorm.weight', 'encoder.layer.9.attention.output.LayerNorm.bias', 'encoder.layer.9.crossattention.self.query.weight', 'encoder.layer.9.crossattention.self.query.bias', 'encoder.layer.9.crossattention.self.key.weight', 'encoder.layer.9.crossattention.self.key.bias', 'encoder.layer.9.crossattention.self.value.weight', 'encoder.layer.9.crossattention.self.value.bias', 'encoder.layer.9.crossattention.output.dense.weight', 'encoder.layer.9.crossattention.output.dense.bias', 'encoder.layer.9.crossattention.output.LayerNorm.weight', 'encoder.layer.9.crossattention.output.LayerNorm.bias', 'encoder.layer.9.intermediate.dense.weight', 'encoder.layer.9.intermediate.dense.bias', 'encoder.layer.9.output.dense.weight', 'encoder.layer.9.output.dense.bias', 'encoder.layer.9.output.LayerNorm.weight', 'encoder.layer.9.output.LayerNorm.bias', 'encoder.layer.10.attention.self.query.weight', 'encoder.layer.10.attention.self.query.bias', 'encoder.layer.10.attention.self.key.weight', 'encoder.layer.10.attention.self.key.bias', 'encoder.layer.10.attention.self.value.weight', 'encoder.layer.10.attention.self.value.bias', 'encoder.layer.10.attention.output.dense.weight', 'encoder.layer.10.attention.output.dense.bias', 'encoder.layer.10.attention.output.LayerNorm.weight', 'encoder.layer.10.attention.output.LayerNorm.bias', 'encoder.layer.10.crossattention.self.query.weight', 'encoder.layer.10.crossattention.self.query.bias', 'encoder.layer.10.crossattention.self.key.weight', 'encoder.layer.10.crossattention.self.key.bias', 'encoder.layer.10.crossattention.self.value.weight', 'encoder.layer.10.crossattention.self.value.bias', 'encoder.layer.10.crossattention.output.dense.weight', 'encoder.layer.10.crossattention.output.dense.bias', 'encoder.layer.10.crossattention.output.LayerNorm.weight', 'encoder.layer.10.crossattention.output.LayerNorm.bias', 'encoder.layer.10.intermediate.dense.weight', 'encoder.layer.10.intermediate.dense.bias', 'encoder.layer.10.output.dense.weight', 'encoder.layer.10.output.dense.bias', 'encoder.layer.10.output.LayerNorm.weight', 'encoder.layer.10.output.LayerNorm.bias', 'encoder.layer.11.attention.self.query.weight', 'encoder.layer.11.attention.self.query.bias', 
'encoder.layer.11.attention.self.key.weight', 'encoder.layer.11.attention.self.key.bias', 'encoder.layer.11.attention.self.value.weight', 'encoder.layer.11.attention.self.value.bias', 'encoder.layer.11.attention.output.dense.weight', 'encoder.layer.11.attention.output.dense.bias', 'encoder.layer.11.attention.output.LayerNorm.weight', 'encoder.layer.11.attention.output.LayerNorm.bias', 'encoder.layer.11.crossattention.self.query.weight', 'encoder.layer.11.crossattention.self.query.bias', 'encoder.layer.11.crossattention.self.key.weight', 'encoder.layer.11.crossattention.self.key.bias', 'encoder.layer.11.crossattention.self.value.weight', 'encoder.layer.11.crossattention.self.value.bias', 'encoder.layer.11.crossattention.output.dense.weight', 'encoder.layer.11.crossattention.output.dense.bias', 'encoder.layer.11.crossattention.output.LayerNorm.weight', 'encoder.layer.11.crossattention.output.LayerNorm.bias', 'encoder.layer.11.intermediate.dense.weight', 'encoder.layer.11.intermediate.dense.bias', 'encoder.layer.11.output.dense.weight', 'encoder.layer.11.output.dense.bias', 'encoder.layer.11.output.LayerNorm.weight', 'encoder.layer.11.output.LayerNorm.bias', 'pooler.dense.weight', 'pooler.dense.bias', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.predictions.transform.dense.weight', 'cls.predictions.decoder.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.bias']
2020-06-21 18:00:58,816|transformers.modeling_utils|INFO> Weights from pretrained model not used in BertLMHeadModel: ['longformer.embeddings.word_embeddings.weight', 'longformer.embeddings.position_embeddings.weight', 'longformer.embeddings.token_type_embeddings.weight', 'longformer.embeddings.LayerNorm.weight', 'longformer.embeddings.LayerNorm.bias', 'longformer.encoder.layer.0.attention.self.query.weight', 'longformer.encoder.layer.0.attention.self.query.bias', 'longformer.encoder.layer.0.attention.self.key.weight', 'longformer.encoder.layer.0.attention.self.key.bias', 'longformer.encoder.layer.0.attention.self.value.weight', 'longformer.encoder.layer.0.attention.self.value.bias', 'longformer.encoder.layer.0.attention.self.query_global.weight', 'longformer.encoder.layer.0.attention.self.query_global.bias', 'longformer.encoder.layer.0.attention.self.key_global.weight', 'longformer.encoder.layer.0.attention.self.key_global.bias', 'longformer.encoder.layer.0.attention.self.value_global.weight', 'longformer.encoder.layer.0.attention.self.value_global.bias', 'longformer.encoder.layer.0.attention.output.dense.weight', 'longformer.encoder.layer.0.attention.output.dense.bias', 'longformer.encoder.layer.0.attention.output.LayerNorm.weight', 'longformer.encoder.layer.0.attention.output.LayerNorm.bias', 'longformer.encoder.layer.0.intermediate.dense.weight', 'longformer.encoder.layer.0.intermediate.dense.bias', 'longformer.encoder.layer.0.output.dense.weight', 'longformer.encoder.layer.0.output.dense.bias', 'longformer.encoder.layer.0.output.LayerNorm.weight', 'longformer.encoder.layer.0.output.LayerNorm.bias', 'longformer.encoder.layer.1.attention.self.query.weight', 'longformer.encoder.layer.1.attention.self.query.bias', 'longformer.encoder.layer.1.attention.self.key.weight', 'longformer.encoder.layer.1.attention.self.key.bias', 'longformer.encoder.layer.1.attention.self.value.weight', 'longformer.encoder.layer.1.attention.self.value.bias', 'longformer.encoder.layer.1.attention.self.query_global.weight', 'longformer.encoder.layer.1.attention.self.query_global.bias', 'longformer.encoder.layer.1.attention.self.key_global.weight', 'longformer.encoder.layer.1.attention.self.key_global.bias', 'longformer.encoder.layer.1.attention.self.value_global.weight', 'longformer.encoder.layer.1.attention.self.value_global.bias', 'longformer.encoder.layer.1.attention.output.dense.weight', 'longformer.encoder.layer.1.attention.output.dense.bias', 'longformer.encoder.layer.1.attention.output.LayerNorm.weight', 'longformer.encoder.layer.1.attention.output.LayerNorm.bias', 'longformer.encoder.layer.1.intermediate.dense.weight', 'longformer.encoder.layer.1.intermediate.dense.bias', 'longformer.encoder.layer.1.output.dense.weight', 'longformer.encoder.layer.1.output.dense.bias', 'longformer.encoder.layer.1.output.LayerNorm.weight', 'longformer.encoder.layer.1.output.LayerNorm.bias', 'longformer.encoder.layer.2.attention.self.query.weight', 'longformer.encoder.layer.2.attention.self.query.bias', 'longformer.encoder.layer.2.attention.self.key.weight', 'longformer.encoder.layer.2.attention.self.key.bias', 'longformer.encoder.layer.2.attention.self.value.weight', 'longformer.encoder.layer.2.attention.self.value.bias', 'longformer.encoder.layer.2.attention.self.query_global.weight', 'longformer.encoder.layer.2.attention.self.query_global.bias', 'longformer.encoder.layer.2.attention.self.key_global.weight', 'longformer.encoder.layer.2.attention.self.key_global.bias', 
'longformer.encoder.layer.2.attention.self.value_global.weight', 'longformer.encoder.layer.2.attention.self.value_global.bias', 'longformer.encoder.layer.2.attention.output.dense.weight', 'longformer.encoder.layer.2.attention.output.dense.bias', 'longformer.encoder.layer.2.attention.output.LayerNorm.weight', 'longformer.encoder.layer.2.attention.output.LayerNorm.bias', 'longformer.encoder.layer.2.intermediate.dense.weight', 'longformer.encoder.layer.2.intermediate.dense.bias', 'longformer.encoder.layer.2.output.dense.weight', 'longformer.encoder.layer.2.output.dense.bias', 'longformer.encoder.layer.2.output.LayerNorm.weight', 'longformer.encoder.layer.2.output.LayerNorm.bias', 'longformer.encoder.layer.3.attention.self.query.weight', 'longformer.encoder.layer.3.attention.self.query.bias', 'longformer.encoder.layer.3.attention.self.key.weight', 'longformer.encoder.layer.3.attention.self.key.bias', 'longformer.encoder.layer.3.attention.self.value.weight', 'longformer.encoder.layer.3.attention.self.value.bias', 'longformer.encoder.layer.3.attention.self.query_global.weight', 'longformer.encoder.layer.3.attention.self.query_global.bias', 'longformer.encoder.layer.3.attention.self.key_global.weight', 'longformer.encoder.layer.3.attention.self.key_global.bias', 'longformer.encoder.layer.3.attention.self.value_global.weight', 'longformer.encoder.layer.3.attention.self.value_global.bias', 'longformer.encoder.layer.3.attention.output.dense.weight', 'longformer.encoder.layer.3.attention.output.dense.bias', 'longformer.encoder.layer.3.attention.output.LayerNorm.weight', 'longformer.encoder.layer.3.attention.output.LayerNorm.bias', 'longformer.encoder.layer.3.intermediate.dense.weight', 'longformer.encoder.layer.3.intermediate.dense.bias', 'longformer.encoder.layer.3.output.dense.weight', 'longformer.encoder.layer.3.output.dense.bias', 'longformer.encoder.layer.3.output.LayerNorm.weight', 'longformer.encoder.layer.3.output.LayerNorm.bias', 'longformer.encoder.layer.4.attention.self.query.weight', 'longformer.encoder.layer.4.attention.self.query.bias', 'longformer.encoder.layer.4.attention.self.key.weight', 'longformer.encoder.layer.4.attention.self.key.bias', 'longformer.encoder.layer.4.attention.self.value.weight', 'longformer.encoder.layer.4.attention.self.value.bias', 'longformer.encoder.layer.4.attention.self.query_global.weight', 'longformer.encoder.layer.4.attention.self.query_global.bias', 'longformer.encoder.layer.4.attention.self.key_global.weight', 'longformer.encoder.layer.4.attention.self.key_global.bias', 'longformer.encoder.layer.4.attention.self.value_global.weight', 'longformer.encoder.layer.4.attention.self.value_global.bias', 'longformer.encoder.layer.4.attention.output.dense.weight', 'longformer.encoder.layer.4.attention.output.dense.bias', 'longformer.encoder.layer.4.attention.output.LayerNorm.weight', 'longformer.encoder.layer.4.attention.output.LayerNorm.bias', 'longformer.encoder.layer.4.intermediate.dense.weight', 'longformer.encoder.layer.4.intermediate.dense.bias', 'longformer.encoder.layer.4.output.dense.weight', 'longformer.encoder.layer.4.output.dense.bias', 'longformer.encoder.layer.4.output.LayerNorm.weight', 'longformer.encoder.layer.4.output.LayerNorm.bias', 'longformer.encoder.layer.5.attention.self.query.weight', 'longformer.encoder.layer.5.attention.self.query.bias', 'longformer.encoder.layer.5.attention.self.key.weight', 'longformer.encoder.layer.5.attention.self.key.bias', 'longformer.encoder.layer.5.attention.self.value.weight', 
'longformer.encoder.layer.5.attention.self.value.bias', 'longformer.encoder.layer.5.attention.self.query_global.weight', 'longformer.encoder.layer.5.attention.self.query_global.bias', 'longformer.encoder.layer.5.attention.self.key_global.weight', 'longformer.encoder.layer.5.attention.self.key_global.bias', 'longformer.encoder.layer.5.attention.self.value_global.weight', 'longformer.encoder.layer.5.attention.self.value_global.bias', 'longformer.encoder.layer.5.attention.output.dense.weight', 'longformer.encoder.layer.5.attention.output.dense.bias', 'longformer.encoder.layer.5.attention.output.LayerNorm.weight', 'longformer.encoder.layer.5.attention.output.LayerNorm.bias', 'longformer.encoder.layer.5.intermediate.dense.weight', 'longformer.encoder.layer.5.intermediate.dense.bias', 'longformer.encoder.layer.5.output.dense.weight', 'longformer.encoder.layer.5.output.dense.bias', 'longformer.encoder.layer.5.output.LayerNorm.weight', 'longformer.encoder.layer.5.output.LayerNorm.bias', 'longformer.encoder.layer.6.attention.self.query.weight', 'longformer.encoder.layer.6.attention.self.query.bias', 'longformer.encoder.layer.6.attention.self.key.weight', 'longformer.encoder.layer.6.attention.self.key.bias', 'longformer.encoder.layer.6.attention.self.value.weight', 'longformer.encoder.layer.6.attention.self.value.bias', 'longformer.encoder.layer.6.attention.self.query_global.weight', 'longformer.encoder.layer.6.attention.self.query_global.bias', 'longformer.encoder.layer.6.attention.self.key_global.weight', 'longformer.encoder.layer.6.attention.self.key_global.bias', 'longformer.encoder.layer.6.attention.self.value_global.weight', 'longformer.encoder.layer.6.attention.self.value_global.bias', 'longformer.encoder.layer.6.attention.output.dense.weight', 'longformer.encoder.layer.6.attention.output.dense.bias', 'longformer.encoder.layer.6.attention.output.LayerNorm.weight', 'longformer.encoder.layer.6.attention.output.LayerNorm.bias', 'longformer.encoder.layer.6.intermediate.dense.weight', 'longformer.encoder.layer.6.intermediate.dense.bias', 'longformer.encoder.layer.6.output.dense.weight', 'longformer.encoder.layer.6.output.dense.bias', 'longformer.encoder.layer.6.output.LayerNorm.weight', 'longformer.encoder.layer.6.output.LayerNorm.bias', 'longformer.encoder.layer.7.attention.self.query.weight', 'longformer.encoder.layer.7.attention.self.query.bias', 'longformer.encoder.layer.7.attention.self.key.weight', 'longformer.encoder.layer.7.attention.self.key.bias', 'longformer.encoder.layer.7.attention.self.value.weight', 'longformer.encoder.layer.7.attention.self.value.bias', 'longformer.encoder.layer.7.attention.self.query_global.weight', 'longformer.encoder.layer.7.attention.self.query_global.bias', 'longformer.encoder.layer.7.attention.self.key_global.weight', 'longformer.encoder.layer.7.attention.self.key_global.bias', 'longformer.encoder.layer.7.attention.self.value_global.weight', 'longformer.encoder.layer.7.attention.self.value_global.bias', 'longformer.encoder.layer.7.attention.output.dense.weight', 'longformer.encoder.layer.7.attention.output.dense.bias', 'longformer.encoder.layer.7.attention.output.LayerNorm.weight', 'longformer.encoder.layer.7.attention.output.LayerNorm.bias', 'longformer.encoder.layer.7.intermediate.dense.weight', 'longformer.encoder.layer.7.intermediate.dense.bias', 'longformer.encoder.layer.7.output.dense.weight', 'longformer.encoder.layer.7.output.dense.bias', 'longformer.encoder.layer.7.output.LayerNorm.weight', 'longformer.encoder.layer.7.output.LayerNorm.bias', 
'longformer.encoder.layer.8.attention.self.query.weight', 'longformer.encoder.layer.8.attention.self.query.bias', 'longformer.encoder.layer.8.attention.self.key.weight', 'longformer.encoder.layer.8.attention.self.key.bias', 'longformer.encoder.layer.8.attention.self.value.weight', 'longformer.encoder.layer.8.attention.self.value.bias', 'longformer.encoder.layer.8.attention.self.query_global.weight', 'longformer.encoder.layer.8.attention.self.query_global.bias', 'longformer.encoder.layer.8.attention.self.key_global.weight', 'longformer.encoder.layer.8.attention.self.key_global.bias', 'longformer.encoder.layer.8.attention.self.value_global.weight', 'longformer.encoder.layer.8.attention.self.value_global.bias', 'longformer.encoder.layer.8.attention.output.dense.weight', 'longformer.encoder.layer.8.attention.output.dense.bias', 'longformer.encoder.layer.8.attention.output.LayerNorm.weight', 'longformer.encoder.layer.8.attention.output.LayerNorm.bias', 'longformer.encoder.layer.8.intermediate.dense.weight', 'longformer.encoder.layer.8.intermediate.dense.bias', 'longformer.encoder.layer.8.output.dense.weight', 'longformer.encoder.layer.8.output.dense.bias', 'longformer.encoder.layer.8.output.LayerNorm.weight', 'longformer.encoder.layer.8.output.LayerNorm.bias', 'longformer.encoder.layer.9.attention.self.query.weight', 'longformer.encoder.layer.9.attention.self.query.bias', 'longformer.encoder.layer.9.attention.self.key.weight', 'longformer.encoder.layer.9.attention.self.key.bias', 'longformer.encoder.layer.9.attention.self.value.weight', 'longformer.encoder.layer.9.attention.self.value.bias', 'longformer.encoder.layer.9.attention.self.query_global.weight', 'longformer.encoder.layer.9.attention.self.query_global.bias', 'longformer.encoder.layer.9.attention.self.key_global.weight', 'longformer.encoder.layer.9.attention.self.key_global.bias', 'longformer.encoder.layer.9.attention.self.value_global.weight', 'longformer.encoder.layer.9.attention.self.value_global.bias', 'longformer.encoder.layer.9.attention.output.dense.weight', 'longformer.encoder.layer.9.attention.output.dense.bias', 'longformer.encoder.layer.9.attention.output.LayerNorm.weight', 'longformer.encoder.layer.9.attention.output.LayerNorm.bias', 'longformer.encoder.layer.9.intermediate.dense.weight', 'longformer.encoder.layer.9.intermediate.dense.bias', 'longformer.encoder.layer.9.output.dense.weight', 'longformer.encoder.layer.9.output.dense.bias', 'longformer.encoder.layer.9.output.LayerNorm.weight', 'longformer.encoder.layer.9.output.LayerNorm.bias', 'longformer.encoder.layer.10.attention.self.query.weight', 'longformer.encoder.layer.10.attention.self.query.bias', 'longformer.encoder.layer.10.attention.self.key.weight', 'longformer.encoder.layer.10.attention.self.key.bias', 'longformer.encoder.layer.10.attention.self.value.weight', 'longformer.encoder.layer.10.attention.self.value.bias', 'longformer.encoder.layer.10.attention.self.query_global.weight', 'longformer.encoder.layer.10.attention.self.query_global.bias', 'longformer.encoder.layer.10.attention.self.key_global.weight', 'longformer.encoder.layer.10.attention.self.key_global.bias', 'longformer.encoder.layer.10.attention.self.value_global.weight', 'longformer.encoder.layer.10.attention.self.value_global.bias', 'longformer.encoder.layer.10.attention.output.dense.weight', 'longformer.encoder.layer.10.attention.output.dense.bias', 'longformer.encoder.layer.10.attention.output.LayerNorm.weight', 'longformer.encoder.layer.10.attention.output.LayerNorm.bias', 
'longformer.encoder.layer.10.intermediate.dense.weight', 'longformer.encoder.layer.10.intermediate.dense.bias', 'longformer.encoder.layer.10.output.dense.weight', 'longformer.encoder.layer.10.output.dense.bias', 'longformer.encoder.layer.10.output.LayerNorm.weight', 'longformer.encoder.layer.10.output.LayerNorm.bias', 'longformer.encoder.layer.11.attention.self.query.weight', 'longformer.encoder.layer.11.attention.self.query.bias', 'longformer.encoder.layer.11.attention.self.key.weight', 'longformer.encoder.layer.11.attention.self.key.bias', 'longformer.encoder.layer.11.attention.self.value.weight', 'longformer.encoder.layer.11.attention.self.value.bias', 'longformer.encoder.layer.11.attention.self.query_global.weight', 'longformer.encoder.layer.11.attention.self.query_global.bias', 'longformer.encoder.layer.11.attention.self.key_global.weight', 'longformer.encoder.layer.11.attention.self.key_global.bias', 'longformer.encoder.layer.11.attention.self.value_global.weight', 'longformer.encoder.layer.11.attention.self.value_global.bias', 'longformer.encoder.layer.11.attention.output.dense.weight', 'longformer.encoder.layer.11.attention.output.dense.bias', 'longformer.encoder.layer.11.attention.output.LayerNorm.weight', 'longformer.encoder.layer.11.attention.output.LayerNorm.bias', 'longformer.encoder.layer.11.intermediate.dense.weight', 'longformer.encoder.layer.11.intermediate.dense.bias', 'longformer.encoder.layer.11.output.dense.weight', 'longformer.encoder.layer.11.output.dense.bias', 'longformer.encoder.layer.11.output.LayerNorm.weight', 'longformer.encoder.layer.11.output.LayerNorm.bias', 'longformer.pooler.dense.weight', 'longformer.pooler.dense.bias', 'lm_head.bias', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight']
2020-06-21 18:00:58,818|transformers.configuration_encoder_decoder|INFO> Set `config.is_decoder=True` for decoder_config
```
## Motivation
I'd like to use Longformer as an `EncoderDecoderModel` and these changes are required for the `EncoderDecoderModel` to work properly. My end goal is to use an encoder-decoder architecture for long documents. The better solution appears to be #4406. However, I still wonder if my changes are correct.
## Your contribution
My changes are in the `longformer-encoder_hidden_states` branch of [HHousen/transformers](https://github.com/HHousen/transformers/tree/longformer-encoder_hidden_states). Link to modified file: [modeling_longformer.py](https://github.com/HHousen/transformers/blob/longformer-encoder_hidden_states/src/transformers/modeling_longformer.py).
| 06-21-2020 18:51:37 | 06-21-2020 18:51:37 | Hey @HHousen,
In order for Longformer to work, some changes need to be made to the `modeling_longformer.py` file (move the level of abstraction much lower), and we also need to think about how to deal with cross-attention layers for Longformer's chunked self-attention. I will do some refactoring for Longformer soon.
As it is now, it will not work properly within the encoder-decoder architecture<|||||>Linking this to https://github.com/huggingface/transformers/issues/4225#issuecomment-659467998.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,169 | closed | [Tokenizer] batch_encode_plus method cannot encode List[Tuple[str]] with is_pretokenized=True | # 🐛 Bug
## Information
Model I am using: BERT
Language I am using the model on: English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Load a tokenizer
2. Create a generic List[Tuple[List[int], List[int]]], for example [([2023, 2573], [2023, 2573, 2205])]
3. Encode the list using the method batch_encode_plus with is_pretokenized=True to encode the pairs together
4. Error
```python
from transformers import BertTokenizer
import torch
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
text = [('This works','this works too')]
print(tokenizer.batch_encode_plus(text, add_special_tokens=False)['input_ids']) # This works
input_ids = [([2023, 2573], [2023, 2573, 2205])]
tokenizer.batch_encode_plus(input_ids, is_pretokenized=True) # This raises error
/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py in get_input_ids(text)
1715 else:
1716 raise ValueError(
-> 1717 "Input is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers."
1718 )
1719
ValueError: Input is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers.
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
batch_encode_plus would be able to encode List[Tuple[List[int], List[int]]], as described in the function description, hence, the example's input_ids would be:
`[[101, 2023, 2573, 102, 2023, 2573, 2205, 102]]`
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.11.0
- Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.0+cu101 (True)
- Tensorflow version (GPU?): 2.2.0 (True)
- Using GPU in script?: probably not, but irrelevant
- Using distributed or parallel set-up in script?: probably not | 06-21-2020 18:13:17 | 06-21-2020 18:13:17 | Hi @vjeronymo2, the `encode_plus` method expects a list of `str` or `tokens` if you are using `is_pretokenized=True`.
`tokens` here are not ints; they are wordpiece tokens. If you want to convert a string to tokens, you can use the `.tokenize` method<|||||>> Hi @vjeronymo2, the `encode_plus` method expects a list of `str` or `tokens` if you are using `is_pretokenized=True`.
> `tokens` here are not ints; they are wordpiece tokens. If you want to convert a string to tokens, you can use the `.tokenize` method
Thanks for clarifying that. I guess my biggest confusion was with the correct meaning of is_pretokenized flag.
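In case it helps anyone else hitting this, here is a minimal sketch of the pattern that works when you already have id lists rather than tokens (assuming BERT; `build_inputs_with_special_tokens` just wraps already-converted ids with the model's special tokens, and the second option may depend on your version):
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

ids_a, ids_b = [2023, 2573], [2023, 2573, 2205]

# Option 1: work directly on the ids and let the tokenizer add [CLS]/[SEP]
input_ids = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)
print(input_ids)  # [101, 2023, 2573, 102, 2023, 2573, 2205, 102]

# Option 2: convert the ids back to wordpiece tokens first, since
# is_pretokenized=True expects tokens (strings), not integer ids
tokens_a = tokenizer.convert_ids_to_tokens(ids_a)
tokens_b = tokenizer.convert_ids_to_tokens(ids_b)
encoded = tokenizer.encode_plus(tokens_a, tokens_b, is_pretokenized=True)
print(encoded["input_ids"])
```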
Have a nice day, good sir. |
transformers | 5,168 | closed | Summarization Examples: Support label_smoothed_cross_entropy | This seems to be used by nearly all papers, but the HF implementation in `examples/summarization/finetune.py` uses the standard cross entropy loss.
Ideally, we could expose a command line arg to allow this choice to be toggled, like:
```python
parser.add_argument('--loss', type=str, choices=['cross_entropy', 'label_smoothed_cross_entropy'], default='cross_entropy')
``` | 06-21-2020 16:39:17 | 06-21-2020 16:39:17 | Yes, it's annoying that torch doesn't have an implementation of smoothed cross entropy.
Although note that using smoothed cross entropy will hurt the probability calibration of a seq2seq model (https://pubs.acs.org/doi/10.1021/acscentsci.9b00576)<|||||>@sshleifer This is a good point. I would like to work on this issue.
As I see it, we need 2 things for this:
1) add an implementation of label_smoothed_cross_entropy
2) support choice of loss function in `finetune.py`
Am I correct?
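For (1), a minimal sketch of a label-smoothed cross-entropy in plain PyTorch (the standard formulation, not necessarily what ends up being merged) could look like this, with `finetune.py` then choosing between it and the usual `CrossEntropyLoss` based on the proposed `--loss` flag:
```python
import torch

def label_smoothed_nll_loss(lprobs, target, epsilon, ignore_index=-100):
    """lprobs: (N, vocab) log-probabilities; target: (N,) gold token ids."""
    target = target.unsqueeze(-1)
    # negative log-likelihood of the gold token
    nll_loss = -lprobs.gather(dim=-1, index=target.clamp(min=0))
    # uniform smoothing term over the whole vocabulary
    smooth_loss = -lprobs.sum(dim=-1, keepdim=True)
    pad_mask = target.eq(ignore_index)
    nll_loss = nll_loss.masked_fill(pad_mask, 0.0).sum()
    smooth_loss = smooth_loss.masked_fill(pad_mask, 0.0).sum()
    eps_i = epsilon / lprobs.size(-1)
    return (1.0 - epsilon) * nll_loss + eps_i * smooth_loss
```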
<|||||>Yes you are!<|||||>This issue is resolved as label smoothing has already been integrated in master with https://github.com/huggingface/transformers/pull/5919 |
transformers | 5,167 | closed | results for wikitext-2 clm using GPT-2 differ between paper and example code | Hi,
The example code [here](https://github.com/huggingface/transformers/tree/master/examples/language-modeling) to train causal LM on wikitext-2 uses a gpt2 base version (117M parameters). This code claims and also yields a perplexity of ~20 on the test data.
As you can see, the example script uses ```gpt2``` as the ```model_name_or_path``` param, which is the 117M-parameter base model.
```python
python run_language_modeling.py \
--output_dir=output \
--model_type=gpt2 \
--model_name_or_path=gpt2 \
--do_train \
--train_data_file=$TRAIN_FILE \
--do_eval \
--eval_data_file=$TEST_FILE
```

However, the GPT-2 [paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf) [page 5, top table] shows that they got a perplexity of 29.41 with the said model.

Did I miss something? Can you please explain the mismatch?
Thanks in advance.
| 06-21-2020 15:23:26 | 06-21-2020 15:23:26 | I think the results in this table of the GPT2 paper are zero-shot results (without fine-tuning). |
transformers | 5,166 | closed | Cannot import AutoModelForSeq2SeqLM | # 🐛 Bug
Has the `AutoModelForSeq2SeqLM` class changed?
I am trying to run transformer examples, basically the [token-classification](https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_pl.sh) with pytorch-lightning, which calls [AutoModelForSeq2SeqLM](https://github.com/huggingface/transformers/blob/master/examples/lightning_base.py#L18). However, I am getting an import error. See below.
## Information
Model I am using (Bert, XLNet ...):
`bert-base-multilingual-cased`
Language I am using the model on (English, Chinese ...):
`English`
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [X] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
cd transformers/examples/token-classification
./run_pl.sh
```
```
Traceback (most recent call last):
File "run_pl_ner.py", line 12, in <module>
from lightning_base import BaseTransformer, add_generic_args, generic_train
File "/transformers/examples/lightning_base.py", line 12, in <module>
from transformers import (
ImportError: cannot import name 'AutoModelForSeq2SeqLM' from 'transformers' (/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/__init__.py)
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
Reproduce the example.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.11.0
- Platform: Linux
- Python version: Python 3.8.3
- PyTorch version (GPU?): 1.5.1
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?:
| 06-21-2020 13:57:43 | 06-21-2020 13:57:43 | I think this is because you don't have installed the library from source (see the [note here](https://github.com/huggingface/transformers/blob/master/examples/README.md#important-note)).
To install from source, follow the steps [here](https://huggingface.co/transformers/installation.html#installing-from-source)<|||||>@sgugger Thanks, it works when installing from source (with small changes) :)
Also, as I said, the 2.11.0 version (installed by pip) still has this bug
```
from transformers import AutoModelForSeq2SeqLM
```
What do you suggest for a stable version of transformers? <|||||>I'm not sure I follow. 2.11.0 is the last stable version, but the examples script on the main branch only works with an installation from source.
If you want a version of the examples compatible with 2.11.0, you should use their version in the [2.11.0 tagged repo](https://github.com/huggingface/transformers/tree/v2.11.0).<|||||>Same Issue v2.11.0<|||||>Again this is not an issue. The examples in the master repo are on par with the version of transformers on master, so you need an installation from source to run them, which is clearly indicated in the README.
If you want to execute the example scripts as they were for v2.11.0, you should use the [2.11.0 tagged repo](https://github.com/huggingface/transformers/tree/v2.11.0).
Closing this issue, please reopen if anything I said was unclear. |
transformers | 5,165 | closed | Create README.md | Creating README.md file for model on Community contribution. | 06-21-2020 12:34:34 | 06-21-2020 12:34:34 | |
transformers | 5,164 | closed | output generate scores per hypothesis/token | # 🚀 Feature request
Thanks for doing such awesome work.
I'm interested in the hypothesis score when running `generate`.
This could be done per hypothesis, or preferably per token in the hypothesis.
## Motivation
The motivation is to gain confidence in my generated text.
I suggest:
1. adding a `return_scores` flag to `generate` in `modeling_utils.py`
2. in `_generate_beam_search`: if `return_scores` is set, also return the hypothesis scores.
For `_generate_beam_search`:
```python
best_scores = []
# retrieve best hypotheses
for i, hypotheses in enumerate(generated_hyps):
    sorted_hyps = sorted(hypotheses.beams, key=lambda x: x[0])
    for j in range(output_num_return_sequences_per_batch):
        effective_batch_idx = output_num_return_sequences_per_batch * i + j
        hyp_score, best_hyp = sorted_hyps.pop()
        sent_lengths[effective_batch_idx] = len(best_hyp)
        best.append(best_hyp)
        best_scores.append(hyp_score)

# shorter batches are filled with pad_token
if sent_lengths.min().item() != sent_lengths.max().item():
    assert pad_token_id is not None, "`Pad_token_id` has to be defined"
    sent_max_len = min(sent_lengths.max().item() + 1, max_length)
    decoded = input_ids.new(output_batch_size, sent_max_len).fill_(pad_token_id)

    # fill with hypothesis and eos_token_id if necessary
    for i, hypo in enumerate(best):
        decoded[i, : sent_lengths[i]] = hypo
        if sent_lengths[i] < max_length:
            decoded[i, sent_lengths[i]] = eos_token_id
else:
    # none of the hypotheses have an eos_token
    assert (len(hypo) == max_length for hypo in best)
    decoded = torch.stack(best).type(torch.long).to(next(self.parameters()).device)

if return_scores:
    return decoded, best_scores
return decoded
```
For `_generate_no_beam_search`:
```python
output_score = 0
while cur_len < max_length:
    model_inputs = self.prepare_inputs_for_generation(
        input_ids, past=past, attention_mask=attention_mask, use_cache=use_cache, **model_specific_kwargs
    )
    outputs = self(**model_inputs)
    next_token_logits = outputs[0][:, -1, :]

    # if model has past, then set the past variable to speed up decoding
    if self._use_cache(outputs, use_cache):
        past = outputs[1]

    # repetition penalty from CTRL paper (https://arxiv.org/abs/1909.05858)
    if repetition_penalty != 1.0:
        self.enforce_repetition_penalty_(next_token_logits, batch_size, 1, input_ids, repetition_penalty)

    if no_repeat_ngram_size > 0:
        # calculate a list of banned tokens to prevent repetitively generating the same ngrams
        # from fairseq: https://github.com/pytorch/fairseq/blob/a07cb6f40480928c9e0548b737aadd36ee66ac76/fairseq/sequence_generator.py#L345
        banned_tokens = calc_banned_ngram_tokens(input_ids, batch_size, no_repeat_ngram_size, cur_len)
        for batch_idx in range(batch_size):
            next_token_logits[batch_idx, banned_tokens[batch_idx]] = -float("inf")

    if bad_words_ids is not None:
        # calculate a list of banned tokens according to bad words
        banned_tokens = calc_banned_bad_words_ids(input_ids, bad_words_ids)
        for batch_idx in range(batch_size):
            next_token_logits[batch_idx, banned_tokens[batch_idx]] = -float("inf")

    # set eos token prob to zero if min_length is not reached
    if eos_token_id is not None and cur_len < min_length:
        next_token_logits[:, eos_token_id] = -float("inf")

    if do_sample:
        # Temperature (higher temperature => more likely to sample low probability tokens)
        if temperature != 1.0:
            next_token_logits = next_token_logits / temperature
        # Top-p/top-k filtering
        next_token_logits = top_k_top_p_filtering(next_token_logits, top_k=top_k, top_p=top_p)
        # Sample
        probs = F.softmax(next_token_logits, dim=-1)
        next_token = torch.multinomial(probs, num_samples=1).squeeze(1)
    else:
        # Greedy decoding
        next_token = torch.argmax(next_token_logits, dim=-1)

    # score of the chosen token, gathered from the logits  (batch_size, 1)
    next_score = torch.gather(next_token_logits, -1, next_token.unsqueeze(-1))

    # update generations and finished sentences
    if eos_token_id is not None:
        # pad finished sentences if eos_token_id exist
        tokens_to_add = next_token * unfinished_sents + (pad_token_id) * (1 - unfinished_sents)
    else:
        tokens_to_add = next_token

    # add token and increase length by one
    input_ids = torch.cat([input_ids, tokens_to_add.unsqueeze(-1)], dim=-1)
    output_score += next_score
    cur_len = cur_len + 1

    if eos_token_id is not None:
        eos_in_sents = tokens_to_add == eos_token_id
        # if sentence is unfinished and the token to add is eos, sent_lengths is filled with current length
        is_sents_unfinished_and_token_to_add_is_eos = unfinished_sents.mul(eos_in_sents.long()).bool()
        sent_lengths.masked_fill_(is_sents_unfinished_and_token_to_add_is_eos, cur_len)
        # unfinished_sents is set to zero if eos in sentence
        unfinished_sents.mul_((~eos_in_sents).long())

    # stop when there is a </s> in each sentence, or if we exceed the maximum length
    if unfinished_sents.max() == 0:
        break

    # extend attention_mask for new generated input if only decoder
    if self.config.is_encoder_decoder is False:
        attention_mask = torch.cat(
            [attention_mask, attention_mask.new_ones((attention_mask.shape[0], 1))], dim=-1
        )

# if there are different sentences lengths in the batch, some batches have to be padded
if sent_lengths.min().item() != sent_lengths.max().item():
    assert pad_token_id is not None, "`Pad_token_id` has to be defined if batches have different lengths"
    # finished sents are filled with pad_token
    decoded = input_ids.new(batch_size, sent_lengths.max().item()).fill_(pad_token_id)
else:
    decoded = input_ids

for hypo_idx, hypo in enumerate(input_ids):
    decoded[hypo_idx, : sent_lengths[hypo_idx]] = hypo[: sent_lengths[hypo_idx]]

if return_scores:
    return decoded, output_score
return decoded
```
In the next step we could save the score per token to allow the user to decide where they want to truncate the generated text as a function of confidence.
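For illustration, a caller-side sketch of how the proposed flag would be consumed (this is the suggested API above, not something that exists yet; it assumes a `model` and `tokenizer` are already loaded):
```python
# hypothetical usage of the proposed `return_scores` flag
decoded, scores = model.generate(input_ids, max_length=50, num_beams=4, return_scores=True)
for hyp, score in zip(decoded, scores):
    print(tokenizer.decode(hyp, skip_special_tokens=True), float(score))
```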
| 06-21-2020 12:34:29 | 06-21-2020 12:34:29 | I guess similar to `output_attentions` and `output_hidden_states`, we could output the `scores / probabilities` for generation, but I'm really not sure if it is required that often. What do you think @sshleifer @yjernite ? <|||||>I would suggest trying it on a branch and seeing if it produces better generations. I have been inspecting the scores this week (just by saving hypotheses to disk) and have not gotten much utility. If it helps produce better generations, however, we should obviously add this!
<|||||>Dear @patrickvonplaten and @sshleifer, Thanks for the quick reply.
I'm interested in the perplexity of my generated text as a function of different generation methods. This can be done using the probabilities of the output tokens.
Another interesting case that jumps to mind is auto-complete, where you want to present the user a generated text only if it passes some threshold of confidence.
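Concretely, given per-token log-probabilities (which is what the proposal above would expose), the perplexity is just a short sketch like:
```python
import torch

def perplexity(token_logprobs: torch.Tensor) -> float:
    # token_logprobs: 1-D tensor with one log-probability per generated token
    return torch.exp(-token_logprobs.mean()).item()
```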
<|||||>Those are actually very useful applications! We will soon have a bigger refactoring of the generate method I think and will hopefully include this.
As @sshleifer said, for now, it would be great if you can show how you would integrate it on a branch including some interesting results. <|||||>Fantastic. Will do. <|||||>Thanks for raising the issue @guyeyal. It would definitely be helpful to have a running example.
More generally @patrickvonplaten I think this functionality will be helpful for the line of research concerned with analyzing the role of perplexity as a training objective, as well as work on re-ranking generations or using stuff like noisy channel modeling, so I definitely think it should be in the next big refactor.
[https://arxiv.org/abs/1904.09751](https://arxiv.org/abs/1904.09751)
[https://arxiv.org/abs/1908.05731](https://arxiv.org/abs/1908.05731)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>I'll add a vote here that I'm interested in this too. I wrote some code locally very similar to guyeyal's.<|||||>Thanks for the great work!
I would also be interested in this functionality. I am using an autoregressive transformer model as part of a reinforcement learning problem. To alleviate the sample inefficiency of RL, it is very attractive to generate data using beam search, in order to add `num_beams > 1` of data to a buffer per time step. I would then like to bias the sampling of data from this buffer according to the probability of the generated sequence, defined like the diagram in this example:
https://huggingface.co/blog/how-to-generate#beam-search
@patrickvonplaten is this something that is likely to be covered in the PR here: https://github.com/huggingface/transformers/pull/6949
or is it better to open a new issue? Thanks!<|||||>There seems to be a lot of interest in this functionality! If someone feels like opening a PR that would be great!<|||||>> There seems to be a lot of interest in this functionality! If someone feels like opening a PR that would be great!
I saw a PR here, but not committed. #6289<|||||>> > There seems to be a lot of interest in this functionality! If someone feels like opening a PR that would be great!
>
> I saw a PR here, but not committed. #6289
Any idea why this wasn't commited?<|||||>> > > There seems to be a lot of interest in this functionality! If someone feels like opening a PR that would be great!
> >
> >
> > I saw a PR here, but not committed. #6289
>
> Any idea why this wasn't commited?
Wasn't quite working for me properly when I tried it. I did a fix locally based on 3.4.0, but the big refactor of generation_utils in 3.5.x broke it entirely again. Would be better to start afresh at this point, I think with all the changes to that file.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Is this feature available? Or still in the works?<|||||>See: https://discuss.huggingface.co/t/announcement-generationoutputs-scores-attentions-and-hidden-states-now-available-as-outputs-to-generate/3094<|||||>I am using the tensor flow implementation of T5. When I use `model.generate(input_ids=input_ids, num_beams=5, num_return_sequences=5, return_dict_in_generate=True)`, it returns a tensor of shape `(1,5,vocab_size)`. Essentially, it is giving me the probability of a **single** word for each beam. In the description for `beam_search` in the link above, it says that score refers to "all processed lm head logits + the current beam_scores for **each output token**". In this link, https://huggingface.co/transformers/internal/generation_utils.html, it says that scores is "the prediction scores of the language modelling head, for **each generation step**". Ideally, we want a score for each token at every step of the generation for each beam search. So, wouldn't the shape of the output be (`batch_size`,`number_of_beams`,`sequence_length`,`vocab_size`)? That way, we can follow the path that each beam search went through to get the max probability sequence. In other words, for each beam, we have the probability of each token in our vocabulary for each generation step (until the max length).
I want to use these token probabilities to calculate the sequence probabilities. In the blog for `generate()` (https://huggingface.co/blog/how-to-generate#beam-search), it shows that beam search looks for the highest product of probabilities between all sequences. "_At time step 2, beam search finds that the word sequence ("The","dog","has"), has with 0.36 a higher probability than ("The","nice","woman"), which has 0.2_". How can we get access to this sequence level probability, as show in this blog? <|||||>@patrickvonplaten, in a follow up to the post above, does the **tensorflow** implementation of `model.generate()` produce either the `sequence_scores` that is also available in the pytorch implementation? Or, somehow the `scores` returns a tensor that is in the shape `(batch_size,number_of_beams,sequence_length,vocab_size)`, where we can calculate the product of the token probabilities at each step in the beam_search for each sequence? Thanks for your help! <|||||>how can I get the gold sequence generate score?<|||||>I don't think we have the TF implementation of this function yet. Also cc @gante here<|||||>@xxllp for PT, we have a function to compute the scores with beam search. It is not documented yet, but you can check the function [here](https://github.com/huggingface/transformers/blob/main/src/transformers/generation_utils.py#L804).
For TF, that function is yet to be implemented (but it is in our TODO list :) ) |
transformers | 5,163 | closed | Why doesn't stride in squad_convert_example_to_features‘s encode_plus set to doc_stride? | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
| 06-21-2020 05:39:35 | 06-21-2020 05:39:35 | Can you give more details as asked in the issue template?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,162 | closed | How to save the created embedding of the text corpus. | After converting the text corpus into word embeddings, how can we save the embeddings to a file? It is not practical or feasible to create the word embeddings each and every time. Is there any workaround to save the embeddings? | 06-21-2020 03:14:06 | 06-21-2020 03:14:06 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
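For anyone landing here later: the embeddings are ordinary tensors, so they can be computed once and written to disk. A minimal sketch, assuming BERT and PyTorch (model name and file name are just examples):
```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

texts = ["first document", "second document"]
with torch.no_grad():
    enc = tokenizer.batch_encode_plus(texts, pad_to_max_length=True, return_tensors="pt")
    embeddings = model(enc["input_ids"], attention_mask=enc["attention_mask"])[0]

torch.save(embeddings, "embeddings.pt")   # compute once, save to disk
embeddings = torch.load("embeddings.pt")  # reload later instead of recomputing
```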
|
transformers | 5,161 | closed | Keras model created from individual Bert Layers has weights not shown in trainable_weights nor non_trainable_weights. model.summary() / utils.plot_model shows those weights as part of graph though | # 🐛 Bug
## Information
I created a model which takes 2 layers from a BERT model and creates a mini-BERT model.
However, not all the weights/layers from the component layers are in this model. But these weights are shown as connected in `model.summary()` and in
```
tf.keras.utils.plot_model(
model, to_file='model.png', show_shapes=False, show_layer_names=True,
rankdir='TB', expand_nested=False, dpi=96
)
```
Model I am using (Bert, XLNet ...): bert-base-uncased
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ X] my own modified scripts: (give details below)
## To reproduce
Steps to reproduce the behavior:
Here is a Colab notebook with a minimal example that reproduces the issue.
https://colab.research.google.com/drive/1n3_XNhdgH6Qo7GT-M570lIKWAoU3TML5?usp=sharing
And here is the code
```
!pip install transformers --q
%tensorflow_version 2.x
from transformers import TFBertModel, AutoModel, TFRobertaModel, AutoTokenizer
import tensorflow as tf
import tensorflow_addons as tfa
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
from tensorflow import keras
from tensorflow.keras import layers
from copy import deepcopy
logger = tf.get_logger()
logger.info(tf.__version__)
def get_mini_models():
tempModel = TFRobertaModel.from_pretrained('bert-base-uncased', from_pt=True)
layer9 = deepcopy(tempModel.layers[0].encoder.layer[8])
layer10 = deepcopy(tempModel.layers[0].encoder.layer[9])
inputHiddenVals = tf.keras.Input(shape=[None, None], dtype=tf.float32, name='input_Q',
batch_size=None)
hidden1 = layer9((inputHiddenVals, None, None))
hidden2 = layer10((hidden1[0], None, None))
modelNew = tf.keras.Model(inputs=inputHiddenVals, outputs=hidden2)
del tempModel
return modelNew
@tf.function
def loss_fn(_, probs):
bs = tf.shape(probs)[0]
labels = tf.eye(bs, bs)
return tf.losses.categorical_crossentropy(labels,
probs,
from_logits=True)
model = get_mini_models()
model.compile(loss=loss_fn,
optimizer=tfa.optimizers.AdamW(weight_decay=1e-4, learning_rate=1e-5,
epsilon=1e-06))
# Get model and layers directly to compare
tempModel = TFRobertaModel.from_pretrained('bert-base-uncased', from_pt=True)
layer9 = deepcopy(tempModel.layers[0].encoder.layer[8])
layer10 = deepcopy(tempModel.layers[0].encoder.layer[9])
# Only one layer, and that layer also has missing weights.
for i, var in enumerate(model.weights):
print(model.weights[i].name)
# Full weights for one layer
for i, var in enumerate(layer9.weights):
print(layer9.weights[i].name)
# Test what correct output should be
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
inputt = tokenizer.encode('This is a sentence', return_tensors='tf')
outt = tempModel(inputt)[0]
# Test model output. Not the same.
model(outt)
# Model summary somehow lists the weights
model.summary()
# Model diagram shows the correct connections between all the layers.
tf.keras.utils.plot_model(
model, to_file='model.png', show_shapes=False, show_layer_names=True,
rankdir='TB', expand_nested=False, dpi=96
)
```
Edit: I also tried making the layers from scratch, and setting the weights directly, same result. Here's a colab notebook that does htis. https://colab.research.google.com/drive/1EC_fObSp9lUsj_PFaYgFtRI93ErPYmU9?usp=sharing
## Expected behavior
The model should contain all the weights from the component layers, and the output of the model should be the same as executing the layers in the same way outside the model.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.11.0
- Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.0+cu101 (False)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?:no
- Using distributed or parallel set-up in script?: no
| 06-20-2020 21:57:51 | 06-20-2020 21:57:51 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,160 | closed | Create README.md | Creating README.md file for model on Community contribution. | 06-20-2020 18:26:48 | 06-20-2020 18:26:48 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5160?src=pr&el=h1) Report
> Merging [#5160](https://codecov.io/gh/huggingface/transformers/pull/5160?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/68e19f1c228c92d5d800533f558faff24b57127a&el=desc) will **decrease** coverage by `0.00%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5160?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5160 +/- ##
==========================================
- Coverage 77.93% 77.92% -0.01%
==========================================
Files 137 137
Lines 23475 23475
==========================================
- Hits 18295 18294 -1
- Misses 5180 5181 +1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5160?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5160/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.81% <0.00%> (-0.30%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5160/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.28% <0.00%> (+0.12%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5160?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5160?src=pr&el=footer). Last update [68e19f1...2bf6959](https://codecov.io/gh/huggingface/transformers/pull/5160?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,159 | closed | Transformer pipeline loading model and tokenizer on every prediction request | # 🐛 Bug
## Information
I'm trying to build an API which accepts a question and context; the response should be the answer. I'm trying to use the Transformers question-answering pipeline with a BERT model.
I'm initializing the model and pipeline object in the __init__ method and trying to use the pipeline object in another method, so that on server startup the model is loaded into memory and prediction is fast.
But on every prediction request, the __init__ method is called. When I debugged the flow, I think it's because the feature calculation happens using Pool threads.
https://github.com/huggingface/transformers/blob/68e19f1c228c92d5d800533f558faff24b57127a/src/transformers/data/processors/squad.py#L318
Can you please help to resolve the issue?
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] my own modified scripts: Own script
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: SQUaD
## To reproduce
Steps to reproduce the behavior:
Source Code:
```
from transformers import BertTokenizer, BertForQuestionAnswering
import torch
import time
from transformers import pipeline

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Using {device} for QNA Model prediction")


class QNAModel:
    def __init__(self):
        start_time = time.time()
        self.tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
        self.model = BertForQuestionAnswering.from_pretrained(
            'bert-large-uncased-whole-word-masking-finetuned-squad')
        self.pipeline = pipeline("question-answering", model=self.model, tokenizer=self.tokenizer)
        print("--- %s seconds model load time ---" % (time.time() - start_time))

    def get_answers(self, passages, question):
        start_time = time.time()
        answers = []
        for passage in passages:
            context = passage["passage"]
            print(self.pipeline(question=question, context=context))
        print("--- %s seconds Total QNA model prediction---" % (time.time() - start_time))
        return answers
```
Testing
```
qna_model = QNAModel()
qna_model.get_answers(passages1, question1)
qna_model.get_answers(passages1, question2)
```
Logs,
```
Using cpu for QNA Model prediction
--- 16.12714695930481 seconds model load time ---
convert squad examples to features: 100%|██████████| 1/1 [00:00<00:00, 58.58it/s]
add example index and unique id: 100%|██████████| 1/1 [00:00<?, ?it/s]
{'score': 0.39260570705747, 'start': 364, 'end': 380, 'answer': '<answer1>'}
Using cpu for QNA Model prediction
--- 14.559933185577393 seconds model load time ---
convert squad examples to features: 100%|██████████| 1/1 [00:00<00:00, 187.58it/s]
add example index and unique id: 100%|██████████| 1/1 [00:00<?, ?it/s]
{'score': 0.2136769815853059, 'start': 0, 'end': 84, 'answer': '<answer2>'}
--- 51.28193521499634 seconds Total QNA model prediction---
```
## Expected behavior
Model should load once so that prediction is fast
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.7.0
- Platform: Windows
- Python version: 3.7
- PyTorch version (GPU?): 1.5.0+cpu
- Tensorflow version (GPU?): NA
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 06-20-2020 16:17:19 | 06-20-2020 16:17:19 | Hi @Sharathmk99, can post the snippet of your server code where you are initialising `QNAModel()` <|||||>I am also facing the same issue? please help<|||||>Any fix for this?<|||||>This windows issue has not fixed even in the latest versions of transformers. @patil-suraj Please help!
```
from transformers import pipeline
model_name = "deepset/roberta-base-squad2"
nlp = pipeline("question-answering", model=model_name, tokenizer=model_name, framework="pt")
nlp(question=question, context=text)
```
This only happens on Windows<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
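For anyone hitting this on Windows: the feature conversion in the QA pipeline spins up a multiprocessing pool, and on Windows the spawned worker re-imports the main module, so any model construction at module level runs again in the child process. A minimal sketch of the usual workaround is to guard the entry point (this reuses `QNAModel` from the snippet above; the example passage and question are made up):
```python
# assuming QNAModel is defined as in the issue above (or imported from wherever it lives)

if __name__ == "__main__":
    qna_model = QNAModel()  # constructed once, in the parent process only
    passages = [{"passage": "The Eiffel Tower is located in Paris."}]
    print(qna_model.get_answers(passages, "Where is the Eiffel Tower?"))
```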
|
transformers | 5,158 | closed | Fix PABEE's result table | 06-20-2020 12:16:04 | 06-20-2020 12:16:04 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5158?src=pr&el=h1) Report
> Merging [#5158](https://codecov.io/gh/huggingface/transformers/pull/5158?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/aa6a29bc25b663e1311c5c4fb96b004cf8a6d2b6&el=desc) will **increase** coverage by `0.02%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5158?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5158 +/- ##
==========================================
+ Coverage 77.92% 77.94% +0.02%
==========================================
Files 137 137
Lines 23475 23475
==========================================
+ Hits 18292 18298 +6
+ Misses 5183 5177 -6
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5158?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5158/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.96% <0.00%> (+0.14%)` | :arrow_up: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5158/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.40% <0.00%> (+0.40%)` | :arrow_up: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5158/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.90% <0.00%> (+1.38%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5158?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5158?src=pr&el=footer). Last update [aa6a29b...23f6a30](https://codecov.io/gh/huggingface/transformers/pull/5158?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 5,157 | closed | [examples] fixes arguments for summarization finetune scripts | This PR fixes arguments in bash scripts for finetuning bart/bart_tiny/t5 models in summarization examples.
After changes that were merged in #4951 i noticed some small problems. That includes:
* missing default arguments for `finetune.py` (--data_dir, output_dir)
* invalid argument name for number of gpu to use (`n_gpu` instead of `gpus`)
* argument `model_type` that is not present in list of arguments and crashes the script | 06-20-2020 11:11:03 | 06-20-2020 11:11:03 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5157?src=pr&el=h1) Report
> Merging [#5157](https://codecov.io/gh/huggingface/transformers/pull/5157?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/68e19f1c228c92d5d800533f558faff24b57127a&el=desc) will **increase** coverage by `0.00%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5157?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5157 +/- ##
=======================================
Coverage 77.93% 77.94%
=======================================
Files 137 137
Lines 23475 23475
=======================================
+ Hits 18295 18297 +2
+ Misses 5180 5178 -2
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5157?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5157/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.00% <0.00%> (-0.41%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5157/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.96% <0.00%> (-0.15%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5157/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.90% <0.00%> (+1.38%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5157?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5157?src=pr&el=footer). Last update [68e19f1...49b1e42](https://codecov.io/gh/huggingface/transformers/pull/5157?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,156 | closed | BertForMaskedLM "labels" is an unexpected keyword | # 🐛 Bug
## Information
The official BertForMaskedLM example: https://huggingface.co/transformers/model_doc/bert.html#bertformaskedlm
has a bug. When running: outputs = model(input_ids, labels=input_ids), it alerts: TypeError: forward() got an unexpected keyword argument 'labels'
Model I am using (Bert, XLNet ...):
BERT
Language I am using the model on (English, Chinese ...):
English
The problem arises when using:
* [ ] the official example scripts: (give details below)
from transformers import BertTokenizer, BertForMaskedLM
import torch
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForMaskedLM.from_pretrained('bert-base-uncased')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0) # Batch size 1
outputs = model(input_ids, labels=input_ids)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. https://huggingface.co/transformers/model_doc/bert.html#bertformaskedlm
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
should work with labels??
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.11.0
- Platform: Linux
- Python version: 3.7
- PyTorch version (GPU?): 1.5.1
- Tensorflow version (GPU?): 2.2.0
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
| 06-20-2020 08:47:11 | 06-20-2020 08:47:11 | Hi, @guoxuxu , `lm_labels` is changed to `labels` in a recent commit on master. If you are using master then use `labels` otherwise use `lm_labels`
@sgugger This change is causing a lot of confusion. Would it be a good idea to keep master and release docs separate ? <|||||>This has just been done. The [documentation](https://huggingface.co/transformers/) now shows the latest stable release (v2.11.0) and you have to opt-in to see the [master documentation](https://huggingface.co/transformers/master/).
I'll work on a version selector next.<|||||>Thanks @sgugger !<|||||>I think we can close the issue as a result. Please reopen if any problem persists. |
transformers | 5,155 | closed | new tokenizer backend breaks old code | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): any model
Language I am using the model on (English, Chinese ...): english
The problem arises when using:
* [x] the official example scripts: (give details below) QA script from official repo
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name) Squad 1 and 2
## To reproduce
Steps to reproduce the behavior:
Just try to run the example script for QA task
The error output is too large to copy in here.
The new tokenizer backend is not backward compatible. The truncation is set to false by default.
With the new tokenizer backend, truncation needs to be initialized with a True value, but in the example and older code there is often no way to set it when calling.
I think the feature conversion code has not been updated to the new backend, right?
## Expected behavior
It should train the model correctly
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: any
- Python version: 3.7-3.8
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
| 06-20-2020 08:46:26 | 06-20-2020 08:46:26 | Can you share the exact command line you ran and the last 20-30 lines of the error message?<|||||>The command given in the readme
Truncation was not explicitely activated but `max_length` is provided a specific value, please use `truncation=True` to explicitely truncate examples to max length. Defaulting to 'only_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you may want to check this is the right behavior.
This is getting printed some hundreds of times.
<|||||>Truncation was not explicitely activated but `max_length` is provided a specific value, please use `truncation=True` to explicitely truncate examples to max length. Defaulting to 'only_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you may want to check this is the right behavior.
"""
Traceback (most recent call last):
File "/home/a-ware/anaconda3/envs/transformers/lib/python3.8/multiprocessing/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "/home/a-ware/anaconda3/envs/transformers/lib/python3.8/multiprocessing/pool.py", line 48, in mapstar
return list(map(*args))
File "/home/a-ware/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/data/processors/squad.py", line 134, in squad_convert_example_to_features
encoded_dict = tokenizer.encode_plus(
File "/home/a-ware/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1504, in encode_plus
return self._encode_plus(
File "/home/a-ware/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/tokenization_utils.py", line 358, in _encode_plus
return self._prepare_for_model(
File "/home/a-ware/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/tokenization_utils.py", line 573, in _prepare_for_model
ids, pair_ids, overflowing_tokens = self.truncate_sequences(
File "/home/a-ware/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/tokenization_utils.py", line 675, in truncate_sequences
assert len(ids) > num_tokens_to_remove
AssertionError
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "runsquad.py", line 827, in <module>
main()
File "runsquad.py", line 765, in main
train_dataset = load_and_cache_examples(args, tokenizer, evaluate=False, output_examples=False)
File "runsquad.py", line 451, in load_and_cache_examples
features, dataset = squad_convert_examples_to_features(
File "/home/a-ware/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers/data/processors/squad.py", line 326, in squad_convert_examples_to_features
features = list(
File "/home/a-ware/anaconda3/envs/transformers/lib/python3.8/site-packages/tqdm/std.py", line 1129, in __iter__
for obj in iterable:
File "/home/a-ware/anaconda3/envs/transformers/lib/python3.8/multiprocessing/pool.py", line 420, in <genexpr>
return (item for chunk in result for item in chunk)
File "/home/a-ware/anaconda3/envs/transformers/lib/python3.8/multiprocessing/pool.py", line 868, in next
raise value
AssertionError
Edit: http://prntscr.com/t3ak3t
The error occurs while processing the data<|||||>Same behaviour, any idea how to debug it? <|||||>As a temporary fix I took an older fork without the new tokenizers logic and copied the new models into it<|||||>> Can you share the exact command line you ran and the last 20-30 lines of the error message?
It seems it is related to the issue I created here: https://github.com/huggingface/tokenizers/issues/307#issuecomment-647603101
cc: @sshleifer <|||||>Yes, it is the same error
> > Can you share the exact command line you ran and the last 20-30 lines of the error message?
>
> It seems it it related with the issue I created here: [huggingface/tokenizers#307 (comment)](https://github.com/huggingface/tokenizers/issues/307#issuecomment-647603101)
> cc: @sshleifer
<|||||>Still facing the same issue, did the fix work?<|||||>Was this issue solved? I am facing the same problem. It was working just fine this evening, and I re-installed the transformers module on a new notebook; that's when it fetched the newer version and started causing this error. I've tested it with Bert and gpt2, and it still persists.
I read the code for tokenization_utils_base; here is where it originates. Based on the "if" conditional block, I set padding to True, but it still gives the same error.
```
def _get_padding_truncation_strategies(
self, padding=False, truncation=False, max_length=None, pad_to_multiple_of=None, verbose=True, **kwargs
):
""" Find the correct padding/truncation strategy with backward compatibility
for old arguments (truncation_strategy and pad_to_max_length) and behaviors.
"""
old_truncation_strategy = kwargs.pop("truncation_strategy", "do_not_truncate")
old_pad_to_max_length = kwargs.pop("pad_to_max_length", False)
# Backward compatibility for previous behavior, maybe we should deprecate it:
# If you only set max_length, it activates truncation for max_length
if max_length is not None and padding is False and truncation is False:
if verbose:
logger.warning(
"Truncation was not explicitely activated but `max_length` is provided a specific value, "
"please use `truncation=True` to explicitely truncate examples to max length. "
"Defaulting to 'only_first' truncation strategy. "
"If you encode pairs of sequences (GLUE-style) with the tokenizer you may want to check this is the right behavior."
)
truncation = "only_first"
```<|||||>Are you installing it from source?<|||||>@mrm8488 No, currently not, it's from pip. Could it be that pip isn't getting the updated code? The version shows transformers-3.0.0, and I remember yesterday it wasn't 3.0.0<|||||>Closing as addressing @llStringll's issue in #5377.
transformers | 5,154 | closed | Lite transformer | # 🌟 New model addition
## Model description
Transformer has become ubiquitous in natural language processing (e.g., machine translation, question answering); however, it requires enormous amount of computations to achieve high performance, which makes it not suitable for mobile applications that are tightly constrained by the hardware resources and battery. In this paper, we present an efficient mobile NLP architecture, Lite Transformer to facilitate deploying mobile NLP applications on edge devices. The key primitive is the Long-Short Range Attention (LSRA), where one group of heads specializes in the local context modeling (by convolution) while another group specializes in the long-distance relationship modeling (by attention). Such specialization brings consistent improvement over the vanilla transformer on three well-established language tasks: machine translation, abstractive summarization, and language modeling. Under constrained resources (500M/100M MACs), Lite Transformer outperforms transformer on WMT'14 English-French by 1.2/1.7 BLEU, respectively. Lite Transformer reduces the computation of transformer base model by 2.5x with 0.3 BLEU score degradation. Combining with pruning and quantization, we further compressed the model size of Lite Transformer by 18.2x. For language modeling, Lite Transformer achieves 1.8 lower perplexity than the transformer at around 500M MACs. Notably, Lite Transformer outperforms the AutoML-based Evolved Transformer by 0.5 higher BLEU for the mobile NLP setting without the costly architecture search that requires more than 250 GPU years. Code has been made available at https://github.com/mit-han-lab/lite-transformer
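A very rough sketch of the LSRA idea described above, purely as my own illustration (layer choices and sizes are not the authors' implementation):

```python
import torch
import torch.nn as nn

class LSRABlock(nn.Module):
    """Split the hidden dim: one half goes through self-attention (global branch),
    the other half through a depthwise convolution (local branch)."""

    def __init__(self, d_model: int = 256, n_heads: int = 4, kernel_size: int = 7):
        super().__init__()
        self.half = d_model // 2
        self.attn = nn.MultiheadAttention(self.half, n_heads, batch_first=True)
        self.conv = nn.Conv1d(self.half, self.half, kernel_size,
                              padding=kernel_size // 2, groups=self.half)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, seq_len, d_model)
        global_x, local_x = x.split(self.half, dim=-1)
        attn_out, _ = self.attn(global_x, global_x, global_x)
        conv_out = self.conv(local_x.transpose(1, 2)).transpose(1, 2)
        return torch.cat([attn_out, conv_out], dim=-1)

block = LSRABlock()
print(block(torch.randn(2, 16, 256)).shape)  # torch.Size([2, 16, 256])
```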
<!-- Important information -->
## Open source status
* [x] the model implementation is available: (give details)https://github.com/mit-han-lab/lite-transformer
* [x] the model weights are available: (give details) linked in readme of GitHub repo
* [x] who are the authors: (mention them, if possible by @gh-username)Zhanghao Wu • Zhijian Liu • Ji Lin • Yujun Lin • Song Han
| 06-20-2020 06:35:28 | 06-20-2020 06:35:28 | Very cool!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,153 | closed | Create README.md | electra_large_discriminator_squad2_512 LM for Question Answering | 06-20-2020 05:27:12 | 06-20-2020 05:27:12 | File extension is missing, can you add it?<|||||>(should be `model_cards/ahotrod/electra_large_discriminator_squad2_512/README.md`)
Thanks! |
transformers | 5,152 | closed | Create README.md | Creating README.md file for model on Community contribution. | 06-20-2020 03:32:05 | 06-20-2020 03:32:05 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5152?src=pr&el=h1) Report
> Merging [#5152](https://codecov.io/gh/huggingface/transformers/pull/5152?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5ed94b231269972c59d53bf4134a842c2273e814&el=desc) will **decrease** coverage by `0.38%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5152?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5152 +/- ##
==========================================
- Coverage 78.32% 77.93% -0.39%
==========================================
Files 137 137
Lines 23472 23472
==========================================
- Hits 18385 18294 -91
- Misses 5087 5178 +91
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5152?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5152/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-28.03%)` | :arrow_down: |
| [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5152/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `90.45% <0.00%> (-0.83%)` | :arrow_down: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5152/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `94.81% <0.00%> (-0.38%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5152?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5152?src=pr&el=footer). Last update [5ed94b2...9874049](https://codecov.io/gh/huggingface/transformers/pull/5152?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,151 | closed | TFBertForSequenceClassification: TypeError: call() got an unexpected keyword argument 'labels' | # 🐛 Bug
## Information
Model I am using TFBertForSequenceClassification
Language I am using the model on: English
The problem arises when using:
* [X] the official example scripts: (give details below)
The tasks I am working on is:
* [X] my own task or dataset: (give details below)
To classify text and learn the model and package.
## To reproduce
Steps to reproduce the behavior:
1. pip install transformers (currently 2.11.0)
2. run default code on website: https://huggingface.co/transformers/model_doc/bert.html#tfbertforsequenceclassification
3. I tried to follow this: https://github.com/huggingface/transformers/issues/4848; and the issue remains the same.
```
import tensorflow as tf
from transformers import BertTokenizer, TFBertForSequenceClassification
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = TFBertForSequenceClassification.from_pretrained('bert-base-uncased')
input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute"))[None, :] # Batch size 1
labels = tf.reshape(tf.constant(1), (-1, 1)) # Batch size 1
outputs = model(input_ids, labels=labels)
loss, logits = outputs[:2]
```
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.11.0
- Platform: Windows-10-10.0.18362-SP0
- Python version: 3.7.7
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.2.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| 06-20-2020 03:28:30 | 06-20-2020 03:28:30 | You should try this
`pip install git+https://github.com/huggingface/transformers`
not
`pip install transformers`
because the latest version isn't available in any release<|||||>It is available in the version v3.0.0 which was released this morning :)<|||||>I'm having the same issue with v3.0.2, following is the error msg:
> `TypeError: tf__call() got an unexpected keyword argument 'labels'`<|||||>> I'm having the same issue with v3.0.2, following is the error msg:
>
> > `TypeError: tf__call() got an unexpected keyword argument 'labels'`
I would like to elaborate on this issue. I carefully checked the source code, and the error is that `TFDistilBertModel` receives the `labels` keyword argument and throws this error. I have ensured that the data is in the desired form as a `tf.data.Dataset`. The code I wrote is effectively identical to [`run_tf_glue.py`](https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_tf_glue.py), and the example code runs correctly.
> Note: I pull down the github repo and install the dependency as described [here](https://huggingface.co/transformers/examples.html#important-note). Is this possibly related to the issue?
I also tried the following code to reinstall transformers but it still doesn't work
```
pip uninstall transformers
pip install git+https://github.com/huggingface/transformers
```
This has been to a point where I'm extremely frustrated, it would be really appreciated if someone can point me to a right direction.<|||||>> > I'm having the same issue with v3.0.2, following is the error msg:
> > > `TypeError: tf__call() got an unexpected keyword argument 'labels'`
>
> I would like to elaborate more upon this issue. I carefully checked the source code and the error is that the `TFDistilBertModel` get the `label` keyword argument and throws this error. I have ensured that the data is fitted as the desired form in the type of `tf.data.Dataset`. The code I wrote is effectively identical to the [`run_tf_glue.py`](https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_tf_glue.py) and the example code is running correctly.
>
> > Note: I pull down the github repo and install the dependency as described [here](https://huggingface.co/transformers/examples.html#important-note). Is this possibly related to the issue?
>
> I also tried the following code to reinstall transformers but it still doesn't work
>
> ```
> pip uninstall transformers
> pip install git+https://github.com/huggingface/transformers
> ```
>
> This has been to a point where I'm extremely frustrated, it would be really appreciated if someone can point me to a right direction.
Did you want to do a classification task?
If so, you may need to use `TFDistilBertForSequenceClassification` or `TFDistilBertForTokenClassification`.
The `TFDistilBertModel` class doesn't contain a classification layer, so there is no 'labels' argument.<|||||>Post your code, as installing the latest transformers fixed the issue for me as recommended here.<|||||>@afogarty85 Hi, please see the code below.
Regarding to the training data, the size of data is around 5000 text-label pair, I created this `tf.data.Dataset` inspired by the [`glue_convert_examples_to_features.py`](https://github.com/huggingface/transformers/blob/eae6d8d14f1d25d62c3fe9e7e410607bbaf69787/src/transformers/data/processors/glue.py#L35)
```python
import os
import json
import re
from pprint import pprint
from dataclasses import dataclass, field
from dotenv import load_dotenv
import numpy as np
import pandas as pd
import tensorflow as tf
from transformers import (
AutoConfig,
AutoTokenizer,
TFAutoModel,
TFTrainer,
TFTrainingArguments,
)
from sklearn.metrics import precision_recall_fscore_support
from tc_data import TopCoder
load_dotenv()
def build_dataset(tokenizer):
""" Build td.data.Dataset out of text and prize range."""
# Load TopCoder data
tc = TopCoder()
tc_req = tc.get_filtered_requirements()
tc_meta = tc.get_filtered_challenge_info()
# Convert float prize into categorical prize range
interval = np.linspace(0, 3000, 31)[:-1]
tc_prz_range = tc_meta['total_prize'].apply(lambda prz: np.searchsorted(interval, prz, side='right') - 1)
tc_prz_range.name = 'prize_cat'
req_prz_df = pd.concat([tc_req['requirement'], tc_prz_range], axis=1) # user this df to ensure the index of text and label is aligned
dataset_size = len(req_prz_df)
# batched encode the str to `input_ids` and `attention_mask`
batched_encoded = tokenizer(req_prz_df['requirement'].to_list(), padding='max_length', truncation=True)
# Features are tuple of {'input_ids': [...], 'attention_mask': [...]} and prize range label
features = [({k: batched_encoded[k][i] for k in batched_encoded}, req_prz_df['prize_cat'].iloc[i]) for i in range(len(req_prz_df))]
input_names = tuple(batched_encoded.keys())
def gen():
""" generator used in `tf.data.Dataset.from_generator`."""
for encoded_str, label in features:
yield encoded_str, label
return (
tf.data.Dataset.from_generator(
gen,
({k: tf.int32 for k in batched_encoded}, tf.int32),
({k: tf.TensorShape([512]) for k in batched_encoded}, tf.TensorShape([]))
),
dataset_size
)
def compute_metrics(pred):
""" Compute eval metrics
reference: https://huggingface.co/transformers/training.html#tensorflow
"""
labels = pred.label_ids
preds = pred.predictions.argmax(-1)
precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average='binary')
acc = (preds == labels).mean()
return {
'accuracy': acc,
'f1': f1,
'precision': precision,
'recall': recall
}
def finetune_with_tftrainer():
""" Fine tune with TFTrainer"""
config = AutoConfig.from_pretrained(os.getenv('MODEL_NAME'), cache_dir=os.getenv('OUTPUT_DIR'), num_labels=30)
tokenizer = AutoTokenizer.from_pretrained(os.getenv('MODEL_NAME'), cache_dir=os.getenv('OUTPUT_DIR'))
training_args = TFTrainingArguments(
output_dir=os.getenv('OUTPUT_DIR'),
logging_dir=os.getenv('OUTPUT_DIR'),
overwrite_output_dir=True,
do_train=True,
do_eval=True,
learning_rate=2e-5,
)
with training_args.strategy.scope():
model = TFAutoModel.from_pretrained(os.getenv('MODEL_NAME'), config=config, cache_dir=os.getenv('OUTPUT_DIR'))
# Get data for fine-tuning
dataset, dataset_size = build_dataset(tokenizer)
# shuffle and split train/test tasks manuanly
dataset = dataset.shuffle(dataset_size)
train_size, test_size = int(dataset_size * (4 / 5)), dataset_size - int(dataset_size * (4 / 5)) # 8-2 split
train_data, test_data = dataset.take(train_size), dataset.skip(train_size)
trainer = TFTrainer(
model=model,
args=training_args,
train_dataset=train_data,
eval_dataset=test_data,
compute_metrics=compute_metrics
)
# Train the model
trainer.train()
trainer.save_model()
tokenizer.save_pretrained(os.getenv('OUTPUT_DIR'))
# Evaluate the model
result = trainer.evaluate()
pprint(result)
with open(os.path.join(os.getenv('OUTPUT_DIR'), 'eval_results.json'), 'w') as fwrite:
json.dump(result, fwrite, indent=4)
if __name__ == "__main__":
finetune_with_tftrainer()
```<|||||>@BenjiTheC
try replace `TFAutoModel ` with `TFAutoModelForSequenceClassification`<|||||>@QixinLi My purpose for fine-tuning the model is to use the last hidden layer state of BERT combining with some other features to continue forward in a bigger NN. Will using `TFAutoModelForSequenceClassification` compromise this goal?
Thanks!<|||||>> @QixinLi My purpose for fine-tuning the model is to use the last hidden layer state of BERT combining with some other features to continue forward in a bigger NN. Will using `TFAutoModelForSequenceClassification` compromise this goal?
>
> Thanks!
@QixinLi It's working after the replacement, but still wondering if this will impact my goal for finetuning, thanks a lot!<|||||>`TFAutoModelForSequenceClassification ` 是一个封装好的用于文本分类的bert模型。以你貌似用到的`TFDistilBertForSequenceClassification`为例:
```python
class TFDistilBertForSequenceClassification(TFDistilBertPreTrainedModel, TFSequenceClassificationLoss):
def __init__(self, config, *inputs, **kwargs):
super().__init__(config, *inputs, **kwargs)
self.num_labels = config.num_labels
self.distilbert = TFDistilBertMainLayer(config, name="distilbert")
self.pre_classifier = tf.keras.layers.Dense(
config.dim,
kernel_initializer=get_initializer(config.initializer_range),
activation="relu",
name="pre_classifier",
)
self.classifier = tf.keras.layers.Dense(
config.num_labels, kernel_initializer=get_initializer(config.initializer_range), name="classifier"
)
self.dropout = tf.keras.layers.Dropout(config.seq_classif_dropout)
```
它有两层线性层和一层dropout。
不过它在库里是封装好的。如果你想在bert最后一层输出之后再去加一些自定义的网络结构,可能需要自定义一个model类,并且继承`TFDistilBertPreTrainedModel`。
如果你只是想做文本分类,那么`TFAutoModelForSequenceClassification`应该能满足你的要求。<|||||>> > @QixinLi My purpose for fine-tuning the model is to use the last hidden layer state of BERT combining with some other features to continue forward in a bigger NN. Will using `TFAutoModelForSequenceClassification` compromise this goal?
> > Thanks!
>
> @QixinLi It's working after the replacement, but still wondering if this will impact my goal for finetuning, thanks a lot!
I do not think so. People have tend to have found that extracting the CLS token from the hidden layer is what you want, instead of the embeddings for all your tokens. Some discussion on that is here: https://github.com/huggingface/transformers/issues/1950<|||||>> > > @QixinLi My purpose for fine-tuning the model is to use the last hidden layer state of BERT combining with some other features to continue forward in a bigger NN. Will using `TFAutoModelForSequenceClassification` compromise this goal?
> > > Thanks!
> >
> >
> > @QixinLi It's working after the replacement, but still wondering if this will impact my goal for finetuning, thanks a lot!
>
> I do not think so. People have tend to have found that extracting the CLS token from the hidden layer is what you want, instead of the embeddings for all your tokens. Some discussion on that is here: #1950
Can you refer me to a specific comment? My purpose is exactly described in [this comment](https://github.com/huggingface/transformers/issues/1950#issuecomment-558679189), but it seems like a `...ForSequenceClassification` adds a specific mission type. Thanks!<|||||>> `TFAutoModelForSequenceClassification ` 是一个封装好的用于文本分类的bert模型。以你貌似用到的`TFDistilBertForSequenceClassification`为例:
>
> ```python
> class TFDistilBertForSequenceClassification(TFDistilBertPreTrainedModel, TFSequenceClassificationLoss):
> def __init__(self, config, *inputs, **kwargs):
> super().__init__(config, *inputs, **kwargs)
> self.num_labels = config.num_labels
>
> self.distilbert = TFDistilBertMainLayer(config, name="distilbert")
> self.pre_classifier = tf.keras.layers.Dense(
> config.dim,
> kernel_initializer=get_initializer(config.initializer_range),
> activation="relu",
> name="pre_classifier",
> )
> self.classifier = tf.keras.layers.Dense(
> config.num_labels, kernel_initializer=get_initializer(config.initializer_range), name="classifier"
> )
> self.dropout = tf.keras.layers.Dropout(config.seq_classif_dropout)
> ```
>
> 它有两层线性层和一层dropout。
> 不过它在库里是封装好的。如果你想在bert最后一层输出之后再去加一些自定义的网络结构,可能需要自定义一个model类,并且继承`TFDistilBertPreTrainedModel`。
>
> 如果你只是想做文本分类,那么`TFAutoModelForSequenceClassification`应该能满足你的要求。
@QixinLi 您好!我的目的就是把文本喂到fine-tuned BERT里之后获取最后一层输出,缀上一些其他的features之后继续通过神经网络去做分类。 目前使用distillbert在本地机器上调试,跑通了之后会放到云上跑bert-large这样。如果是这样的话,我应该用哪一个类呢?或者我可以通过`output[1]`来从`...SequenceClassification`获得last hidden state吗?谢谢!<|||||>> > > > @QixinLi My purpose for fine-tuning the model is to use the last hidden layer state of BERT combining with some other features to continue forward in a bigger NN. Will using `TFAutoModelForSequenceClassification` compromise this goal?
> > > > Thanks!
> > >
> > >
> > > @QixinLi It's working after the replacement, but still wondering if this will impact my goal for finetuning, thanks a lot!
> >
> >
> > I do not think so. People have tend to have found that extracting the CLS token from the hidden layer is what you want, instead of the embeddings for all your tokens. Some discussion on that is here: #1950
>
> Can you refer me to a specific comment? My purpose is exactly described in [this comment](https://github.com/huggingface/transformers/issues/1950#issuecomment-558679189), but it seems like a `...ForSequenceClassification` adds a specific mission type. Thanks!
Are you classifying something? If so, use `...ForSequenceClassification`. This seems to be what you want to do given your text-label data. Extracting the embedding is slightly different when using `...ForSequenceClassification` rather than the plain `TFDistilBertModel`.
But for your purpose, to classify something and to then get those embeddings, look toward this comment, as it illustrates the difference.
https://github.com/huggingface/transformers/issues/1950#issuecomment-558683444<|||||>如果你想要last_hidden_states,可以这么做。
```python
class MyOwnModel(TFDistilBertPreTrainedModel):
def __init__(self, config):
super().__init__(config)
self.distilbert = TFDistilBertMainLayer(config, name="distilbert")
self.classifier = tf.keras.layers.Dense(config.num_labels)
def call(self, inputs=None, mask=None, token_type_ids=None, labels=None):
outputs = self.distilbert(inputs, attention_mask=mask,token_type_ids=token_type_ids)
last_hidden_states = outputs[0]
# do whatever you want
processed_hidden_states = .........
logits = self.classifier(processed_hidden_states)
outputs = logits
if labels is not None:
loss = self.compute_loss(labels, logits)
outputs = (loss,) + outputs
return outputs
```
```python
model = MyOwnModel.from_pretrained(os.getenv('MODEL_NAME'), config=config, cache_dir=os.getenv('OUTPUT_DIR'))
input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute"))[None, :] # Batch size 1
outputs = model(input_ids)
```
p.s.刚刚看了一下`...SequenceClassification`的call()函数,发现他的返回值output[1]是过了分类层后的结果,并没有你想要的last_hidden_states。所以应该不能满足你的需求。
<|||||>> 如果你想要last_hidden_states,可以这么做。
>
> ```python
> class MyOwnModel(TFDistilBertPreTrainedModel):
> def __init__(self, config):
> super().__init__(config)
> self.distilbert = TFDistilBertMainLayer(config, name="distilbert")
> self.classifier = tf.keras.layers.Dense(config.num_labels)
>
> def call(self, inputs=None, mask=None, token_type_ids=None, labels=None):
> outputs = self.distilbert(inputs, attention_mask=mask,token_type_ids=token_type_ids)
> last_hidden_states = outputs[0]
> # do whatever you want
> processed_hidden_states = .........
> logits = self.classifier(processed_hidden_states)
> outputs = logits
> if labels is not None:
> loss = self.compute_loss(labels, logits)
> outputs = (loss,) + outputs
> return outputs
> ```
>
> ```python
> model = MyOwnModel.from_pretrained(os.getenv('MODEL_NAME'), config=config, cache_dir=os.getenv('OUTPUT_DIR'))
> input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute"))[None, :] # Batch size 1
> outputs = model(input_ids)
> ```
>
> p.s.刚刚看了一下`...SequenceClassification`的call()函数,发现他的返回值output[1]是过了分类层后的结果,并没有你想要的last_hidden_states。所以应该不能满足你的需求。
@QixinLi 您好,你的回复解答了我的问题,十分感谢!另外有两个follow up:
1. distilbert 和 bert 要通过两个不同的类来读模型,但是在我的代码实现上继承`TFDistilBertModel`和`TFBertModel`都是没有区别的是吗
2. `...PreTrainedModel`貌似是一个abstract class,是不是应该继承具体的BERT/DISTILLBERT类呢?
谢谢!<|||||>> > > > > @QixinLi My purpose for fine-tuning the model is to use the last hidden layer state of BERT combining with some other features to continue forward in a bigger NN. Will using `TFAutoModelForSequenceClassification` compromise this goal?
> > > > > Thanks!
> > > >
> > > >
> > > > @QixinLi It's working after the replacement, but still wondering if this will impact my goal for finetuning, thanks a lot!
> > >
> > >
> > > I do not think so. People have tend to have found that extracting the CLS token from the hidden layer is what you want, instead of the embeddings for all your tokens. Some discussion on that is here: #1950
> >
> >
> > Can you refer me to a specific comment? My purpose is exactly described in [this comment](https://github.com/huggingface/transformers/issues/1950#issuecomment-558679189), but it seems like a `...ForSequenceClassification` adds a specific mission type. Thanks!
>
> Are you classifying something? If so, use `...ForSequenceClassification`. This seems to be what you want to do given your text-label data. Extracting the embedding is slightly different when using `...ForSequenceClassification` rather than the plain `TFDistilBertModel`.
>
> But for your purpose, to classify something and to then get those embeddings, look toward this comment, as it illustrates the difference.
>
> [#1950 (comment)](https://github.com/huggingface/transformers/issues/1950#issuecomment-558683444)
Thanks for the answer! I've found the inspiration I need from your referring issue. Much appreciated!<|||||>> > 如果你想要last_hidden_states,可以这么做。
> > ```python
> > class MyOwnModel(TFDistilBertPreTrainedModel):
> > def __init__(self, config):
> > super().__init__(config)
> > self.distilbert = TFDistilBertMainLayer(config, name="distilbert")
> > self.classifier = tf.keras.layers.Dense(config.num_labels)
> >
> > def call(self, inputs=None, mask=None, token_type_ids=None, labels=None):
> > outputs = self.distilbert(inputs, attention_mask=mask,token_type_ids=token_type_ids)
> > last_hidden_states = outputs[0]
> > # do whatever you want
> > processed_hidden_states = .........
> > logits = self.classifier(processed_hidden_states)
> > outputs = logits
> > if labels is not None:
> > loss = self.compute_loss(labels, logits)
> > outputs = (loss,) + outputs
> > return outputs
> > ```
> >
> >
> > ```python
> > model = MyOwnModel.from_pretrained(os.getenv('MODEL_NAME'), config=config, cache_dir=os.getenv('OUTPUT_DIR'))
> > input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute"))[None, :] # Batch size 1
> > outputs = model(input_ids)
> > ```
> >
> >
> > p.s.刚刚看了一下`...SequenceClassification`的call()函数,发现他的返回值output[1]是过了分类层后的结果,并没有你想要的last_hidden_states。所以应该不能满足你的需求。
>
> @QixinLi 您好,你的回复解答了我的问题,十分感谢!另外有两个follow up:
>
> 1. distilbert 和 bert 要通过两个不同的类来读模型,但是在我的代码实现上继承`TFDistilBertModel`和`TFBertModel`都是没有区别的是吗
> 2. `...PreTrainedModel`貌似是一个abstract class,是不是应该继承具体的BERT/DISTILLBERT类呢?
>
> 谢谢!
1. Yes. When you later run it on the bigger machine and want to switch models, you need to replace `TFDistilBertPreTrainedModel` with `TFBertPreTrainedModel`, and `TFDistilBertMainLayer` with `TFBertMainLayer` as well.
You can check the [official documentation](https://huggingface.co/transformers/model_doc/bert.html), or browse the transformers source code for these classes directly.
2. `...PreTrainedModel` also inherits from `tf.keras.Model`, so inheriting from it directly is fine. (That is exactly what transformers does internally. My code above is only a reference; for a concrete implementation you can study the model code from the huggingface maintainers.)
transformers | 5,150 | closed | [MobileBert] fix dropout | This PR fixes dropout in class TFMobileBertOutput. | 06-20-2020 03:18:31 | 06-20-2020 03:18:31 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5150?src=pr&el=h1) Report
> Merging [#5150](https://codecov.io/gh/huggingface/transformers/pull/5150?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d97b4176e5e9acdab930d73a7cb308b12bd4ad9e&el=desc) will **decrease** coverage by `0.37%`.
> The diff coverage is `0.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5150?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5150 +/- ##
==========================================
- Coverage 78.31% 77.93% -0.38%
==========================================
Files 137 137
Lines 23472 23472
==========================================
- Hits 18381 18292 -89
- Misses 5091 5180 +89
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5150?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5150/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `93.32% <0.00%> (ø)` | |
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5150/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-28.03%)` | :arrow_down: |
| [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5150/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `90.45% <0.00%> (-0.83%)` | :arrow_down: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5150/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `94.81% <0.00%> (-0.38%)` | :arrow_down: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5150/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.38% <0.00%> (-0.24%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5150/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.26% <0.00%> (+0.29%)` | :arrow_up: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5150/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.40% <0.00%> (+0.40%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5150?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5150?src=pr&el=footer). Last update [d97b417...66d7d53](https://codecov.io/gh/huggingface/transformers/pull/5150?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks!
I think it's indeed a bug! In Tensorflow 2.0, one should always pass the training parameter to `Dropout` explicitly. |
transformers | 5,149 | closed | Create README.md | Creating README.md file for model on Community contribution. | 06-20-2020 03:17:13 | 06-20-2020 03:17:13 | |
transformers | 5,148 | closed | Update glossary | Update the glossary to the new tokenizer API. I also added a first section about general terms that might confuse a beginner (which anyone is welcome to expand when they see a term that's not necessarily obvious and that we use a lot) and proper references to the subsections about the model inputs to make sure none of the links break.
There was a bad indent introduced since my cleanup of sphinx warnings, so fixed that in passing. | 06-19-2020 21:33:49 | 06-19-2020 21:33:49 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5148?src=pr&el=h1) Report
> Merging [#5148](https://codecov.io/gh/huggingface/transformers/pull/5148?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f45e873910e60d89511ae0193711e71c5c710468&el=desc) will **increase** coverage by `0.00%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5148?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5148 +/- ##
=======================================
Coverage 77.19% 77.20%
=======================================
Files 133 133
Lines 22233 22233
=======================================
+ Hits 17163 17164 +1
+ Misses 5070 5069 -1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5148?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5148/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VsZWN0cmEucHk=) | `100.00% <ø> (ø)` | |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5148/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.28% <0.00%> (+0.12%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5148?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5148?src=pr&el=footer). Last update [f45e873...fde64f8](https://codecov.io/gh/huggingface/transformers/pull/5148?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>That's great! I left some nits.<|||||>Addressed (something wrong with the time in my WSL so the commit appears before the comments....) |
transformers | 5,147 | closed | Typo in Reformer model card | 06-19-2020 20:25:07 | 06-19-2020 20:25:07 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5147?src=pr&el=h1) Report
> Merging [#5147](https://codecov.io/gh/huggingface/transformers/pull/5147?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f45e873910e60d89511ae0193711e71c5c710468&el=desc) will **decrease** coverage by `0.00%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5147?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5147 +/- ##
==========================================
- Coverage 77.19% 77.19% -0.01%
==========================================
Files 133 133
Lines 22233 22233
==========================================
- Hits 17163 17162 -1
- Misses 5070 5071 +1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5147?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5147/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.00% <0.00%> (-0.41%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5147?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5147?src=pr&el=footer). Last update [f45e873...e655c44](https://codecov.io/gh/huggingface/transformers/pull/5147?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks a lot @flozi00 ! |
|
transformers | 5,146 | closed | Upgrade examples to pl=0.8.1 | This upgrades to the latest PL and follows the pl docs to avoid Deprecation warnings.
### Known Issue
`trainer.test` fails in multigpu, this is not a new issue, but here we add a test that will pass when the bug is fixed upstream.
Failure discussed [here](https://github.com/PyTorchLightning/pytorch-lightning/issues/2267) can be invoked on a multigpu machine with
```bash
pytest examples/summarization
```
This failure is not new, it also existed in previous code.
Traceback:
```bash
examples/summarization/finetune.py:322: in main
trainer.test(model) # this breaks in DDP, known lightning issue. See evaluate_ch
eckpoint to recover metrics.
../.conda/envs/nb/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py:11
55: in test
self.barrier('test_setup')
../.conda/envs/nb/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py:12
60: in barrier
torch_distrib.barrier()
../.conda/envs/nb/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py:1
484: in barrier
_check_default_pg()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
def _check_default_pg():
"""
Helper that checks if the default ProcessGroup has been initialized, with
assertion
"""
assert _default_pg is not None, \
> "Default process group is not initialized"
E AssertionError: Default process group is not initialized
../.conda/envs/nb/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py:1
87: AssertionError
```
| 06-19-2020 19:43:34 | 06-19-2020 19:43:34 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5146?src=pr&el=h1) Report
> Merging [#5146](https://codecov.io/gh/huggingface/transformers/pull/5146?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1262495a912b9cd97e2ae174fd627a9d8a502341&el=desc) will **increase** coverage by `1.66%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5146?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5146 +/- ##
==========================================
+ Coverage 76.37% 78.04% +1.66%
==========================================
Files 138 138
Lines 23772 23772
==========================================
+ Hits 18157 18553 +396
+ Misses 5615 5219 -396
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5146?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5146/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.77% <100.00%> (ø)` | |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5146/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.16% <0.00%> (-0.13%)` | :arrow_down: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5146/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.57% <0.00%> (+1.42%)` | :arrow_up: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5146/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `78.65% <0.00%> (+2.29%)` | :arrow_up: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5146/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `99.14% <0.00%> (+2.57%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5146/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.11% <0.00%> (+3.84%)` | :arrow_up: |
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5146/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `33.43% <0.00%> (+4.77%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5146/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `91.31% <0.00%> (+42.22%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5146/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0.00%> (+81.20%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5146?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5146?src=pr&el=footer). Last update [1262495...c0bd407](https://codecov.io/gh/huggingface/transformers/pull/5146?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Flaxy failure, merging. |
transformers | 5,145 | closed | Quick tour | This PR replaces the old quickstart guide by splitting it in two:
- a quick tour that is before the installation page,
- the philosophy page, after the installation.
It also renames Usage to Task summary since it's really what it is, and renames the files usage to task_summary, summary to model_summary.
The quick tour tries to go over all the API introduced in Transformers at a high level, pointing out tutorials (existing or upcoming) for each of them. | 06-19-2020 19:31:19 | 06-19-2020 19:31:19 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5145?src=pr&el=h1) Report
> Merging [#5145](https://codecov.io/gh/huggingface/transformers/pull/5145?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f45e873910e60d89511ae0193711e71c5c710468&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5145?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5145 +/- ##
=======================================
Coverage 77.19% 77.19%
=======================================
Files 133 133
Lines 22233 22233
=======================================
Hits 17163 17163
Misses 5070 5070
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5145?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5145/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.00% <0.00%> (-0.41%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5145/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.96% <0.00%> (+0.14%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5145?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5145?src=pr&el=footer). Last update [f45e873...ae14c1a](https://codecov.io/gh/huggingface/transformers/pull/5145?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>That's awesome, I think this PR is really cool! I wonder whether A short section showing a super simple training example should already be in the quicktour<|||||>For the quick training example, I have it in mind but for when the integration with nlp is smooth: gathering the data takes a bit too long in code right now, and we have to talk about some obscure function that prepares it, which doesn't really fit the rest. |
transformers | 5,144 | closed | Scoring each word from the sentence using Pretrained LM | # ❓ Questions & Help
## Details
I am interested in scoring each word of a sentence using the LM, i.e. detecting how likely each word is in its given context by scoring it with the LM.
Is there a way out for this ?
| 06-19-2020 19:00:05 | 06-19-2020 19:00:05 | Something like [this](https://github.com/huggingface/transformers/issues/5000#issuecomment-647560847)?<|||||>You mean to say that use soft-max on the the hidden states of every word to get the score ?
I think that should work.<|||||>Great!<|||||>Since the code referenced tries to get the next prediction and I want to score existing sentence I modified to code to take the soft-max on the 2nd axis which is the word. Here is output. But I am unable to score the sentence or find the perplexity.
```
from transformers import GPT2LMHeadModel, GPT2Tokenizer
import torch
model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
inputs = tokenizer.encode("This is just now", return_tensors="pt")
output = model(inputs)
model_output = output[0]
last_token_softmax = torch.softmax(model_output, dim=-1).squeeze()
last_token_softmax.shape
top_n_values = last_token_softmax.topk(1)
for index, value in zip(top_n_values.indices, top_n_values.values):
print("Score: ", value.tolist())
print("This is just" + tokenizer.decode(index.tolist()))
```
```
Score: [0.04588045924901962]
This is just is
Score: [0.1578938215970993]
This is just a
Score: [0.17527997493743896]
This is just a
Score: [0.13700434565544128]
This is just,
```<|||||>Hi, welcome back 🤗!
We now have a document for perplexity especially: https://huggingface.co/transformers/perplexity.html
Could you take a look and let me know if it's helpful for your use-case? |
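In the meantime, here is a rough sketch of per-token scoring with GPT-2 (my own illustration, not taken from the linked doc; the sentence is arbitrary):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

input_ids = tokenizer.encode("This is just now", return_tensors="pt")
with torch.no_grad():
    logits = model(input_ids)[0]  # (1, seq_len, vocab_size)

# Score token i with the distribution predicted from tokens < i.
log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
target_ids = input_ids[:, 1:]
token_scores = log_probs.gather(-1, target_ids.unsqueeze(-1)).squeeze(-1)

for tok, score in zip(tokenizer.convert_ids_to_tokens(target_ids[0].tolist()), token_scores[0]):
    print(tok, float(score))

# Sentence perplexity (first token excluded, since it has no left context):
print("ppl:", float(torch.exp(-token_scores.mean())))
```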
transformers | 5,143 | closed | Passing in own embeddings for image input | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
I am trying to use your pretrained BERT/RoBERTa models to write multimodal classifiers such as the bitransformer (see diagram at top of page 2):
https://arxiv.org/pdf/1909.02950.pdf
and Visual BERT:
https://arxiv.org/pdf/1908.03557.pdf
It would be nice to be able to change the input layer of the transformer models to handle custom input embeddings rather than just input_ids. If there is already a way to do this, it would be great to get some instructions on how. I have tried printing the layers of the pretrained RoBERTa model, but it only gives:
`<transformers.modeling_tf_roberta.TFRobertaMainLayer object at 0x7f79fc04dcf8>`
Edit: I actually see that there is a MMBT model mentioned in the documentation, but I cannot find the pre-trained model or any examples with it. Please help. Thank you.
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
I would like to be able to use your pretrained models to handle image embeddings as well. I am working on a project with this multimodal dataset:
https://gombru.github.io/2019/10/09/MMHS/
Any help would be greatly appreciated. Thanks!
| 06-19-2020 18:40:14 | 06-19-2020 18:40:14 | Check out the `inputs_embeds` parameter. |
transformers | 5,142 | closed | T5 special tokens not mapped to unique indices in vocabulary | The docs recommend adding the special eos_token `<\s>` to the end of each string when encoding/decoding with `T5Tokenizer`. However, this (and the other special tokens e.g. `unk_token`, `pad_token` aren't assigned unique ids in the lookup vocabulary (they are mapped to `{0,1,2}`, which are indices for other common words in the vocab). In practice, I find my model fails to properly produce the `eos_token` since it is associated with blank spaces, so the model produces run-ons during generation
## To reproduce
```
>>> from transformers import T5Tokenizer
>>> tokenizer = T5Tokenizer.from_pretrained('t5-base')
>>> tokenizer.pad_token
'<pad>'
>>> tokenizer.pad_token_id
0
>>> tokenizer.eos_token
'</s>'
>>> tokenizer.eos_token_id
1
>>> tokenizer.unk_token
'<unk>'
>>> tokenizer.unk_token_id
2
```
```
>>> tokenizer.decode([0])
''
>>> tokenizer.decode([1])
''
>>> tokenizer.decode([2])
' ⁇ '
```
## Expected behavior
```
>>> tokenizer.decode([0])
'<pad>'
>>> tokenizer.decode([1])
'</s>'
>>> tokenizer.decode([2])
'<unk>'
```
## Environment info
- `transformers` version: 2.9.1
| 06-19-2020 17:10:47 | 06-19-2020 17:10:47 | Hey @sarahwie,
Thanks for your issue. I can reproduce the problem and see the reason for it. Currently, we rely on Google's sentencepiece tokenizer: https://github.com/google/sentencepiece for encoding and decoding in T5. What happens is that the `tokenizer.decode(tokens)` depends on the function
`sp_model.decode_pieces(tokens)` with `sp_model` being an instance of `sentencepiece.SentencePieceProcessor()`. To correctly convert a string of tokens: `["<unk>", "</s>"]` to **one** string we thus rely on `sp_model.decode_pieces`, so it is a bit out of our control to do the correct decoding here.
To quickly see the problem @thomwolf @mfuntowicz @n1t0 one can run the following code
```python
from transformers import T5Tokenizer
tokenizer = T5Tokenizer.from_pretrained('t5-base')
tokenizer.convert_tokens_to_string(["<unk>", "</s>"]) # gives ' ⁇ '
```
What do you think how we should handle this problem at the moment @thomwolf @n1t0 @mfuntowicz ?<|||||>For anyone looking for a quick, temporary fix to the unending-generation problem: override the EOS token with a custom one (note this fix does not work for `unk_token` or `pad_token`; for some reason they can't be re-mapped)
```
tokenizer = T5Tokenizer.from_pretrained('t5-base')
tokenizer.add_special_tokens({'eos_token':'[EOS]'})
model.resize_token_embeddings(len(tokenizer))
>>> tokenizer.eos_token_id
32100
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Is there any update on this? Does the bug still exist in version 3.4?<|||||>Hey guys, I would recommend using our new `T5TokenizerFast` which solves this problem as can be seen below:
```python
>>> from transformers import T5TokenizerFast
>>> tokenizer = T5TokenizerFast.from_pretrained('t5-base')
>>> tokenizer.pad_token
'<pad>'
>>> tokenizer.pad_token_id
0
>>> tokenizer.eos_token
'</s>'
>>> tokenizer.eos_token_id
1
>>> tokenizer.unk_token
'<unk>'
>>> tokenizer.unk_token_id
2
>>> tokenizer.decode([0])
'<pad>'
>>> tokenizer.decode([1])
'</s>'
>>> tokenizer.decode([2])
'<unk>'
```
<|||||>I also made a PR to fix the slow T5Tokenizer. It probably won't make it into v3.5, but into the next version.<|||||>@patrickvonplaten
Two quick questions:
- Is there any downside to using fasttokenizer?
- What's the best way to patch this fix to slowtokenizer into an existing transformers install?
Bigger question:
I ran into this no-EOS generation problem when using finetune.py, but when I set up my own T5 trainer, I somehow managed to sidestep the issue. Here are the details. Any idea why I wasn't affected once I set it up on my own?
Each item of my data set (source and target) is configured as
```
# max_src_len is length of longest sentence in input set
tokenized_inputs = self.tokenizer.batch_encode_plus(
[src], max_length=max_src_len, padding="max_length", return_tensors="pt")
```
where each `src` is a string of words, with no EOS token appended (since batch_encode will append it).
I then train with this forward function:
```
def forward(model, device, batch):
src_ids = batch["source_ids"].to(device, dtype=torch.long)
src_mask = batch["source_mask"].to(device, dtype=torch.long)
tgt_ids = batch["target_ids"].to(device, dtype=torch.long)
# padded ids (pad=0) are set to -100, which means ignore for loss calculation
tgt_ids[tgt_ids[: ,:] == 0 ] = -100
label_ids = tgt_ids.to(device)
out_dict = model(src_ids, attention_mask=src_mask, labels=label_ids, return_dict=True)
loss, logits = out_dict['loss'], out_dict['logits']
return loss, logits
# then do appropriate zero_grad(), loss.backward, etc
```
Models I train in this way *do* learn to generate a final token with ID=1. In particular I wrote the following verification function:
```
def masked_token_match(tgt_ids: torch.tensor, outputs: torch.tensor,
return_indices=False) -> Union[Tuple[int,int], Tuple[int, int, torch.tensor]]:
# left-shift
output_shifted = outputs[:,1:]
# create output_padded, which truncates output at tgt_ids size, filling with pad tokens
if output_shifted.shape <= tgt_ids.shape:
output_padded = torch.zeros_like(tgt_ids)
output_padded[:output_shifted.shape[0], :output_shifted.shape[1]] = output_shifted
else: # output_shifted is bigger
# so copy only up to the target IDs length
output_padded = output_shifted[:,:tgt_ids.shape[1]] # copy all rows (bs) and up to tgt_ids length
# compare where tokens are > 1 (i.e. not pad or EOS)
match_indices = output_padded == tgt_ids # either they match
matches_no_eos = torch.logical_or(match_indices, tgt_ids < 2) # or we ignore them (pad and eos)
matches_with_eos = torch.logical_or(match_indices, tgt_ids < 1) # or we ignore them (just pad)
total_matches_no_eos = torch.sum(torch.all(matches_no_eos, axis=1))
total_matches_with_eos = torch.sum(torch.all(matches_with_eos, axis=1))
    return total_matches_no_eos, total_matches_with_eos
```
For a copy task (I was debugging the original finetune behavior), where I ask T5 to
1) copy src => src (i.e. just copy word for word)
2) copy src => (first word of source); (i.e. just copy the first word and then generate EOS)
The model learns to complete both tasks and append an EOS token after only 15-20k training examples.
So why did this setup work? Maybe we think that the model could still be generating additional non-zero (i.e. non-pad) tokens after EOS==1 in these sequences. But I also seem to have verified that isn't occurring because I use this generation code during eval:
```
generated_ids = model.generate(src_ids, attention_mask=src_mask) # (batch x seq length)
outputs_decoded = tokenizer.batch_decode(generated_ids, skip_special_tokens=False)
```
and the outputs_decoded do correctly stop where they are supposed to stop<|||||>No real downside using the fast tokenizers if you don't have to look into the code. <|||||>You can take a look into the PR to see what one would have to change to make it work with an existing code base<|||||>@sshleifer , @patrickvonplaten
I still don't understand why my tweaked version worked and did appropriately truncate generations (see above details). sshleifer, maybe you can see easily what finetune.py is doing differently?<|||||>@jsrozner I don't know either, but interested to find out.
1) what is `max_src_len`
2) when were you using finetune.py? Do you remember your command? T5Tokenizer started adding `</s>` to inputs a few months ago, so maybe you were before that? Can you reproduce the breakage with current finetune.py/current `transformers`?
Antoher random idea: finetune.py automatically uses `config.task_specific_params['summarization']` for generation, which might be bad for your use case.<|||||>cc @danyaljj
I'm just going to consolidate discussion from (#7796) here. (Also relevant is [HF forum](https://discuss.huggingface.co/t/issue-with-finetuning-a-seq-to-seq-model/1680/28))
`max_src_len` above is the maximum length of any input sequence, counted in...wait for it...number of characters. Whoops. That was dumb. I intended to go through and find the maximum sequence length in *tokens*. I'll fix that, but I don't think it affects other things: it turns out that `max_src, max_tgt_len = (250, 250)` for the inputs I was using. But that just means we had a lot of padding.
I was using finetune.py just last month, so I don't think it was the EOS token.
The "gibberish" generation still occurs if I just use finetune_t5.sh as written. If I do either of the following, the outputs are correct:
1) Comment out `use_task_specific_params(self.model, "summarization")` in finetune.py
2) Add min_len to the `generate` call:
```
generated_ids = self.model.generate(
batch["input_ids"],
attention_mask=batch["attention_mask"],
use_cache=True,
decoder_start_token_id=self.decoder_start_token_id,
num_beams=self.eval_beams,
max_length=self.eval_max_length,
min_length=0
)
```
This is because config.json for t5-small and t5-base has the following (@danyaljj this is also the answer to our question about where prefix is getting picked up in the HF forum)
```
"task_specific_params": {
"summarization": {
"early_stopping": true,
"length_penalty": 2.0,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
```
But it looks like the only param that really matters was the min_length. Beam size, max_length, prefix, etc all weren't causing the problem. I verified on both the (sent) => (sent) copy and the (sent) => (first word of sent) tasks.
So at least for my use case it seems like the tokenizer decode bug was not causing a problem? It seems like even though *we, as ~users* couldn't decode tokens correctly, the model still knew that 1==EOS and that after an EOS it should print PAD. The problem was that we were forcing it to generate at least 30 tokens, hence all the gibberish that I was seeing.
@sshleifer, does this make sense with your understanding of the finetune script? i.e., that failing to decode EOS shouldn't matter?
@danyaljj, given that you wanted relatively short outputs of the answers to questions, this seems like it might fix the issue for you? Give it a try and see what happens?<|||||>Thanks, @jsrozner! 🙏
In theory, this explains my issue as well since my outputs were quite short. I will repeat it and report the results here! <|||||>yes that makes sense and thanks for consolidating.
we should link to this discussion in the t5 docs ! <|||||>What files should I change to update docstrings?
Also can you take a look at a few more questions / related issues so that we can clean things up? These are roughly the same questions I had in a [post in the HF thread](https://discuss.huggingface.co/t/issue-with-finetuning-a-seq-to-seq-model/1680/24?u=jsrozner)
## `decoder_input_ids` vs `labels`
- When would we want to pass both?
- Here's an [example](https://github.com/abhimishra91/transformers-tutorials/blob/0cf2be0c81221877966d017d6f591e011174979e/transformers_summarization_wandb.ipynb) (that has been linked to in HF forums) that seems to do it wrong. In particular, passes both `decoder_input_ids` and `lm_labels` but does not right_shift the `decoder_input_ids`. This seems like it does not give the effect that we want since we never right_shift. He probably wants to pass only `labels` and omit `decoder_input_ids`?
- Finally, Documentation for [T5forConditionalGeneration](https://huggingface.co/transformers/model_doc/t5.html#t5forconditionalgeneration) says that if decoder_input_ids are not provided then input_ids will be used. But actually labels will be used?
## in Finetune.py
- `_step` is manually right_shifting rather than letting `model` do it for us by just passing `label`. Why?
- `_step` calculates the loss manually, but I want to confirm that if we had also passed `labels` into the `self(..)` call that we would have gotten the same loss output when `label_smoothing == 0`<|||||>@jsrozner
+ Docstrings are in modeling_t5.py and https://github.com/huggingface/transformers/blob/master/docs/source/model_doc/t5.rst
+ I can't think of a good reason to pass both `decoder_input_ids` and `labels`
+ correct that example is wrong.
+ Documentation for [T5forConditionalGeneration](https://huggingface.co/transformers/model_doc/t5.html#t5forconditionalgeneration) is wrong, as you suggest.
#### in `finetune.py`
+ we pass `decoder_input_ids` to avoid allowing the model to calculate the loss. The reasoning is that, in some environments (namely TPU), replacing `pad_token_id` with -100 is expensive and we do not want the loss to consider `pad_token_id`.
+ You would not get the same loss if you passed `labels` to the model as-is, because the loss would then not ignore `pad_token_id` (see the sketch below).
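A rough sketch of the two routes (assuming a T5 checkpoint and no label smoothing; names and texts are illustrative, not the actual `_step` code):
```python
from torch.nn import CrossEntropyLoss
from transformers import T5ForConditionalGeneration, T5Tokenizer

model = T5ForConditionalGeneration.from_pretrained("t5-small")
tok = T5Tokenizer.from_pretrained("t5-small")
src = tok(["summarize: a longer source sentence goes here"], return_tensors="pt", padding=True)
tgt = tok(["a short target"], return_tensors="pt", padding=True)
source_ids, source_mask, target_ids = src.input_ids, src.attention_mask, tgt.input_ids
pad_token_id = tok.pad_token_id

# route (a): let the model compute the loss; pad positions must first become -100 or they count in the loss
labels = target_ids.clone()
labels[labels == pad_token_id] = -100
loss_a = model(input_ids=source_ids, attention_mask=source_mask, labels=labels)[0]

# route (b): shift the targets right yourself and mask pad in an explicit loss (roughly what finetune.py does)
decoder_input_ids = target_ids.new_zeros(target_ids.shape)
decoder_input_ids[:, 1:] = target_ids[:, :-1]
decoder_input_ids[:, 0] = model.config.decoder_start_token_id  # for T5 this equals pad_token_id
lm_logits = model(input_ids=source_ids, attention_mask=source_mask, decoder_input_ids=decoder_input_ids)[0]
loss_b = CrossEntropyLoss(ignore_index=pad_token_id)(lm_logits.view(-1, lm_logits.size(-1)), target_ids.view(-1))

print(loss_a.item(), loss_b.item())  # should match (up to float noise) when label smoothing is off
```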
<|||||>anyone know how to add all standard special tokens like bos to t5? https://stackoverflow.com/questions/73322462/how-to-add-all-standard-special-tokens-to-my-hugging-face-tokenizer-and-model @jsrozner ? |
transformers | 5,141 | closed | [bart-mnli] Fix class flipping bug | Previously, `eval_mnli/acc` was 0.35, now .9. Same issue as other models ported from fairseq.
New results of Victor's command.
```bash
wandb: eval_mnli-mm/acc 0.9001220504475184
wandb: _runtime 274.7172155380249
wandb: eval_loss 0.3496225342093929
wandb: eval_mnli/acc 0.9011716760061131
wandb: _timestamp 1592585765.2334569
``` | 06-19-2020 16:57:45 | 06-19-2020 16:57:45 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5141?src=pr&el=h1) Report
> Merging [#5141](https://codecov.io/gh/huggingface/transformers/pull/5141?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e33929ef1e98a71ba7fc411f39cbb5451396ef02&el=desc) will **increase** coverage by `0.00%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5141?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5141 +/- ##
=======================================
Coverage 77.19% 77.20%
=======================================
Files 133 133
Lines 22232 22233 +1
=======================================
+ Hits 17162 17164 +2
+ Misses 5070 5069 -1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5141?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/5141/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `86.56% <100.00%> (+0.20%)` | :arrow_up: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5141/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.00% <0.00%> (-0.41%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5141/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.28% <0.00%> (+0.12%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5141/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.96% <0.00%> (+0.14%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5141?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5141?src=pr&el=footer). Last update [e33929e...70c5736](https://codecov.io/gh/huggingface/transformers/pull/5141?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,140 | closed | [pl_examples] deprecate BaseTransformer.is_logger | - proc_rank is deprecated in new PL
- updates pl to 0.8.1
@williamFalcon suggestions more than welcome!
- The test that is failing is `test_bdc_multigpu` | 06-19-2020 16:35:03 | 06-19-2020 16:35:03 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5140?src=pr&el=h1) Report
> Merging [#5140](https://codecov.io/gh/huggingface/transformers/pull/5140?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/84be482f6698fac822a5113735f2242c6d3abc76&el=desc) will **decrease** coverage by `0.00%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5140?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5140 +/- ##
==========================================
- Coverage 77.19% 77.19% -0.01%
==========================================
Files 133 133
Lines 22232 22232
==========================================
- Hits 17162 17161 -1
- Misses 5070 5071 +1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5140?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5140/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.81% <0.00%> (-0.15%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5140?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5140?src=pr&el=footer). Last update [84be482...2aad9d1](https://codecov.io/gh/huggingface/transformers/pull/5140?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,139 | closed | `facebook/bart-large-mnli` - random accuracy on MNLI | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): BART
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [X] an official GLUE/SQUaD task: MNLI
* [ ] my own task or dataset: (give details below)
## To reproduce
There might be something wrong with the `facebook/bart-large-mnli` checkpoint (https://huggingface.co/facebook/bart-large-mnli). I can only get a random accuracy (0.35) on MNLI:
`python examples/text-classification/run_glue.py --data_dir <GLUE DATA DIR/MNLI> --task MNLI --model_name_or_path facebook/bart-large-mnli --output_dir ./dbg/ --max_seq_length 128 --overwrite_cache --do_eval`
I checked and the checkpoint has a classification head (`'classification_head.dense.bias', 'classification_head.out_proj.weight', 'classification_head.out_proj.bias'`) i.e. the classification head is not initialized randomly.
cc @sshleifer as discussed | 06-19-2020 14:30:40 | 06-19-2020 14:30:40 | |
transformers | 5,138 | closed | Fix in Reformer Config documentation | Fixed a variable name in the Reformer Config docstring (`lsh_chunk_length` -> `lsh_attn_chunk_length`) | 06-19-2020 13:17:24 | 06-19-2020 13:17:24 | Great, thanks for the fix :-) |
transformers | 5,137 | closed | xlm-mlm-17-1280: after run model to get embeddings shape 20000 | i want to get embeddings for :ru: text by `xlm-mlm-17-1280`
at the end I get embeddings with shape 200k (the vocabulary size)
example code (using the latest versions of transformers & torch on Ubuntu):
```
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead
# my_input_text below is assumed to be the Russian input string
xlm_mlm = 'xlm-mlm-17-1280'
tokenizer_xlm_mlm = AutoTokenizer.from_pretrained(xlm_mlm)
model_xlm_mlm = AutoModelWithLMHead.from_pretrained(xlm_mlm)
input_ids = torch.tensor([tokenizer_xlm_mlm.encode(my_input_text)]) # batch size of 1
print(f'{input_ids.shape=}')
# input_ids.shape=torch.Size([1, 373])
lang_id_ru = tokenizer_xlm_mlm.lang2id['ru']
langs_ru = torch.tensor([lang_id_ru] * input_ids.shape[1]) # torch.tensor([0, 0, 0, ..., 0])
print(f'{langs_ru.shape=}')
# langs_ru.shape=torch.Size([373])
langs_ru = langs_ru.view(1, -1) # is now of shape [1, sequence_length]
print(f'{langs_ru.shape=}')
# langs_ru.shape=torch.Size([1, 373])
outputs = model_xlm_mlm(input_ids, langs=langs_ru)
outputs[0].shape
# torch.Size([1, 373, 200000])
```
so it's a bug or my bad? | 06-19-2020 12:45:43 | 06-19-2020 12:45:43 | hmm, i found `(proj): Linear(in_features=1280, out_features=200000, bias=True))`
why `out_features = 200000`?<|||||>That's the vocab size, which is of size 200 000. You can see it in the [config.json](https://s3.amazonaws.com/models.huggingface.co/bert/xlm-mlm-17-1280-config.json).<|||||>it's true, my bad
and how can I get 1280-dim embeddings for the input text? <|||||>@LysandreJik maybe you can help: how can I get sentence embeddings after running `outputs = model_xlm_mlm(input_ids, langs=langs_ru)`
in the older version 2.3.0 the `outputs` had last dimension <emb_dim>, but now it's <vocab_size><|||||>That's probably because you were using the `AutoModel` factory instead of `AutoModelWithLMHead`. The former returns the transformer embeddings of dimension `hidden_size` (1280 in your case), while the latter returns the projected embeddings on the vocabulary, of dimension `vocab_size` (200 000 in your case).
Change the two lines:
```py
from transformers import AutoTokenizer, AutoModelWithLMHead
model_xlm_mlm = AutoModelWithLMHead.from_pretrained(xlm_mlm)
```
to
```py
from transformers import AutoTokenizer, AutoModel
model_xlm_mlm = AutoModel.from_pretrained(xlm_mlm)
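# outputs[0] will then be the last hidden state with shape [1, seq_len, 1280];
# one possible sentence embedding (a sketch, not the only option) is a mean pool over tokens:
# sentence_embedding = model_xlm_mlm(input_ids, langs=langs_ru)[0].mean(dim=1)  # shape [1, 1280]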
``` |
transformers | 5,136 | closed | Transformer-XL tokenizer cannot properly tokenize brackets | # 🐛 Bug
## Information
The `TransfoXLTokenizer` is not able to tokenize words with surrounding brackets correctly. I compared it with the `BertTokenizer` from `bert-base-uncased` which gives the expected result. Example text is: `"Hello (bracket)"`
Model I am using: **Transformer-XL**
Language I am using the model on: **English**
The problem arises when using:
* [x] my own modified scripts
## To reproduce
Steps to reproduce the behavior:
```python
from transformers import BertTokenizer, TransfoXLTokenizer
bert = BertTokenizer.from_pretrained('bert-base-uncased')
transfoxl = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103')
def test_bracket(tokenizer):
enc = tokenizer.encode("Hello (bracket)")
dec = tokenizer.decode(enc)
print(f"ORG: Hello (bracket)\nENC: {enc}\nDEC: {dec}")
```
Results:
`test_bracket(bert)` gives the following output:
```
ORG: Hello (bracket)
ENC: [101, 7592, 1006, 21605, 1007, 102]
DEC: [CLS] hello ( bracket ) [SEP]
```
`test_bracket(transfoxl)` gives the following output:
```
ORG: Hello (bracket)
ENC: [14049, 24]
DEC: Hello <unk>
```
If the parameter `add_space_before_punct_symbol=True` is passed, then the result is:
```
ORG: Hello (bracket)
ENC: [14049, 24, 21]
DEC: Hello <unk> )
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
The `TransfoXLTokenizer` should detect the punctuation symbols, e.g. `(`, separately and thus give the same result as the `BertTokenizer` (except the special tokens of course): `hello ( bracket )`
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.11.0
- Platform: Windows-10-10.0.18362-SP0
- Python version: 3.6.10
- PyTorch version (GPU?): 1.4.0 (False)
- Tensorflow version (GPU?): 2.1.0 (False)
| 06-19-2020 10:20:08 | 06-19-2020 10:20:08 | **UPDATE**
I've done some further research and discovered that the tokenization of strings containing either
1. any opening bracket, e.g. `( [ {`
2. words with dashes, e.g. `10-year-old`
3. other symbols with no space afterwards, e.g. (`km/h` or `$3`)
4. numbers, either floating point, e.g. `3.23`, or large comma separated, e.g. `5,000`
result in tokenization errors. See the following example:
Example string:
```
"Hello (bracket) and side-scrolled [and] Henry's $5,000 km/h with 3.34 m. What's up!?"
```
Encoded and decoded again with `TransfoXLTokenizer`:
```
Hello <unk> ) and side <unk> <unk> ] <unk> <unk> <unk> km <unk> with 3 <unk> m . <unk> up ! ?
```
In the [Transformer-XL paper](http://arxiv.org/abs/1901.02860) they used the WikiText-103 dataset. The authors of the [WikiText-103 paper](http://arxiv.org/abs/1609.07843) stated that they used the *Moses tokenizer* for tokenization of the wikipedia articles which can deal with the errors stated above (except for 4. but I implemented a custom solution for it - the authors did this too for WikiText-103). This tokenizer replaces dashes with `@-@`, e.g. `10-year-old` gets `10 @-@ year @-@ old`, and dots or commas in number the same way, e.g. `3.5` gets `3 @.@ 5` or `5,000` gets `5 @,@ 000`.
Since the pretrained Transformer-XL model is trained with the tokenization above it would make sense to use the same rules for the `TransfoXLTokenizer`, in my opinion. I have found a python package for the *Moses tokenizer* (see [link](https://github.com/alvations/sacremoses)) but I would understand if you do not prefer using it here.
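Roughly, the preprocessing I have in mind would look something like this (a sketch; the exact WikiText-103 rules may differ a bit):
```python
import re
from sacremoses import MosesTokenizer

moses = MosesTokenizer(lang="en")

def wikitext_style_tokenize(text):
    text = moses.tokenize(text, return_str=True, escape=False)  # e.g. "(bracket)" -> "( bracket )"
    text = re.sub(r"(\d)\.(\d)", r"\1 @.@ \2", text)   # 3.34  -> 3 @.@ 34
    text = re.sub(r"(\d),(\d)", r"\1 @,@ \2", text)    # 5,000 -> 5 @,@ 000
    text = re.sub(r"(\w)-(\w)", r"\1 @-@ \2", text)    # 10-year-old -> 10 @-@ year @-@ old
    return text.split()

print(wikitext_style_tokenize("Hello (bracket) and side-scrolled with 3.34 m and $5,000."))
```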
Otherwise some logic of the `BertTokenizer` could be used too, because it does perfectly fine with the string above:
```
[CLS] hello ( bracket ) and side - scrolled [ and ] henry ' s $ 5 , 000 km / h with 3 . 34 m . what ' s up ! ? [SEP]
```
Then the only thing to add would be the replacements with the `@` character, from my point of view.
What do you think?<|||||>Hi, yes we already have a [dependency on sacremoses](https://github.com/huggingface/transformers/blob/master/setup.py#L128) for XLM so you can use it.
Do you want to try to propose a PR fixing this issue?<|||||>Ah ok, I didn't know that.
Sure, but could you please give me a hint where it's best to implement this piece of code? I lost track a bit since the tokenization refactoring and I'm not sure what method I would have to overwrite in `TransfoXLTokenizer`.<|||||>Yes of course, so the new API didn't touch any model-specific behavior, it was all about the user-facing up-stream methods.
In your case, I think you'll probably want to update the `_tokenize()` method of Transfo-XL tokenizer here: https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_transfo_xl.py#L339-L356
This is the method in charge of splitting words in token strings.
You can have a look at the XLM tokenizer if you want to see how people have been using sacremoses:
https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_xlm.py<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>unstale<|||||>Thanks :) |
transformers | 5,135 | closed | src/transformers/trainer.py relies on path to infer global training steps, skips training for glue example | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): bert-base-uncased that has been retrained stored somewhere else, with a timestamp in the path
Language I am using the model on (English, Chinese ...):EN
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name) MRPC
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. store a pretrained model with a different name, say /funny/path/2020-06-19_120000/retrained ,
with the tokenizer and stuff inside.
2. Run the glue example run_glue on that path with --do_train, --do_eval
3. The training will be skipped because there's [this](https://github.com/huggingface/transformers/blob/84be482f6698fac822a5113735f2242c6d3abc76/src/transformers/trainer.py#L431) funny line in the trainer that will bump the global steps because of the parsing of the date (see the sketch below).
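A sketch of the failure mode (the exact expression in trainer.py may differ slightly):
```python
# trainer.py infers the global step from checkpoint directories named like "checkpoint-500"
model_path = "/funny/path/2020-06-19_120000/retrained"
global_step = int(model_path.split("-")[-1].split("/")[0])
print(global_step)  # 19120000 -- int() accepts "_" between digits, so the timestamp parses as a huge step count
```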
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
I would not expect the trainer to rely on an arbitrary internal convention for naming checkpoints or any kind of path to yield a spurious global step count.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.11.0
- Platform: Linux-5.4.0-29-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.5
- PyTorch version (GPU?): 1.4.0 (False)
- Tensorflow version (GPU?):
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 06-19-2020 08:25:39 | 06-19-2020 08:25:39 | I have the same problem. The script shouldn't run differently depending on what naming scheme you decide to use for a model.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,134 | closed | What does adjust_logits_during_generation (formerly prepare_logits_for_generation) do? | https://github.com/huggingface/transformers/blob/49c5202522bdaf66e45df505b3a3c566e56134c3/src/transformers/modeling_utils.py#L794
Hi. Anyone can share why there is this method here? | 06-19-2020 07:59:32 | 06-19-2020 07:59:32 | Pro Sam @sshleifer, Can you help here. Many thx.<|||||>I think it helps to force the model generate a token such as `bos_token` or `eos_token` by setting the probability of generation of all other tokens to 0 <|||||>It is only used by `MarianMTModel` and `BartForConditionalGeneration`.
For Bart, what @mariamabarham said is exactly correct
> it helps to force the model generate a token such as bos_token or eos_token by setting the probability of generation of all other tokens to 0.
For Marian, it does that and one extra job, it prevents the model from ever predicting `pad_token_id`. In Marian, it's important that that logic happens before the softmax so that the probabilities of other "legal" tokens are not super low.
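In spirit the adjustment looks something like this (a sketch, not the exact library code):
```python
import torch

def adjust_logits_sketch(logits, cur_len, max_length, eos_token_id, pad_token_id=None):
    if pad_token_id is not None:              # Marian-style: never let the model pick pad
        logits[:, pad_token_id] = float("-inf")
    if cur_len == max_length - 1:             # force EOS on the final step
        forced = torch.full_like(logits, float("-inf"))
        forced[:, eos_token_id] = logits[:, eos_token_id]
        logits = forced
    return logits
```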
Great Q!
<|||||>> It is only used by `MarianMTModel` and `BartForConditionalGeneration`.
> For Bart, what @mariamabarham said is exactly correct
>
> > it helps to force the model generate a token such as bos_token or eos_token by setting the probability of generation of all other tokens to 0.
>
> For Marian, it does that and one extra job, it prevents the model from ever predicting `pad_token_id`. In Marian, it's important that that logic happens before the softmax so that the probabilities of other "legal" tokens are not super low.
>
> Great Q!
@mariamabarham @sshleifer Many thanx for your response!
Actually, I am learning the code https://github.com/huggingface/transformers/blob/49c5202522bdaf66e45df505b3a3c566e56134c3/src/transformers/modeling_utils.py#L816
I still don't understand how https://github.com/huggingface/transformers/blob/49c5202522bdaf66e45df505b3a3c566e56134c3/src/transformers/modeling_utils.py#L794 can do this. It just returns the same logits back, right? What kind of operation happens in this method? <|||||>That method is now called `adjust_logits_during_generation`.
It's overwritten for bart:
https://github.com/huggingface/transformers/blob/482a5993c20ef32512a42661bfa404516763b72e/src/transformers/modeling_bart.py#L1028
and marian
https://github.com/huggingface/transformers/blob/482a5993c20ef32512a42661bfa404516763b72e/src/transformers/modeling_marian.py#L49
<|||||>Hi @sshleifer - Just a quick doubt. Why ```adjust_logits_during_generation``` is required for Bart. Is it because of any specific constraints the model is having?
What I have observed, the moment we disable it, generation is going haywire.
I understood, it's doing nothing more than forcing everything except the 0th index ( bos_token_id ) to be small or 0.0. But what is the logical reasoning behind it for the Bart model only? It's not applicable for all models ( say T5 ). |
transformers | 5,133 | closed | How to find and use fine-tuned model for GLUE-CoLA? | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
I am trying to use fine-tuned model for CoLA from GLUE. Is there a quick guide? Thanks | 06-19-2020 05:23:55 | 06-19-2020 05:23:55 | Hi @517030910405 , you can search for the models on huggingface model hub here https://huggingface.co/models,
you can filter the models by task, CoLa is sequence classification task so you'll be able to find the model there.
Here's one I found which is trained on CoLA https://huggingface.co/textattack/bert-base-uncased-CoLA.
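For a quick start, something like this should work (a sketch based on the usage guide; I haven't checked which label index means "acceptable" for this checkpoint):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "textattack/bert-base-uncased-CoLA"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

input_ids = tokenizer.encode("The book what I read was good.", return_tensors="pt")
logits = model(input_ids)[0]
print(torch.softmax(logits, dim=-1))  # probabilities over the two CoLA classes
```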
And to see how to use sequence classification models, check this usage guide https://huggingface.co/transformers/usage.html#sequence-classification<|||||>> Hi @517030910405 , you can search for the models on huggingface model hub here https://huggingface.co/models,
> you can filter the models by task, CoLa is sequence classification task so you'll be able to find the model there.
> Here's one I found which is trained on CoLA https://huggingface.co/textattack/bert-base-uncased-CoLA.
>
> And to see how to use sequence classification models, check this usage guide https://huggingface.co/transformers/usage.html#sequence-classification
Thank you very much. The recommended fine-tuned model works very well. |
transformers | 5,132 | closed | fix bart doc | fix bart doc
change the model name in the example from "bart-large-cnn" to "facebook/bart-large-cnn" | 06-19-2020 04:03:46 | 06-19-2020 04:03:46 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5132?src=pr&el=h1) Report
> Merging [#5132](https://codecov.io/gh/huggingface/transformers/pull/5132?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/84be482f6698fac822a5113735f2242c6d3abc76&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5132?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5132 +/- ##
=======================================
Coverage 77.19% 77.19%
=======================================
Files 133 133
Lines 22232 22232
=======================================
Hits 17162 17162
Misses 5070 5070
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5132?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5132/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.25% <ø> (ø)` | |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5132/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.81% <0.00%> (-0.15%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5132/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.40% <0.00%> (+0.40%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5132?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5132?src=pr&el=footer). Last update [84be482...992cbe3](https://codecov.io/gh/huggingface/transformers/pull/5132?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Awesome thanks a lot :-) |
transformers | 5,131 | closed | Update BERT-of-Theseus model card to avoid confusion | @VictorSanh said this checkpoint cannot be evaluated directly on MNLI. Indeed this model is for intermediate task transfer (in BERT-of-Theseus [arxiv version](https://arxiv.org/abs/2002.02925), it's called a "general-purpose" model) so it doesn't (and shouldn't) contain a classification head. This note helps prevent confusion. | 06-19-2020 03:56:37 | 06-19-2020 03:56:37 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5131?src=pr&el=h1) Report
> Merging [#5131](https://codecov.io/gh/huggingface/transformers/pull/5131?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/84be482f6698fac822a5113735f2242c6d3abc76&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5131?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5131 +/- ##
=======================================
Coverage 77.19% 77.19%
=======================================
Files 133 133
Lines 22232 22232
=======================================
Hits 17162 17162
Misses 5070 5070
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5131?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5131/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.81% <0.00%> (-0.15%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5131/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.40% <0.00%> (+0.40%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5131?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5131?src=pr&el=footer). Last update [84be482...3fe3e47](https://codecov.io/gh/huggingface/transformers/pull/5131?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>The note makes it clearer! Thanks!
As discussed, I think the confusion arises because of the naming: in the hub, most of the checkpoints with a task in the checkpoint name are full checkpoint with the specific task head. Sometimes there is `finetuned` in the checkpoint name, sometimes not.
This enables a user to directly use a fine-tuned checkpoint for the task at hand out of the box (without any training).
In the long run, it might be worth it to have explicit "guidelines" on checkpoint naming in the hub to avoid these confusions (which should also be mentioned in the model cards if there is any ambiguity). Namely, checkpoints that are general purpose (like `bert-base-uncased`) should be clearly distinguishable from the task-specific checkpoints (like `bert-large-uncased-whole-word-masking-finetuned-squad`). What do you think @julien-c?<|||||>IMO, the most important thing is to allow the authors to specify the example code. A possible design would be: auto-generated example by default but authors can overwrite. We can never completely rule out the ambiguity in the name without a super complicated protocol and long names. At least a code snippet can help. |
transformers | 5,130 | closed | added subtitle for recent contributors in readme | 06-19-2020 02:52:49 | 06-19-2020 02:52:49 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5130?src=pr&el=h1) Report
> Merging [#5130](https://codecov.io/gh/huggingface/transformers/pull/5130?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/84be482f6698fac822a5113735f2242c6d3abc76&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5130?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5130 +/- ##
=======================================
Coverage 77.19% 77.19%
=======================================
Files 133 133
Lines 22232 22232
=======================================
Hits 17162 17162
Misses 5070 5070
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5130?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5130/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.96% <0.00%> (ø)` | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5130?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5130?src=pr&el=footer). Last update [84be482...4341287](https://codecov.io/gh/huggingface/transformers/pull/5130?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>:shipit: |
|
transformers | 5,129 | closed | Add mbart-large-cc25, support translation finetuning | TODO:
- [x] fix config (max_length=100)
- [x] split test_modeling_mbart.py to new file
- [x] decoder_input_ids should start with decoder_start_token_id
- [x] AutoTokenizer
- [x] test finetuning. Requires adding `--src_lang` and `--tgt_lang` clargs.
- [x] Documentation
- [ ] Model Card
- [x] fix mbart tokenizer tests
- [x] tokenizer docs | 06-19-2020 00:35:45 | 06-19-2020 00:35:45 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5129?src=pr&el=h1) Report
> Merging [#5129](https://codecov.io/gh/huggingface/transformers/pull/5129?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d2a93991158f15993eba9ab421d82766b892f948&el=desc) will **increase** coverage by `0.25%`.
> The diff coverage is `92.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5129?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5129 +/- ##
==========================================
+ Coverage 76.84% 77.09% +0.25%
==========================================
Files 141 141
Lines 24685 24702 +17
==========================================
+ Hits 18969 19044 +75
+ Misses 5716 5658 -58
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5129?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5129/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.45% <92.00%> (+1.57%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5129/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.78% <0.00%> (-74.20%)` | :arrow_down: |
| [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5129/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5129/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.69% <0.00%> (-29.45%)` | :arrow_down: |
| [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5129/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `71.21% <0.00%> (-12.88%)` | :arrow_down: |
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5129/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5129/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `82.99% <0.00%> (-6.13%)` | :arrow_down: |
| [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5129/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5129/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.72% <0.00%> (-1.38%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5129/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `88.05% <0.00%> (-1.26%)` | :arrow_down: |
| ... and [4 more](https://codecov.io/gh/huggingface/transformers/pull/5129/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5129?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5129?src=pr&el=footer). Last update [d2a9399...3685a81](https://codecov.io/gh/huggingface/transformers/pull/5129?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>> Could we test the MBart tokenizer? Or do you think it's too similar to the XLM-R tokenizer to deserve its own test suite?
<3 that idea. Will do in this PR. Thanks for reminding me.<|||||>This does not need review at the moment. I need to fix the tests first.<|||||>Oh, and you have the `build_doc` failing (warnings make the build fail) with the following error:
```
tokenization_bart.py:docstring of transformers.MBartTokenizer.build_inputs_with_special_tokens:4:Unexpected indentation.
```<|||||>Hi Sam,
Thanks for your great work to add MBART-cc25.
I've been following issues and threads regarding MBART generation problems for a while now.
Are those problems fixed in this PR or is the model still generating English?<|||||>The model is still generating english, but I'm not sure whether that's a bug. I can finetune it to achieve good scores on WMT en-ro, and nobody has replied to https://github.com/pytorch/fairseq/issues/2258, so I moved on.<|||||>Sad :( I was really looking forward to trying it on language pairs that Marian does not support (e.g. English->Chinese, English->Arabic). Is there any way to get MBART to work without any fine-tuning?
The problem I'm facing is it always generates BOS tokens at the first decoding step even though the tgt_lang id is provided as the starting token. I tried forcing zero probability on BOS token at each step but then it generates gibberish text...<|||||>I don't know how the preprocessing is done in the original setup, but I found a potential problems in generating decoder_input_ids (i.e target sentence) for training. According to the documentation, it's supposed to have the format:
```
language code - tokens - eos
```
(which is supposedly different from the format of the input_ids), but the tokenizer currently generates for both source (input_ids) and target (decoder_input_ids) the current format:
```
tokens - eos - language code
```
Maybe this is why you are unable to generate? Could sb verify this?<|||||>This was a bug, but is fixed on master (by this PR) if you use `prepare_translation_batch`.
For example
```python
from transformers import MBartTokenizer
model_name = 'facebook/mbart-large-cc25'
tok = MBartTokenizer.from_pretrained(model_name)
src_text = ['I think I fixed the tokenizer']
tgt_text = ['Cred că am rezolvat tokenizatorul']
batch = tok.prepare_translation_batch(src_text, tgt_texts=tgt_text)
batch.input_ids # (*tokens, eos, lang_code)
=> tensor([[ 87, 5351, 87, 188347, 70, 47, 1098, 52825, 2, 250004]])
batch.decoder_input_ids # (lang_code, *tokens, eos)
=> tensor([[250020, 68523, 1362, 444, 102432, 18, 47, 1098, 164077, 202, 2]])
```
More info: mbart knows how to switch between src and tgt modes using the `set_lang` method: https://github.com/huggingface/transformers/blob/d6eab53058015483e9cbcbfee4bf900c3a8ab772/src/transformers/tokenization_bart.py#L186 |
transformers | 5,128 | closed | Pin `sphinx-rtd-theme` | 06-18-2020 22:03:41 | 06-18-2020 22:03:41 | ||
transformers | 5,127 | closed | Cached feature files - Naming introduces confusion/model mixing | # 🐛 Bug
The problem arises when using:
* [X] the official example scripts: glue.py
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [X] an official GLUE/SQUaD task: any example of GLUE
* [ ] my own task or dataset: (give details below)
In `run_glue.py` (and possibly in other run_* scripts interfaced with the new Trainer), the cache naming for the pre-processed features introduces confusion between models.
https://github.com/huggingface/transformers/blob/3d3e605affb792b78c918aac48f6bc82cfbf7e3e/src/transformers/data/datasets/glue.py#L87
`tokenizer.__class__.__name__` is `BertTokenizer`, `DistilBertTokenizer`, etc. which removes the information of which tokenization is used. For instance, `bert-base-uncased` and `bert-base-cased` would end up with the same file name while being potentially two really different files.
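Concretely, the naming boils down to something like this (simplified sketch of the scheme in glue.py):
```python
# both checkpoints load a BertTokenizer, so the cached-feature file names collide
for checkpoint in ["bert-base-uncased", "bert-base-cased"]:
    tokenizer_class_name = "BertTokenizer"  # tokenizer.__class__.__name__ for either checkpoint
    print(checkpoint, "->", "cached_{}_{}_{}_{}".format("train", tokenizer_class_name, 128, "mrpc"))
```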
In scripts not interfaced yet with the Trainer, the `model_name_or_path` is used, which prevents conflicting files from having the same name:
https://github.com/huggingface/transformers/blob/3d3e605affb792b78c918aac48f6bc82cfbf7e3e/examples/text-classification/run_xnli.py#L315 | 06-18-2020 21:04:42 | 06-18-2020 21:04:42 | Good point. We can do a simple hack for the time being but the cleanest way out will be to use 🤗nlp to process the data (which I'm adding at the moment but probably won't be fully operational in the examples before early next week).
🤗nlp has a lot more reliable and general hash-naming caching scheme for all data processing based on the function and all the inputs/outputs when processing the dataset (using some smart serialization with `dill` and hashing algo).<|||||>Ok! I might go back to the old scripts to experiment in the meantime.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,126 | closed | [fix] Move _adjust_logits above postprocess to fix Marian.generate | Fixes slow gpu test failures caused by #5031
This logic was already discussed in a previous PR, but, to summarize, the marian model loves to predict `pad_token_id` and if we allow pad_token_id's high score to enter the softmax we get very low probabilities for everything else.
I also rename `prepare_logits_for_generation` -> `_adjust_logits` | 06-18-2020 20:56:04 | 06-18-2020 20:56:04 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5126?src=pr&el=h1) Report
> Merging [#5126](https://codecov.io/gh/huggingface/transformers/pull/5126?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3d3e605affb792b78c918aac48f6bc82cfbf7e3e&el=desc) will **increase** coverage by `0.06%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5126?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5126 +/- ##
==========================================
+ Coverage 77.26% 77.32% +0.06%
==========================================
Files 133 133
Lines 22163 22163
==========================================
+ Hits 17124 17138 +14
+ Misses 5039 5025 -14
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5126?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5126/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.25% <100.00%> (ø)` | |
| [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/5126/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `88.88% <100.00%> (ø)` | |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5126/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.16% <100.00%> (ø)` | |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5126/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `78.65% <0.00%> (+2.29%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5126?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5126?src=pr&el=footer). Last update [3d3e605...c6bd992](https://codecov.io/gh/huggingface/transformers/pull/5126?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Don't really think that `_adjust_logits` is a better name. Also why the underscore `_` ? we don't do this for `prepare_input_ids_for_generation()` either<|||||>Because it's not user facing, but I guess that's inconsistent. What do you think about `adjust_logits`?<|||||>Hmm, I like `prepare_logits_for_generation` better than `adjust_logits`, since it has the name `generation` in it, so people know that this function only belongs to generation in bart & marian |
transformers | 5,125 | closed | [tokenizers] Fix #5081 and improve backward compatibility | Adds a generic fallback for `get_special_tokens_mask()` when there is only one input sentence and `already_has_special_tokens=True`.
It's still recommended to use `return_special_tokens_mask=True` in any encoding method for efficiency, but this should work as well in the above-indicated cases.
Removed one test related to `get_special_tokens_mask()` which didn't test its operation in a reliable manner across tokenizers. | 06-18-2020 20:54:02 | 06-18-2020 20:54:02 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5125?src=pr&el=h1) Report
> Merging [#5125](https://codecov.io/gh/huggingface/transformers/pull/5125?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d2a7c86dc33d6def6dba44f6ed2b71e8a1644130&el=desc) will **decrease** coverage by `0.04%`.
> The diff coverage is `33.33%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5125?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5125 +/- ##
==========================================
- Coverage 78.04% 77.99% -0.05%
==========================================
Files 138 138
Lines 23766 23772 +6
==========================================
- Hits 18548 18541 -7
- Misses 5218 5231 +13
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5125?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5125/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `94.81% <ø> (ø)` | |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5125/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `91.07% <33.33%> (-0.63%)` | :arrow_down: |
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5125/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-5.42%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5125/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.81% <0.00%> (-0.15%)` | :arrow_down: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5125/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.57% <0.00%> (+1.18%)` | :arrow_up: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5125/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.90% <0.00%> (+1.38%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5125?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5125?src=pr&el=footer). Last update [d2a7c86...e4b0dc0](https://codecov.io/gh/huggingface/transformers/pull/5125?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,124 | closed | SLOW GPU test Failures | ```bash
=========================== short test summary info ============================
FAILED tests/test_modeling_marian.py::TestMarian_RU_FR::test_batch_generation_ru_fr
FAILED tests/test_modeling_marian.py::TestMarian_MT_EN::test_batch_generation_mt_en
==== 2 failed, 1110 passed, 392 skipped, 361 warnings in 1347.28s (0:22:27) ====
```
https://github.com/huggingface/transformers/runs/782480196?check_suite_focus=true | 06-18-2020 20:33:26 | 06-18-2020 20:33:26 | Status Badge:
(status badge image; URL truncated) |
transformers | 5,123 | closed | Fine-Tuning GPT2 | This post references a page that no longer exists (the run_lm_finetuning.py). Where is the updated page?
"You can look into GPT-2's training on the CLM task, which is done on WikiText-2 in this example."
_Originally posted by @LysandreJik in https://github.com/huggingface/transformers/issues/1145#issuecomment-526322616_ | 06-18-2020 20:21:45 | 06-18-2020 20:21:45 | Hi! This script has been replaced by `language-modeling/run_language_modeling.py`, because it now handles pre-training as well, not only fine-tuning. [Here's a useful link.](https://github.com/huggingface/transformers/tree/master/examples/language-modeling) |
transformers | 5,122 | closed | Fix #5114 | This fixes #5114, with a test to make sure there is no regression again. | 06-18-2020 20:18:36 | 06-18-2020 20:18:36 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5122?src=pr&el=h1) Report
> Merging [#5122](https://codecov.io/gh/huggingface/transformers/pull/5122?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ca2d0f98c4a89d50b79ddb06b59b6bffc31ff137&el=desc) will **decrease** coverage by `0.12%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5122?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5122 +/- ##
==========================================
- Coverage 77.39% 77.27% -0.13%
==========================================
Files 133 133
Lines 22167 22167
==========================================
- Hits 17157 17130 -27
- Misses 5010 5037 +27
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5122?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/5122/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `98.33% <100.00%> (ø)` | |
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5122/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-4.78%)` | :arrow_down: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5122/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `76.35% <0.00%> (-2.30%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5122/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.18% <0.00%> (+0.12%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5122/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.59% <0.00%> (+0.15%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5122?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5122?src=pr&el=footer). Last update [ca2d0f9...bee759a](https://codecov.io/gh/huggingface/transformers/pull/5122?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>👍 |
transformers | 5,121 | closed | AutoTokenizer supports mbart-large-en-ro | - adds `MBartConfig` so that `MBartTokenizer` can have it's own key in `TOKENIZER_MAPPING`.
- I will remove all mentions of `sshleifer/mbart-large-en-ro` right before this is merged.
It's only difference with `facebook/mbart-large-en-ro` is that `config.model_type='mbart'`. Since I didn't want to break existing Mbart callers, I will do the s3 migration right before I merge.
| 06-18-2020 19:48:07 | 06-18-2020 19:48:07 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5121?src=pr&el=h1) Report
> Merging [#5121](https://codecov.io/gh/huggingface/transformers/pull/5121?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/355954ffca798bb81d9db8886e30ce10f11e8a40&el=desc) will **increase** coverage by `0.01%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5121?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5121 +/- ##
==========================================
+ Coverage 77.28% 77.29% +0.01%
==========================================
Files 133 133
Lines 22134 22136 +2
==========================================
+ Hits 17107 17111 +4
+ Misses 5027 5025 -2
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5121?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5121/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `93.02% <100.00%> (ø)` | |
| [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5121/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `93.61% <100.00%> (+0.28%)` | :arrow_up: |
| [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5121/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.67% <100.00%> (ø)` | |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5121/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.06% <0.00%> (+0.12%)` | :arrow_up: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5121/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+0.23%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5121?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5121?src=pr&el=footer). Last update [355954f...65b8c8a](https://codecov.io/gh/huggingface/transformers/pull/5121?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,120 | closed | AutoTokenizer.from_pretrained('facebook/mbart-large-en-ro') fails | It is not in TOKENIZER_MAPPING because it has same config as Bart.
Need to make separate MBartConfig.
| 06-18-2020 19:12:19 | 06-18-2020 19:12:19 | |
transformers | 5,119 | closed | [cleanup] remove redundant code in SummarizationDataset | 06-18-2020 18:57:02 | 06-18-2020 18:57:02 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5119?src=pr&el=h1) Report
> Merging [#5119](https://codecov.io/gh/huggingface/transformers/pull/5119?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/355954ffca798bb81d9db8886e30ce10f11e8a40&el=desc) will **decrease** coverage by `0.00%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5119?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5119 +/- ##
==========================================
- Coverage 77.28% 77.28% -0.01%
==========================================
Files 133 133
Lines 22134 22134
==========================================
- Hits 17107 17106 -1
- Misses 5027 5028 +1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5119?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.00% <0.00%> (-0.41%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.26% <0.00%> (-0.16%)` | :arrow_down: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+0.23%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5119?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5119?src=pr&el=footer). Last update [355954f...09ff62a](https://codecov.io/gh/huggingface/transformers/pull/5119?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 5,118 | closed | UnboundLocalError: local variable 'next_tokens' referenced before assignment when using Generate() | # 🐛 Bug
I have pre-trained GPT2 on a summarisation dataset such that summarisation is a language modelling task i.e. input = concat( padded_to_max_len(body , "TL;DR:" , summary)).
For some reason, this error occurs when I try to generate via beam search using GPT2's with language modelling head. Here is my code:
```python
from transformers import AutoTokenizer, GPT2LMHeadModel
import torch
# define tokenizer
tokenizer_kwargs = {"bos_token": "<|startoftext|>", "eos_token": "<|endoftext|>", "pad_token": "<|pad|>"}
tokenizer = AutoTokenizer.from_pretrained("gpt2", **tokenizer_kwargs)
# define model
model = GPT2LMHeadModel.from_pretrained("gpt2", pad_token_id=tokenizer.eos_token_id)
# define input
input_ids = torch.tensor(tokenizer.encode("some text that ends in TL;DR:")).unsqueeze(0)
# attempt to generate
y_pred_tensor = model.generate(input_ids=input_ids,
num_beams=5,
early_stopping=True,
no_repeat_ngram_size=2,
max_length=100
)
```
```bash
File "/Users/christopherdoyle/cp_projects/scribbl-ai/Scribbl/Scribbl/summarizer/models.py", line 147, in summarize
y_pred_tensor = self.model.generate(input_ids=input_tensor,
File "/Users/christopherdoyle/cp_projects/scribbl-ai/Scribbl/venv/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 15, in decorate_context
return func(*args, **kwargs)
File "/Users/christopherdoyle/cp_projects/scribbl-ai/Scribbl/venv/lib/python3.8/site-packages/transformers/modeling_utils.py", line 1100, in generate
output = self._generate_beam_search(
File "/Users/christopherdoyle/cp_projects/scribbl-ai/Scribbl/venv/lib/python3.8/site-packages/transformers/modeling_utils.py", line 1499, in _generate_beam_search
(token_id % vocab_size).item() is not eos_token_id for token_id in next_tokens[batch_idx]
UnboundLocalError: local variable 'next_tokens' referenced before assignment
```
Model I am using: ```GPT2LMHeadModel```
Language I am using the model on (English, Chinese ...): ```en```
Rather strangely, it works when ```max_length = 1024```, but not with smaller values.
## To reproduce
Running on CPU
python==3.8, transformers==2.11.0, torch==1.5.0
| 06-18-2020 17:58:03 | 06-18-2020 17:58:03 | @patrickvonplaten, do you want to take a look?<|||||>Hey @chrisdoyleIE ,
Using your code example above (I corrected some typos and missing imports), I am not able to reproduce the error. If a longer input is needed to produce this error, please provide all necessary code to reproduce this error. Ideally, I should be able to copy-paste the code into a console and get the same error as you :-)<|||||>Hey @patrickvonplaten ,
I'll do some digging and see if I can't reproduce it myself such that it's easily paste-able and then I can share this code (I currently have a few custom packages calling each other, which is hairier than I'd like and not trivial to insert into an issue).<|||||>I found a similar error when doing summarization, and just wanted to follow up on this. I have been stuck on this for a little bit now and I just wanted to check if there was a simple user-end solution to this, maybe incorrect arguments, etc.
This is a simplified notebook with the error: https://colab.research.google.com/drive/1Fj74x2NDJbhsty-oXzfhOCO185T80zw3?usp=sharing<|||||>Thanks for the notebook @MathewPerez! Will check it now <|||||>Awesome, I can reproduce the error - will look at a fix now :-) <|||||>The problem is that `max_length` is not bigger than `cur_len`, so the model will not produce any text.
This will fix the problem:
```python
outputs = model.generate(input_ids=input_ids, num_beams=3, max_length=75)
``` |
transformers | 5,117 | closed | An Implementation of ERNIE For Language Understanding (including Pre-training models and Fine-tuning tools) | # 🌟 New model addition
ERNIE 2.0 is a continual pre-training framework for language understanding in which pre-training tasks can be incrementally built and learned through multi-task learning. ERNIE 2.0 builds a strong basis for nearly every NLP task: Text Classification, Ranking, NER, machine reading comprehension, text generation and so on.
## Model description
<!-- Important information -->
https://github.com/PaddlePaddle/ERNIE
## Open source status
* [ ] the model implementation is available: (give details)
* [ ] the model weights are available: (give details)
* [ ] who are the authors: (mention them, if possible by @gh-username)
| 06-18-2020 16:42:05 | 06-18-2020 16:42:05 | Can we update to add this into transformers<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,116 | closed | support local_files_only option for tf models | 06-18-2020 16:33:08 | 06-18-2020 16:33:08 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5116?src=pr&el=h1) Report
> Merging [#5116](https://codecov.io/gh/huggingface/transformers/pull/5116?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/355954ffca798bb81d9db8886e30ce10f11e8a40&el=desc) will **increase** coverage by `0.00%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5116?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5116 +/- ##
=======================================
Coverage 77.28% 77.29%
=======================================
Files 133 133
Lines 22134 22135 +1
=======================================
+ Hits 17107 17109 +2
+ Misses 5027 5026 -1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5116?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5116/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.44% <100.00%> (+0.01%)` | :arrow_up: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5116/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.00% <0.00%> (-0.41%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5116/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.06% <0.00%> (+0.12%)` | :arrow_up: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5116/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+0.23%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5116?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5116?src=pr&el=footer). Last update [355954f...fa3e6dc](https://codecov.io/gh/huggingface/transformers/pull/5116?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 5,115 | closed | [cleanup] generate_beam_search comments | This PR does two small things:
1) try to make `generate_beam_search`'s inline comments work with the code, and fix some typos.
2) make the `cur_len` argument to `BeamHypothesis.is_done` mandatory, since it is always specified. This simplifies the logic a tiny bit.
| 06-18-2020 16:26:08 | 06-18-2020 16:26:08 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5115?src=pr&el=h1) Report
> Merging [#5115](https://codecov.io/gh/huggingface/transformers/pull/5115?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/355954ffca798bb81d9db8886e30ce10f11e8a40&el=desc) will **increase** coverage by `0.01%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5115?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5115 +/- ##
==========================================
+ Coverage 77.28% 77.30% +0.01%
==========================================
Files 133 133
Lines 22134 22130 -4
==========================================
+ Hits 17107 17108 +1
+ Misses 5027 5022 -5
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5115?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5115/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.85% <100.00%> (+0.42%)` | :arrow_up: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5115/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.04% <100.00%> (+0.09%)` | :arrow_up: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5115/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+0.23%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5115?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5115?src=pr&el=footer). Last update [355954f...5d77afc](https://codecov.io/gh/huggingface/transformers/pull/5115?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Great! |
transformers | 5,114 | closed | data_collator.py does not allow NoneType labels for test set predictions on Glue | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Distilbert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Open the example Colab for text-classification from here https://huggingface.co/transformers/examples.html
2. Try to run the prediction function found in run_glue.py to predict on the official Glue test set.
3. The error is as shown below.
Earlier, this worked with the exact same program; since a recent update, this error shows up.
`TypeError Traceback (most recent call last)
<ipython-input-16-9eecdd4d48b1> in <module>()
2 output_mode = "classification"
3
----> 4 predictions = trainer.predict(test_dataset=test_dataset).predictions
5 if output_mode == "classification":
6 predictions = np.argmax(predictions, axis=1)
7 frames
/usr/local/lib/python3.6/dist-packages/transformers/data/data_collator.py in default_data_collator(features)
45 if "label" in first:
46 dtype = torch.long if type(first["label"]) is int else torch.float
---> 47 batch["labels"] = torch.tensor([f["label"] for f in features], dtype=dtype)
48 elif "label_ids" in first:
49 if isinstance(first["label_ids"], torch.Tensor):
TypeError: must be real number, not NoneType`
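(Not part of the original report — a minimal workaround sketch, assuming the unlabeled test examples simply carry `label=None`; it drops that field before calling the default collator:)
```python
from transformers import default_data_collator  # import path may differ by version

def collate_drop_none_labels(features):
    cleaned = []
    for f in features:
        d = dict(f) if isinstance(f, dict) else vars(f).copy()
        if d.get("label", 0) is None:
            d.pop("label")  # avoid torch.tensor([None, ...]) inside default_data_collator
        cleaned.append(d)
    return default_data_collator(cleaned)

# Hypothetical usage: Trainer(..., data_collator=collate_drop_none_labels)
```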
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
The error can be seen in the Colab notebook here https://colab.research.google.com/drive/1H_92qdsOOql2hS210qNrfMEEMRcAoHD_?usp=sharing
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: Colab
- Python version: NA
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?:
| 06-18-2020 15:28:43 | 06-18-2020 15:28:43 | |
transformers | 5,113 | closed | GPU out of memory with Reformer enwik8 model | # ❓ Questions & Help
I'm trying to run the pretrained model `google/reformer-enwik8` but I'm getting CUDA out of memory errors unless I limit the sequences to one-fourth of the model capacity (~16k instead of the 65k).
This happens with a Titan Xp with 12GB RAM; I expected all the tricks of the Reformer to make the model with the original sequence size fit.
The code I'm running:
```
model = ReformerModelWithLMHead.from_pretrained('google/reformer-enwik8')
model.cuda()
config = model.config
max_len = config.max_position_embeddings
dataset = Enwik8Dataset(
path, max_len, pad_id=config.pad_token_id,
eos_id=config.eos_token_id)
loader = DataLoader(dataset, batch_size=1, shuffle=False)
acc_loss = 0
for batch in loader:
with torch.no_grad():
batch_loss = model(input_ids=batch, labels=batch)[0]
acc_loss += batch_loss.mean().item()
acc_loss /= len(dataset)
```
The Enwik8Dataset inherits from Dataset and does the basic data preprocessing, I can post the code if necessary.
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**: https://stackoverflow.com/questions/62373033/gpu-out-of-memory-with-enwik8-reformer-from-huggingface
| 06-18-2020 15:24:41 | 06-18-2020 15:24:41 | Did you try to train with half precision using the apex/amp package?<|||||>No, I didn't. Anyway, this was only evaluating the pretrained model.<|||||>I now have installed nvidia apex and tried training a new model with fp16. It ran out of memory after around 10% of the evaluation loop. I didn't expect it, since the model shouldn't need more memory in the middle of evaluation, with all batches having the full sequence length (in my case, 16k).
Is there maybe something to optimize GPU memory usage?<|||||>Hmm, lemme check...
What is the sequence length and batch size you use excatly? Also, you can reduce `num_hashes` to save memory.<|||||>I'm using this training setup with the `Trainer` from `huggingface`:
```
axial_pos_emb_dim = 128, 384
hidden_dim = sum(axial_pos_emb_dim)
axial_pos_max = 64, 256 # the product of this is the maximum length
max_length = axial_pos_max[0] * axial_pos_max[1]
hidden_dropout = 0.2
attn_dropout = 0.1
ff_dim = 2 * hidden_dim
num_heads = 8
dim_per_head = hidden_dim // num_heads
num_layers = 6
layers = ['local', 'lsh'] * (num_layers // 2)
chunk_size = 0
bucket_size = 64
num_hashes = 2
vocab_size = 258
config = ReformerConfig(
dim_per_head, layers, chunk_size_feed_forward=chunk_size,
axial_pos_embds_dim=axial_pos_emb_dim,
axial_pos_shape=axial_pos_max,
max_position_embeddings=max_length,
eos_token_id=1, feed_forward_size=ff_dim,
hidden_dropout_prob=hidden_dropout, hidden_size=hidden_dim,
lsh_attention_probs_dropout_prob=attn_dropout,
local_attention_probs_dropout_prob=attn_dropout,
num_attention_heads=num_heads, num_buckets=None,
pad_token_id=0, lsh_attn_chunk_length=bucket_size,
num_hashes=num_hashes, vocab_size=vocab_size)
model = ReformerModelWithLMHead(config)
training_args = TrainingArguments(
'model', do_train=True, do_eval=True,
do_predict=False, evaluate_during_training=True,
gradient_accumulation_steps=1,
learning_rate=0.001,
logging_dir='model/tensorboard',
logging_steps=5,
save_steps=1000,
per_device_train_batch_size=1,
per_device_eval_batch_size=1,
fp16=True)
```
Unless I'm doing something wrong, it's a batch size of 1 (both for training and evaluating) and a sequence length of 16384 (64 * 256).<|||||>Ok, so I found that the main culprit was that the `Trainer` was storing all model predictions in GPU memory during evaluation at https://github.com/huggingface/transformers/blob/c01480bba3b2f0bd8516679476235f4701c21b3b/src/transformers/trainer.py#L775
Passing `prediction_loss_only=False` avoided that. By the way, I believe this should be the default value in the `Trainer`, and that the `cat` operation could use cpu tensors, in case the validation dataset is big. <|||||>Out of curiosity, how big is your validation dataset/how large is the in-memory size of `preds` in that prediction loop?<|||||>@julien-c this happened with the enwik8 test set, 5M characters.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,112 | closed | Strange exception | # ❓ Questions & Help
Hi everybody,
I keep experimenting with the following colab:
https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb#scrollTo=X9_Go99fvW-z
however I got a new error after the fine-tuning starts:
Exception in thread Thread-20:
Traceback (most recent call last):
File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/usr/lib/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/parallel_loader.py", line 141, in _loader_worker
_, data = next(data_iter)
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 354, in __next__
data = self._next_data()
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 394, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
return self.collate_fn(data)
File "<ipython-input-6-cc30f67d4a07>", line 34, in collate_batch
input_ids = torch.stack([example['input_ids'] for example in batch])
File "<ipython-input-6-cc30f67d4a07>", line 34, in <listcomp>
input_ids = torch.stack([example['input_ids'] for example in batch])
TypeError: string indices must be integers
Is this due to the new changes introduced yesterday for the DataCollator?
| 06-18-2020 15:14:42 | 06-18-2020 15:14:42 | The notebook is not using the collation function from `transformers` but the `T2TDataCollator` it defines. Apart from stopping subclassing `DataCollator` (as it's not a class anymore on master) this shouldn't be a problem.<|||||>Hi, @antoniomastro1996 , if you are using transformers from master branch then there are few changes.
As @sgugger said, `DataCollator` is not a class anymore, so just don't subclass. Also as datacollator is now a `callable`, rename the `batch_collate` method to `__call__`.
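A minimal sketch of what that change could look like (hypothetical — the field names follow the notebook's dataset and are assumptions):
```python
import torch

class T2TDataCollator:
    # Plain callable; no DataCollator subclassing needed on master.
    def __call__(self, batch):
        input_ids = torch.stack([example["input_ids"] for example in batch])
        attention_mask = torch.stack([example["attention_mask"] for example in batch])
        target_ids = torch.stack([example["target_ids"] for example in batch])
        return {
            "input_ids": input_ids,
            "attention_mask": attention_mask,
            "lm_labels": target_ids,  # argument name depends on the transformers version
        }
```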
Or you can use version 2.11.0 and it'll work as it is. I'll update the notebook once the new version is released. Let me know if you run into any other issues.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,111 | closed | [Marian] Run predictions on GPU? RuntimeError: Expected object of device type cuda but got device type cpu for argument #3 'index' in call to _th_index_select` | Is it possible to run the Marian NMT model predictions on GPU, should not it be faster? Once I try to move the model to GPU I get the error described below. Is there any quick fix to the code to achieve this?
```
torch.__version__
Out[15]: '1.5.0'
```
The code to reproduce:
```
from transformers import MarianMTModel, MarianTokenizer
model_name = 'Helsinki-NLP/opus-mt-fr-en'
model = MarianMTModel.from_pretrained(model_name).cuda()
tokenizer = MarianTokenizer.from_pretrained(model_name)
```
Then I check the current device:
```
model.device
Out[33]: device(type='cuda', index=0)
```
Then I try to predict following the examples from documentation:
```
batch = tokenizer.prepare_translation_batch(list_of_sents)
gen = model.generate(**batch, num_beam=1, early_stopping = True, no_repeat_ngram_size = 3)
File "<ipython-input-9-54f44d46fb14>", line 1, in <module>
gen = model.generate(**batch, num_beam=1, early_stopping = True, no_repeat_ngram_size = 3)
File "/home/ms/anaconda3/envs/git_face/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 15, in decorate_context
return func(*args, **kwargs)
File "/home/ms/anaconda3/envs/git_face/lib/python3.7/site-packages/transformers/modeling_utils.py", line 1146, in generate
encoder_outputs: tuple = encoder(input_ids, attention_mask=attention_mask)
File "/home/ms/anaconda3/envs/git_face/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/ms/anaconda3/envs/git_face/lib/python3.7/site-packages/transformers/modeling_bart.py", line 292, in forward
inputs_embeds = self.embed_tokens(input_ids) * self.embed_scale
File "/home/ms/anaconda3/envs/git_face/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/ms/anaconda3/envs/git_face/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 114, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "/home/ms/anaconda3/envs/git_face/lib/python3.7/site-packages/torch/nn/functional.py", line 1724, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected object of device type cuda but got device type cpu for argument #3 'index' in call to _th_index_select`
``` | 06-18-2020 15:07:34 | 06-18-2020 15:07:34 | Is your `batch` on the correct device too?<|||||>Thank you, I moved `batch` to the GPU and it works.
`batch = tokenizer.prepare_translation_batch(sentences).to('cuda')`<|||||>More generally: `batch.to(model.device)`. |
transformers | 5,110 | closed | XLMForMultipleChoice | This PR adds `XLMForMultipleChoice`. One of the missing models in this [project](https://github.com/huggingface/transformers/projects/17)
@sgugger | 06-18-2020 14:57:42 | 06-18-2020 14:57:42 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5110?src=pr&el=h1) Report
> Merging [#5110](https://codecov.io/gh/huggingface/transformers/pull/5110?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/355954ffca798bb81d9db8886e30ce10f11e8a40&el=desc) will **increase** coverage by `0.03%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5110?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5110 +/- ##
==========================================
+ Coverage 77.28% 77.32% +0.03%
==========================================
Files 133 133
Lines 22134 22164 +30
==========================================
+ Hits 17107 17139 +32
+ Misses 5027 5025 -2
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5110?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/5110/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.16% <ø> (ø)` | |
| [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5110/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `70.76% <ø> (ø)` | |
| [src/transformers/modeling\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5110/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `90.23% <100.00%> (+0.98%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5110/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.42% <0.00%> (ø)` | |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5110/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.06% <0.00%> (+0.12%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5110?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5110?src=pr&el=footer). Last update [355954f...439c7d0](https://codecov.io/gh/huggingface/transformers/pull/5110?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>In the end this was added by #5614, so closing here. |
transformers | 5,109 | closed | Flaky tests sometimes caused by S3 failures | # 🐛 Bug
## Information
The test: `tests/test_pipelines.py::MonoColumnInputTestCase::test_torch_fill_mask` is flaky on circle ci: https://app.circleci.com/pipelines/github/huggingface/transformers/7734/workflows/7c51cfcf-5425-4172-aa15-ed677e37f7fc/jobs/49891/steps
## To reproduce
Not sure. It pops up from time to time on circle ci.
Need to investigate more
## Expected behavior
The test should not be flaky
## Environment info
On circle ci | 06-18-2020 14:43:28 | 06-18-2020 14:43:28 | I can't replicate (I understand that's the point :))
The traceback from your link is:
```python
OSError: Can't load weights for 'sshleifer/tiny-distilroberta-base'. Make sure that:
E
E - 'sshleifer/tiny-distilroberta-base' is a correct model identifier listed on 'https://huggingface.co/models'
E
E - or 'sshleifer/tiny-distilroberta-base' is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt.
```
Contents are fine, it seems:

@julien-c have you seen similar intermittent S3 failures?<|||||>From what I experience, this happens not just for the same model, but for different ones. Very hard to traceback what is going on there IMO.<|||||>Yes, I had a similar circleci failure trying to get the bert tokenizer yesterday.
<|||||>I have found a clue. I hypothesize that some models have user posted metadata that somehow ducks them up.
I have never seen flaky bart-large failure, and there is no metadata:
```bash
aws s3api head-object --bucket models.huggingface.co --key bert/facebook/bart-large/config.json
=> bytes 1264 application/json "faf4c3c00764dc47a3a10a63a004dcc3" Fri, 24 Apr 2020 15:58:48 GMT JhIFsOvvLtrLn0vJjGNN6ZhJGUlbXBEP
```
whereas there are `Helsinki-NLP/opus-mt-en-ROMANCE` failures, and metadata:
```bash
aws s3api head-object --bucket models.huggingface.co --key bert/Helsinki-NLP/opus-mt-en-ROMANCE/config.json
=> bytes 1113 text/plain "13ca8d49ee7f02a26f935cb4a60e6557" Tue, 12 May 2020 22:39:10 GMT jAI94kJ_exk0tG6z0Yr.Rea4_j0g02Ih
METADATA atime:1589321950/ctime:1589321949/gid:1007/gname:shleifer/md5:13ca8d49ee7f02a26f935cb4a60e6557/mode:33188/mtime:1589321949/uid:1006/uname:shleifer
```
Now I need to figure out how to get rid of the metadata to test my hypothesis, if anyone knows how.
**Update:**
```bash
bytes 474 application/json "622babbd58848ec69a2433ba0c6edab3" Tue, 12 May 2020 01:26:15 GMT UKLWouonOr8duRlrHqH4.iNZEF5bmOQY
METADATA atime:1589246769/ctime:1589246726/gid:20/gname:staff/md5:622babbd58848ec69a2433ba0c6edab3/mode:33188/mtime:1589246726/uid:501/uname:shleifer
```
also shows metadata, and there are flaky failures with that model.
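(Not from this thread — one possible way to strip that user metadata is to copy each object onto itself with a replaced, empty metadata set:)
```bash
# Hypothetical cleanup for a single file; repeat per object or prefix.
aws s3 cp \
  s3://models.huggingface.co/bert/sshleifer/tiny-distilroberta-base/config.json \
  s3://models.huggingface.co/bert/sshleifer/tiny-distilroberta-base/config.json \
  --metadata-directive REPLACE
```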
<|||||>Should be easy, I can guide you through the S3 doc if needed.
I doubt that's the reason though. What about intermittent connectivity issues? They seem more likely.<|||||>It could certainly be connectivity. I manually deleted the metadata for all contents of `Helsinki-NLP/opus-mt-en-ROMANCE/` and `sshleifer/tiny-distilroberta-base`, so if there are more failures there we will know that my theory is wrong. I think the metadata is caused by using `s3cmd` instead of `awscli`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,108 | closed | Create README.md | 06-18-2020 14:31:16 | 06-18-2020 14:31:16 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5108?src=pr&el=h1) Report
> Merging [#5108](https://codecov.io/gh/huggingface/transformers/pull/5108?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/355954ffca798bb81d9db8886e30ce10f11e8a40&el=desc) will **decrease** coverage by `0.08%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5108?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5108 +/- ##
==========================================
- Coverage 77.28% 77.20% -0.09%
==========================================
Files 133 133
Lines 22134 22134
==========================================
- Hits 17107 17088 -19
- Misses 5027 5046 +19
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5108?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5108/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `76.35% <0.00%> (-2.30%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5108/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.49% <0.00%> (-0.94%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5108/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `75.00% <0.00%> (-0.41%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5108/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.06% <0.00%> (+0.12%)` | :arrow_up: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5108/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+0.23%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5108?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5108?src=pr&el=footer). Last update [355954f...6330b4e](https://codecov.io/gh/huggingface/transformers/pull/5108?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 5,107 | closed | Create README.md | @julien-c check out that dataset meta tag is right | 06-18-2020 13:40:25 | 06-18-2020 13:40:25 | > @julien-c check out that dataset meta tag is right
It seems everything is right |
transformers | 5,106 | closed | How can I initialize RobertaForSequenceClassification empty? | I have my own custom dataset which is completely different from the one RoBERTa was trained on. Using ByteLevelBPETokenizer, I am creating vocab.json and merges.txt. I am using these two files to initialize RobertaTokenizerFast for encoding my corpus. Now I am training a RobertaForSequenceClassification for a binary classification problem. When I initialize RobertaForSequenceClassification with any of the pre-trained models I am getting
`IndexError: index out of range in self` on the RAM while on GPU I am getting `RuntimeError: CUDA error: an illegal memory access was encountered` . I have followed other [issues ](https://github.com/pytorch/pytorch/issues/21819) but of no help. My understanding since I am creating my own vocabulary, some of the tokens are not in pre-trained model. So is there a way to initialize RobertaForSequenceClassification empty or to train this classification model for my dataset? | 06-18-2020 13:31:51 | 06-18-2020 13:31:51 | Hi! If you want to initialize a new `RobertaForSequenceClassification` model you can do so as such:
```py
from transformers import RobertaForSequenceClassification, RobertaConfig
config = RobertaConfig(
# put the args you need, like the vocab size
vocab_size=100
)
model = RobertaForSequenceClassification(config)
```<|||||>I had the same problem when i try to finetune RoBERTa model with my own tokenizer as @raj5287 did. I have already updated all packages (pytorch, transformers)
Code:
> tokenizer = RobertaTokenizerFast.from_pretrained(tokenizer_path)
> model = RobertaForMaskedLM.from_pretrained(self.config)
> dataset = LineByLineTextDataset(
> tokenizer=tokenizer,
> file_path=input_text_path,
> block_size=512,
> )
>
> data_collator = DataCollatorForLanguageModeling(
> tokenizer=tokenizer, mlm=True, mlm_probability=0.15
> )
> training_args = TrainingArguments(
> output_dir=output_path,
> overwrite_output_dir=True,
> num_train_epochs=3,
> per_gpu_train_batch_size=8,
> save_steps=50_000,
> save_total_limit=2,
> )
>
> trainer = Trainer(
> model=model,
> args=training_args,
> data_collator=data_collator,
> train_dataset=dataset,
> prediction_loss_only=True
> )
> trainer.train()
> trainer.save_model(output_path)
>
Output:
File "Fine_tuning_tokenizer.py", line 106, in <module>
main(sys.argv[1:])
File "Fine_tuning_tokenizer.py", line 94, in main
modelRoberta.fine_tune_LM(inputfile, outputDir,tokenizer_path=outputDir)
File "/sentiment-embeddings/projeto-modularizar/src/uff/ic/mell/sentimentembedding/modelos/modelo_roberta.py", line 58, in fine_tune_LM
trainer.train()
File "/home/apps/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 549, in train
tr_loss += self.training_step(model, inputs)
File "/home/apps/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 762, in training_step
return loss.item()
RuntimeError: CUDA error: an illegal memory access was encountered<|||||>> I had the same problem when i try to finetune RoBERTa model with my own tokenizer as @raj5287 did. I have already updated all packages (pytorch, transformers)
>
> Code:
>
> > tokenizer = RobertaTokenizerFast.from_pretrained(tokenizer_path)
> > model = RobertaForMaskedLM.from_pretrained(self.config)
> > dataset = LineByLineTextDataset(
> > tokenizer=tokenizer,
> > file_path=input_text_path,
> > block_size=512,
> > )
> > ```
> > data_collator = DataCollatorForLanguageModeling(
> > tokenizer=tokenizer, mlm=True, mlm_probability=0.15
> > )
> > training_args = TrainingArguments(
> > output_dir=output_path,
> > overwrite_output_dir=True,
> > num_train_epochs=3,
> > per_gpu_train_batch_size=8,
> > save_steps=50_000,
> > save_total_limit=2,
> > )
> >
> > trainer = Trainer(
> > model=model,
> > args=training_args,
> > data_collator=data_collator,
> > train_dataset=dataset,
> > prediction_loss_only=True
> > )
> > trainer.train()
> > trainer.save_model(output_path)
> > ```
>
> Output:
>
> File "Fine_tuning_tokenizer.py", line 106, in
> main(sys.argv[1:])
> File "Fine_tuning_tokenizer.py", line 94, in main
> modelRoberta.fine_tune_LM(inputfile, outputDir,tokenizer_path=outputDir)
> File "/sentiment-embeddings/projeto-modularizar/src/uff/ic/mell/sentimentembedding/modelos/modelo_roberta.py", line 58, in fine_tune_LM
> trainer.train()
> File "/home/apps/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 549, in train
> tr_loss += self.training_step(model, inputs)
> File "/home/apps/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 762, in training_step
> return loss.item()
> RuntimeError: CUDA error: an illegal memory access was encountered
Is it ok to use "DataCollatorForLanguageModeling" in a classification task? <|||||>> I had the same problem when i try to finetune RoBERTa model with my own tokenizer as @raj5287 did. I have already updated all packages (pytorch, transformers)
>
> Code:
>
> > tokenizer = RobertaTokenizerFast.from_pretrained(tokenizer_path)
> > model = RobertaForMaskedLM.from_pretrained(self.config)
> > dataset = LineByLineTextDataset(
> > tokenizer=tokenizer,
> > file_path=input_text_path,
> > block_size=512,
> > )
> > ```
> > data_collator = DataCollatorForLanguageModeling(
> > tokenizer=tokenizer, mlm=True, mlm_probability=0.15
> > )
> > training_args = TrainingArguments(
> > output_dir=output_path,
> > overwrite_output_dir=True,
> > num_train_epochs=3,
> > per_gpu_train_batch_size=8,
> > save_steps=50_000,
> > save_total_limit=2,
> > )
> >
> > trainer = Trainer(
> > model=model,
> > args=training_args,
> > data_collator=data_collator,
> > train_dataset=dataset,
> > prediction_loss_only=True
> > )
> > trainer.train()
> > trainer.save_model(output_path)
> > ```
>
> Output:
>
> File "Fine_tuning_tokenizer.py", line 106, in
> main(sys.argv[1:])
> File "Fine_tuning_tokenizer.py", line 94, in main
> modelRoberta.fine_tune_LM(inputfile, outputDir,tokenizer_path=outputDir)
> File "/sentiment-embeddings/projeto-modularizar/src/uff/ic/mell/sentimentembedding/modelos/modelo_roberta.py", line 58, in fine_tune_LM
> trainer.train()
> File "/home/apps/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 549, in train
> tr_loss += self.training_step(model, inputs)
> File "/home/apps/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 762, in training_step
> return loss.item()
> RuntimeError: CUDA error: an illegal memory access was encountered
Hey @SergioBarretoJr ,I guess one of the causes is when the model encounters an unseen token. and no we cannot use DataCollatorForLanguageModeling for classification task @Chandrak1907. will have to loop through epochs for training and fine tuning.<|||||>Thks guys @raj5287 @Chandrak1907 ! I solved this problems! When I trained my BPE tokenizer, I defined my vocabulary size of 52.000 and I used RoBERTa config from hugging face that has a different vocabulary size. This caused this error.<|||||>@raj5287 , @SergioBarretoJr Can you share github link for your code? I could not find proper reference on huggingface.co. Thanks.<|||||>@SergioBarretoJr I am also facing the same issue, can you please help me?<|||||>> @SergioBarretoJr I am also facing the same issue, can you please help me?
@amandalmia14 Have you ever checked if you set vocab size equal in tokenizer and RoBERTa.config file?
def train_tokenizer(self,file_path,outDir):
# Initialize a tokenizer
tokenizer = ByteLevelBPETokenizer()
# Customize training
tokenizer.train(files=file_path, **vocab_size=52_000**, min_frequency=2, special_tokens=[
"<s>",
"<pad>",
"</s>",
"<unk>",
"<mask>",
])
self.tokenizer=tokenizer
tokenizer.save(outDir)
This was how I solver my issue. Can I help with something more?<|||||>@SergioBarretoJr Thanks for the snippet, I am able to train the custom language model with the help of [this](https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb). However, after creating the custom Roberta Model, I want to use it for creating a multi-label sentence classifier.
Can you help me with that snippet of code? Thanks |
transformers | 5,105 | closed | Is there a helper script to randomly mask spans of text for T5 pretraining? | Hello team,
If I have an input text like
```The cute dog walks in the park```
Is there a library within HuggingFace that can output the following for me by masking random spans of text, optionally taking an input masking probability?
```
masked input: The <extra_id_1> walks in <extra_id_2> park
target: <extra_id_1> cute dog <extra_id_2> the <extra_id_3> </s>
```
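(Not from the original request — a rough word-level sketch of one way to build such pairs; T5's official corruption is token-level, and the span length, masking probability, and sentinel numbering here are assumptions:)
```python
import random

def mask_spans(text, mask_prob=0.15, max_span_len=3, seed=0):
    # Word-level illustration only of the masked-input / target format above.
    rng = random.Random(seed)
    words = text.split()
    masked, target = [], []
    i, sentinel = 0, 0
    while i < len(words):
        if rng.random() < mask_prob:
            span = rng.randint(1, max_span_len)
            sentinel += 1
            masked.append(f"<extra_id_{sentinel}>")
            target.append(f"<extra_id_{sentinel}> " + " ".join(words[i:i + span]))
            i += span
        else:
            masked.append(words[i])
            i += 1
    target.append(f"<extra_id_{sentinel + 1}> </s>")
    return " ".join(masked), " ".join(target)

print(mask_spans("The cute dog walks in the park", mask_prob=0.3))
```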
This is for preparing data for T5 within my own codebase. | 06-18-2020 13:09:50 | 06-18-2020 13:09:50 | Not yet -> on the roadmap :-) <|||||>any updates here? |
transformers | 5,104 | closed | Trainer evaluation doesn't return eval loss for question-answering. | As mentioned in this issue:
"""
Just a note that I tried `python run_squad_trainer.py --model_name_or_path bert-base-uncased --model_type bert --data_dir squad --output_dir /tmp/debug_squad/ --overwrite_output_dir --do_train --do_eval --evaluate_during_training --logging_steps 100`.
For some reason I don't get any evaluation metric during training (I was expecting `loss` or `eval_loss`).
_Originally posted by @borisdayma in https://github.com/huggingface/transformers/pull/4829#issuecomment-644338712_
"""
I'm facing the same problem. I'm trying to train with Trainer class over a QA dataset different from SQUAD. Everything works fine, the model learns based on train loss. However, I haven't been able to get the eval loss. I hope these pieces of code show how I'm configuring Trainer. Can somebody tell me if I'm doing something wrong?
```
training_args = TrainingArguments(
output_dir="./models/prueba_2",
per_gpu_train_batch_size=16,
per_gpu_eval_batch_size=32,
num_train_epochs=10,
logging_steps=10,
save_steps=25,
do_eval=True,
evaluate_during_training=True,
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
data_collator=collator,
prediction_loss_only=True,
compute_metrics = EvalPrediction
)
```
I also tried without EvalPrediction in compute_metrics.
Thanks in advance !
| 06-18-2020 12:20:34 | 06-18-2020 12:20:34 | I solved it editing trainer.py, in this line https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L765 . I added start_positions and end_positions to that "possible labels names" list, and it worked. Review the bug, please.<|||||>Thanks @alexvaca0 !
cc @julien-c , @sgugger <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
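(A rough sketch of the kind of check described in the fix above — not the actual `trainer.py` code; the key names are taken from the comment:)
```python
def has_labels(inputs: dict) -> bool:
    # Treat QA span targets as labels too, so the prediction loop reports eval loss.
    label_keys = ("labels", "lm_labels", "masked_lm_labels",
                  "start_positions", "end_positions")  # last two added
    return any(inputs.get(k) is not None for k in label_keys)
```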
<|||||>> I solved it editing trainer.py, in this line https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L765 . I added start_positions and end_positions to that "possible labels names" list, and it worked. Review the bug, please.
I am having the same issue. Can you elaborate on how you solved this problem? I could not locate a possible label list and start/end positions?<|||||>#7191 should solve the problem described here.<|||||>@sgugger How can we calculate validation and evaluation loss for question answering finetuning pipeline (run_qa.py) |
transformers | 5,103 | closed | Tokenizers API developments | Some developments on the tokenizer's API to make it more flexible and easy to use for dynamic/uniform size batching.
- add `return_lengths` methods to both slow and fast tokenizer to have a column with input lengths (useful to sort them afterward)
- make `tokenizer.pad()` accept a list of dicts so it can be used as a `collate_fn` in a PyTorch data loader (see the usage sketch after this list).
- safety check that all the `kwargs` inputs to encoding methods are recognized (avoid silent errors). raise a warning if not, not an error for now (reduce breaking change behaviors).
- standardized arguments of the encoding methods in unified names (`return_attention_masks` => `return_attention_mask` and `return_special_tokens_masks` => `return_special_tokens_mask`). This is a slight breaking change but even the maintainers of the lib were actually making the confusion as well (which usually resulted in silent error hidden behind the above mentioned generic use of kwargs in encoding errors).
- use the `tokenizers` library `AddedToken` class to control more finely how added (and special) tokens are tokenized (do we tokenize them inside words, do we strip left and/or right white spaces around them). This class gives a simpler and cleaner behavior for `RobertaTokenizer`.
- change the way special tokens can be added to the tokenizers. We now have more separated paths:
* giving a string at tokenizer initialization or setting an attribute (e.g. `tokenizer.mask_token`) updates the relevant attribute in the tokenizer class but doesn't add tokens to the vocabulary. We do that because the initialization of the special tokens is usually done before the backend vocabulary is set up. This automatic and non-explicit addition of tokens can also be error-prone and a source of silent errors.
* the recommended way to add a token and use it as a special token is to use `add_special_tokens` which will store the attribute AND add the token to the vocabulary if needed and which returns the number of added tokens
* a new method `tokenizer.sanitize_special_tokens()` can be used to make sure all the special tokens are in the vocabulary and to automatically add them if it's not the case.
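A small usage sketch for the `tokenizer.pad()` / `collate_fn` point above (an illustration only — argument names follow the public API and may not match this PR exactly):
```python
from torch.utils.data import DataLoader
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
examples = [tokenizer.encode_plus("short text"),
            tokenizer.encode_plus("a somewhat longer piece of text")]

def collate_fn(features):
    # Dynamic padding to the longest example in each batch.
    return tokenizer.pad(features, padding=True, return_tensors="pt")

loader = DataLoader(examples, batch_size=2, collate_fn=collate_fn)
batch = next(iter(loader))  # padded input_ids / attention_mask tensors
```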
This PR should in particular fix cleanly the Roberta specific behavior discussed in:
- https://github.com/huggingface/transformers/pull/2778
- https://github.com/huggingface/transformers/issues/3788
Roberta now behaves like GPT2, i.e. without a prefix token. It can still be used in the masked-fill pipeline since we can control the behavior of the mask token with the parameters of `AddedToken`, and in particular keep a space after the mask token as required to use the pipeline correctly. | 06-18-2020 11:27:08 | 06-18-2020 11:27:08 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5103?src=pr&el=h1) Report
> Merging [#5103](https://codecov.io/gh/huggingface/transformers/pull/5103?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/417e492f1e832c0b93512600d3385aa4c8a887c9&el=desc) will **decrease** coverage by `0.89%`.
> The diff coverage is `87.26%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5103?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5103 +/- ##
==========================================
- Coverage 77.98% 77.09% -0.90%
==========================================
Files 138 138
Lines 23786 23836 +50
==========================================
- Hits 18550 18376 -174
- Misses 5236 5460 +224
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5103?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `91.48% <78.78%> (-3.34%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/5103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `91.97% <85.71%> (-0.63%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `90.86% <91.93%> (-0.21%)` | :arrow_down: |
| [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `89.95% <100.00%> (-0.51%)` | :arrow_down: |
| [src/transformers/tokenization\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `97.12% <100.00%> (+0.04%)` | :arrow_up: |
| [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `94.52% <100.00%> (ø)` | |
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <100.00%> (+0.13%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `19.92% <0.00%> (-75.00%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5103/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.86% <0.00%> (-0.15%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5103?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5103?src=pr&el=footer). Last update [417e492...ca291b5](https://codecov.io/gh/huggingface/transformers/pull/5103?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,102 | closed | Loading Fine Tuned BERT Sequence Model after Training | # 🐛 Bug
Loading Fine Tuned BERT Sequence Model after Training
## Information
Hi All,
I am facing the following issue while loading a pretrained BERT sequence model with my own data
RuntimeError: Error(s) in loading state_dict for DataParallel:
Missing key(s) in state_dict: "module.out.weight", "module.out.bias".
Unexpected key(s) in state_dict: "bert.embeddings.word_embeddings.weight", "bert.embeddings.position_embeddings.weight", "bert.embeddings.token_type_embeddings.weight", "bert.embeddings.LayerNorm.weight", "bert.embeddings.LayerNorm.bias", "bert.encoder.layer.0.attention.self.query.weight", "bert.encoder.layer.0.attention.self.query.bias", "bert.encoder.layer.0.attention.self.key.weight", "bert.encoder.layer.0.attention.self.key.bias", "bert.encoder.layer.0.attention.self.value.weight", "bert.encoder.layer.0.attention.self.value.bias", "bert.encoder.layer.0.attention.output.dense.weight", "bert.encoder.layer.0.attention.output.dense.bias", "bert.encoder.layer.0.attention.output.LayerNorm.weight", "bert.encoder.layer.0.attention.output.LayerNorm.bias", "bert.encoder.layer.0.intermediate.dense.weight", "bert.encoder.layer.0.intermediate.dense.bias", "bert.encoder.layer.0.output.dense.weight", "bert.encoder.layer.0.output.dense.bias", "bert.encoder.layer.0.output.LayerNorm.weight", "bert.encoder.layer.0.output.LayerNorm.bias", "bert.encoder.layer.1.attention.self.query.weight", "bert.encoder.layer.1.attention.self.query.bias", "bert.encoder.layer.1.attention.self.key.weight", "bert.encoder.layer.1.attention.self.key.bias", "bert.encoder.layer.1.attention.self.value.weight", "bert.encoder.layer.1.attention.self.value.bias", "bert.encoder.layer.1.attention.output.dense.weight", "bert.encoder.layer.1.attention.output.dense.bias", "bert.encoder.layer.1.attention.output.LayerNorm....
any idea about this error
Model I am using (Bert, XLNet ...):
BERT SequenceClassification
Language I am using the model on (English, Chinese ...):
English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
Not able to load the Fined Tuned BERT Model from saved model directory
MODEL = BERTBaseUncased()
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
MODEL = nn.DataParallel(MODEL)
MODEL.load_state_dict(torch.load(MODEL_PATH, map_location=device))
MODEL.to(DEVICE)
MODEL.eval()
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
my own task with my dataset
## To reproduce
Steps to reproduce the behavior:
1. Save the Fined Tuned BERT Sequence Classification Model
2. Create MODEL_PATH and config file Path from Saved Model Location
3. Run below code to test the Fined Model using Rest API to test the test data
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
import config
import torch
import flask
import time
from flask import Flask
from flask import request
#import functools
import torch.nn as nn
import joblib
from transformers import BertTokenizer,BertConfig,BertModel
app = Flask(__name__)
MODEL = None
MODEL_PATH = r'/content/model_save/pytorch_model.bin'
config_file = r'/content/model_save/config.json'
DEVICE = "cuda"
PREDICTION_DICT = dict()
memory = joblib.Memory("../input/", verbose=0)
class BERTBaseUncased(nn.Module):
def __init__(self):
super(BERTBaseUncased, self).__init__()
#self.bert = transformers.BertModel.from_pretrained(config.BERT_PATH)
#config = BertConfig.from_pretrained("bert-base-uncased", output_hidden_states=True)
config = BertConfig(config_file)
#self.bert = BertModel(config)
self.bert = BertTokenizer.from_pretrained("bert-base-uncased", config=config)
#self.bert = BertTokenizer.from_pretrained(r"/content/model_save/pytorch_model.bin", do_lower_case=False,encoding="utf-8")
self.bert_drop = nn.Dropout(0.3)
self.out = nn.Linear(768, 1)
def forward(self, ids, mask, token_type_ids):
_, o2 = self.bert(ids, attention_mask=mask, token_type_ids=token_type_ids)
bo = self.bert_drop(o2)
output = self.out(bo)
return output
def predict_from_cache(sentence):
if sentence in PREDICTION_DICT:
return PREDICTION_DICT[sentence]
else:
result = sentence_prediction(sentence)
PREDICTION_DICT[sentence] = result
return result
@memory.cache
def sentence_prediction(sentence):
tokenizer = config.TOKENIZER
max_len = config.MAX_LEN
review = str(sentence)
review = " ".join(review.split())
inputs = tokenizer.encode_plus(
review, None, add_special_tokens=True, max_length=max_len
)
ids = inputs["input_ids"]
mask = inputs["attention_mask"]
token_type_ids = inputs["token_type_ids"]
padding_length = max_len - len(ids)
ids = ids + ([0] * padding_length)
mask = mask + ([0] * padding_length)
token_type_ids = token_type_ids + ([0] * padding_length)
ids = torch.tensor(ids, dtype=torch.long).unsqueeze(0)
mask = torch.tensor(mask, dtype=torch.long).unsqueeze(0)
token_type_ids = torch.tensor(token_type_ids, dtype=torch.long).unsqueeze(0)
ids = ids.to(DEVICE, dtype=torch.long)
token_type_ids = token_type_ids.to(DEVICE, dtype=torch.long)
mask = mask.to(DEVICE, dtype=torch.long)
outputs = MODEL(ids=ids, mask=mask, token_type_ids=token_type_ids)
outputs = torch.sigmoid(outputs).cpu().detach().numpy()
return outputs[0][0]
@app.route("/predict")
def predict():
sentence = request.args.get("sentence")
start_time = time.time()
article_prediction = sentence_prediction(sentence)
response = {}
response["response"] = {
"sentence": str(article_prediction),
"time_taken": str(time.time() - start_time),
}
return flask.jsonify(response)
if __name__ == "__main__":
MODEL = BERTBaseUncased()
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
MODEL = nn.DataParallel(MODEL)
MODEL.load_state_dict(torch.load(MODEL_PATH, map_location=device))
MODEL.to(DEVICE)
MODEL.eval()
app.run()
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: GPU
- Python version: 3.6
- PyTorch version (GPU?): 1.5.0+cu101
- Tensorflow version (GPU?): 2.x
- Using GPU in script?:Yes
- Using distributed or parallel set-up in script?: Dont Know
| 06-18-2020 09:52:02 | 06-18-2020 09:52:02 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,101 | closed | OpenAIGPTDoubleHeadsModel Don't have the "labels" attributes as it is described in the documentation | # 🐛 Bug
## Information
Model I am using OpenAIGPTDoubleHeadsModel:
Language I am using the model on English:
The problem arises when using:
* The official example scripts:
I'm using transformer version : '2.11.0'
In the documentation i found that i can pass a "labels" attribute to the OpenAIGPTDoubleHeadsModel Model to be used in the loss:
[Documentation](https://huggingface.co/transformers/model_doc/gpt.html#openaigptdoubleheadsmodel)
But when I pass it, it is not recognized:
`model(input_ids=input_ids, token_type_ids=token_type_ids, position_ids=None, head_mask=None, mc_token_ids=mc_token_ids, labels=lm_labels, mc_labels=mc_labels)`
``TypeError Traceback (most recent call last)
<ipython-input-67-cc8f7864d8a7> in <module>()
----> 1 model(input_ids=input_ids, token_type_ids=token_type_ids, position_ids=None, head_mask=None, mc_token_ids = mc_token_ids, labels=lm_labels, mc_labels = mc_labels)
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
TypeError: forward() got an unexpected keyword argument 'labels' ``
## Question :
Is passing the labels through the GPT implementation deprecated? If yes, is there a way to pass it?
Thank you | 06-18-2020 09:36:47 | 06-18-2020 09:36:47 | Hi @ghsama `lm_labels` is changed to `labels` after 2.11.0. So if you are using version <=2.11.0 use `lm_labels` and if you're using master branch then use `labels`<|||||>Yes ! i just figure it out . Thank you @patil-suraj :D |
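(A quick illustration of the version difference described in that answer — dummy tensors, not from the thread:)
```python
import torch
from transformers import OpenAIGPTDoubleHeadsModel

model = OpenAIGPTDoubleHeadsModel.from_pretrained("openai-gpt")
input_ids = torch.zeros((1, 2, 5), dtype=torch.long)     # (batch, choices, seq) — dummy ids
mc_token_ids = torch.full((1, 2), 4, dtype=torch.long)   # index of the last token in each choice
mc_labels = torch.zeros(1, dtype=torch.long)

# transformers <= 2.11.0: the LM target is called `lm_labels`
outputs = model(input_ids, mc_token_ids=mc_token_ids,
                lm_labels=input_ids, mc_labels=mc_labels)

# on master / later releases the same argument is called `labels`:
# outputs = model(input_ids, mc_token_ids=mc_token_ids,
#                 labels=input_ids, mc_labels=mc_labels)
```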