repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 6,101 | closed | Use HFArgParser instead of Fire | 07-28-2020 17:43:18 | 07-28-2020 17:43:18 | added to my list<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>keepalive<|||||>I'm pretty happy with `fire`, closing. |
|
transformers | 6,100 | closed | [Fix] position_ids tests again | Yesterday, in my haste, I added authorized_missing_keys to `BertModel`, rather than `BertPretrainedModel`. Since position_ids are allowed to be missing for all Bert variants, we want the latter, not the former.
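For context, a rough sketch of the intended placement (the attribute name comes from the PR text; the exact pattern string is an assumption):
```python
from transformers import PreTrainedModel

class BertPreTrainedModel(PreTrainedModel):
    # On the base class, every Bert variant may load checkpoints that lack position_ids...
    authorized_missing_keys = [r"position_ids"]

class BertModel(BertPreTrainedModel):
    # ...rather than only on this one subclass, which is what yesterday's change did.
    pass
```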
I also improved the traceback for the failing test. | 07-28-2020 17:42:17 | 07-28-2020 17:42:17 | Failure is in test_tokenization_auto.py and is spurious. |
transformers | 6,099 | closed | [fix] add bart to LM_MAPPING | This fixes 2/5 failing slow tests in #6094 | 07-28-2020 17:38:03 | 07-28-2020 17:38:03 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6099?src=pr&el=h1) Report
> Merging [#6099](https://codecov.io/gh/huggingface/transformers/pull/6099?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/06834bc33255f5fb8fabb72c9ff114764b3c7ce5&el=desc) will **increase** coverage by `0.16%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6099?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6099 +/- ##
==========================================
+ Coverage 77.77% 77.93% +0.16%
==========================================
Files 146 146
Lines 26325 26325
==========================================
+ Hits 20474 20517 +43
+ Misses 5851 5808 -43
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6099?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6099/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `78.48% <ø> (ø)` | |
| [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6099/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.42% <0.00%> (-29.91%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6099/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.71% <0.00%> (-0.51%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6099/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6099?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6099?src=pr&el=footer). Last update [06834bc...1c7f573](https://codecov.io/gh/huggingface/transformers/pull/6099?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,098 | closed | Fix #6096: MBartTokenizer's mask token | add 3 regression tests. | 07-28-2020 16:35:53 | 07-28-2020 16:35:53 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6098?src=pr&el=h1) Report
> Merging [#6098](https://codecov.io/gh/huggingface/transformers/pull/6098?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/dafa296c952c08fca3686f1cf8f3a8f8eb116744&el=desc) will **decrease** coverage by `1.02%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/6098?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6098 +/- ##
==========================================
- Coverage 78.80% 77.78% -1.03%
==========================================
Files 146 146
Lines 26325 26326 +1
==========================================
- Hits 20746 20477 -269
- Misses 5579 5849 +270
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6098?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6098/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.77% <100.00%> (+0.06%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6098/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6098/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.26%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6098/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.48% <0.00%> (+0.28%)` | :arrow_up: |
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6098/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.11% <0.00%> (+0.28%)` | :arrow_up: |
| [src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6098/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `94.97% <0.00%> (+4.41%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6098/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6098?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6098?src=pr&el=footer). Last update [dafa296...30e83a7](https://codecov.io/gh/huggingface/transformers/pull/6098?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,097 | closed | Logs should not be hidden behind a logger.info | Currently, the logs printed when the global step is a multiple of `logging_steps` go through `logger.info`. If a user has not set their logging level to INFO, these logs are not shown, even though they set `logging_steps > 0`. This PR fixes that by putting back a print statement.
Closes https://github.com/huggingface/transformers/issues/5901
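As a standalone illustration of the problem (the values are made up; this is not the Trainer code itself):
```python
import logging

logger = logging.getLogger(__name__)
logs = {"loss": 0.42, "learning_rate": 5e-05, "step": 500}

# With the default WARNING level, this produces no visible output, so the metrics
# emitted every `logging_steps` silently disappear for users who never configure logging:
logger.info(logs)

# Falling back to a plain print keeps the metrics visible:
print(logs)
```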
| 07-28-2020 16:26:01 | 07-28-2020 16:26:01 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6097?src=pr&el=h1) Report
> Merging [#6097](https://codecov.io/gh/huggingface/transformers/pull/6097?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/dafa296c952c08fca3686f1cf8f3a8f8eb116744&el=desc) will **decrease** coverage by `0.31%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/6097?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6097 +/- ##
==========================================
- Coverage 78.80% 78.48% -0.32%
==========================================
Files 146 146
Lines 26325 26325
==========================================
- Hits 20746 20662 -84
- Misses 5579 5663 +84
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6097?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6097/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `41.00% <100.00%> (ø)` | |
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6097/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.27% <0.00%> (-74.92%)` | :arrow_down: |
| [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6097/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `71.21% <0.00%> (-12.88%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6097/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.71% <0.00%> (-1.76%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6097/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.02% <0.00%> (-1.29%)` | :arrow_down: |
| [src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6097/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `94.97% <0.00%> (+4.41%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6097/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6097?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6097?src=pr&el=footer). Last update [dafa296...2f0a5a6](https://codecov.io/gh/huggingface/transformers/pull/6097?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,096 | closed | mBART: incorrect <mask> token id | # 🐛 Bug
## Information
Model I am using: mBART
## To reproduce
```
from transformers import MBartTokenizer
tokenizer = MBartTokenizer.from_pretrained('facebook/mbart-large-cc25')
print(tokenizer.convert_tokens_to_ids(['<mask>', 'ar_AR']))
```
The output for the above code is `[250001, 250001]` - two different special tokens are mapped to the same id.
## Expected behavior
As far as I can tell, `<mask>` token should be mapped to id 250026.
I've checked [fairseq implementation](https://github.com/pytorch/fairseq/blob/master/fairseq/tasks/multilingual_denoising.py) and it seems that `<mask>` token is added after all the language codes, so it should be the last token in the vocab.
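A quick back-of-the-envelope check using the ids quoted in this report (the split between the sentencepiece vocab and the appended symbols is an assumption, not read from fairseq):
```python
sp_vocab_size = 250_001   # ids 0..250000 come from the sentencepiece model; ar_AR starts right after
num_lang_codes = 25       # mbart-large-cc25 appends 25 language codes
expected_mask_id = sp_vocab_size + num_lang_codes
print(expected_mask_id)   # 250026, i.e. the last token in the vocab, as argued above
```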
Currently, when I try to use mBART to denoise text with `<mask>` tokens, it mostly just ignores them, but if I replace mask ids with 250026, the model actually generates new text in place of `<mask>` tokens:
```
from transformers import MBartTokenizer, BartForConditionalGeneration
tokenizer = MBartTokenizer.from_pretrained('facebook/mbart-large-cc25')
model = BartForConditionalGeneration.from_pretrained('facebook/mbart-large-cc25')
text = 'I highly recommend <mask> - it is one of the best <mask> ever read!'
inputs = tokenizer.prepare_translation_batch([text], src_lang='en_XX')
outputs = model.generate(inputs['input_ids'], decoder_start_token_id=tokenizer.lang_code_to_id['en_XX'],
num_beams=5)
print(tokenizer.batch_decode(outputs)[0])
```
The output is:
```
en_XX<s> highly recommend - it is one of the best ever read!
```
Replacing mask ids:
```
where = (inputs['input_ids'] == 250001)
inputs['input_ids'][where] = 250026
outputs = model.generate(inputs['input_ids'], decoder_start_token_id=tokenizer.lang_code_to_id['en_XX'],
num_beams=5)
print(tokenizer.batch_decode(outputs)[0])
```
The output is:
```
en_XX<s> highly recommend this book - it is one of the best books I have ever read!
```
(In both cases, the model also skips the first input token when generating output, as discussed in #5755.)
I've also noticed that fairseq is using [language code tokens](https://github.com/pytorch/fairseq/blob/108bb2560b1ec01524ba723bc7c69186875afa0a/fairseq/tasks/multilingual_denoising.py#L62) of the form `[en_XX]` rather than just `en_XX`, which can lead to different tokenization if words like `en_XX` appear in the text, but that's a rather contrived case.
## Environment info
- `transformers` version: 3.0.2
@sshleifer
| 07-28-2020 14:25:21 | 07-28-2020 14:25:21 | great catch and incredibly detailed description, I'll fix this one! Thanks! |
transformers | 6,095 | closed | Add BERTweet and PhoBERT | I'd like to add BERTweet and PhoBERT to the Hugging Face transformers library.
Users can now use these models directly from transformers, e.g.:
tokenizer = BertweetTokenizer.from_pretrained("vinai/bertweet-base")
bertweet = BertweetModel.from_pretrained("vinai/bertweet-base") | 07-28-2020 14:08:16 | 07-28-2020 14:08:16 | |
transformers | 6,094 | closed | 5 Slow test failures | traceback: [here](https://github.com/huggingface/transformers/runs/916838508?check_suite_focus=true)
```bash
=========================== short test summary info ============================
FAILED tests/test_modeling_auto.py::AutoModelTest::test_model_for_pretraining_from_pretrained
FAILED tests/test_modeling_auto.py::AutoModelTest::test_model_from_pretrained
FAILED tests/test_modeling_bart.py::BartModelIntegrationTests::test_bart_base_mask_filling
FAILED tests/test_modeling_bart.py::BartModelIntegrationTests::test_bart_large_mask_filling
FAILED tests/test_modeling_common.py::ModelUtilsTest::test_model_from_pretrained
==== 5 failed, 1423 passed, 489 skipped, 384 warnings in 1609.33s (0:26:49) ====
```
I'm investigating | 07-28-2020 14:07:02 | 07-28-2020 14:07:02 | ALL FIXED YEEHAW |
transformers | 6,093 | closed | Fix #6092 | `BatchEncoding` objects are not instances of dictionaries, so we need the separate test. | 07-28-2020 13:32:53 | 07-28-2020 13:32:53 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6093?src=pr&el=h1) Report
> Merging [#6093](https://codecov.io/gh/huggingface/transformers/pull/6093?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/54f49af4aef2b19aaf00ffa400ff6c1e4292e9dd&el=desc) will **decrease** coverage by `1.20%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/6093?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6093 +/- ##
==========================================
- Coverage 78.62% 77.42% -1.21%
==========================================
Files 146 146
Lines 26324 26325 +1
==========================================
- Hits 20698 20381 -317
- Misses 5626 5944 +318
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6093?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6093/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `97.41% <100.00%> (+0.02%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6093/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6093/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.21% <0.00%> (-1.01%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6093/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6093?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6093?src=pr&el=footer). Last update [54f49af...6fe884d](https://codecov.io/gh/huggingface/transformers/pull/6093?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,092 | closed | I don't know what the Trainer's Dataset is | # ❓ Questions & Help
## Details


I thought my custom dataset was the problem, but I don't know what the dataset should return or what kind of dataset the Trainer expects.
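For reference, a minimal sketch of the kind of dataset the Trainer accepts: a torch `Dataset` whose items are dicts of tensors keyed by the model's forward-argument names (the key names below are illustrative, not from this issue):
```python
import torch
from torch.utils.data import Dataset

class MyDataset(Dataset):
    def __init__(self, encodings, labels):
        self.encodings = encodings  # e.g. tokenizer(texts, truncation=True, padding=True)
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        # each item is a dict of tensors; the data collator stacks these into a batch
        item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item
```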
 | 07-28-2020 13:11:48 | 07-28-2020 13:11:48 | Can reproduce, it should be fixed by #6093. Thanks for flagging! |
transformers | 6,091 | closed | Fix local_files_only for TF | The `local_files_only` flag was not working for TF. It is meant to be used as follows:
```py
from transformers import TFBertModel
model = TFBertModel.from_pretrained('bert-base-cased', local_files_only=True)
```
Setting it to `True` for any TF model would result in the following error:
```
Traceback (most recent call last):
File "/Users/jik/Library/Application Support/JetBrains/PyCharm2020.1/scratches/scratch_1.py", line 7, in <module>
model = TFBertModel.from_pretrained('bert-base-cased', local_files_only=True)
File "/Users/jik/Workspaces/python/transformers/src/transformers/modeling_tf_utils.py", line 578, in from_pretrained
local_files_only=local_files_only,
File "/Users/jik/Workspaces/python/transformers/src/transformers/file_utils.py", line 663, in cached_path
local_files_only=local_files_only,
File "/Users/jik/Workspaces/python/transformers/src/transformers/file_utils.py", line 801, in get_from_cache
"Cannot find the requested files in the cached path and outgoing traffic has been"
ValueError: Cannot find the requested files in the cached path and outgoing traffic has been disabled. To enable model look-ups and downloads online, set 'local_files_only' to False.
```
This is because the `file_utils.get_from_cache` method obtains the filename from the `url_to_filename` method:
https://github.com/huggingface/transformers/blob/1246b20f6d81bcd949078d26cf5ab3d0f3acccc6/src/transformers/file_utils.py#L777
The issue is that this method adds `.h5` to TF models:
https://github.com/huggingface/transformers/blob/1246b20f6d81bcd949078d26cf5ab3d0f3acccc6/src/transformers/file_utils.py#L585-L586
This works when the file is already built using `{filename}.{etag}`, resulting in `{filename}.{etag}.h5`. However, since the etag is `None` with `local_files_only=True`, this results in `{filename}.h5`.
The method tries to find the saved files using the filename followed by `.*`:
https://github.com/huggingface/transformers/blob/1246b20f6d81bcd949078d26cf5ab3d0f3acccc6/src/transformers/file_utils.py#L788-L792
This doesn't work since it's looking for `{filename}.h5.*` which doesn't exist. It should instead be looking for `{filename}.*`, which is what it's doing now.
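A toy reproduction of the lookup mismatch (the file names and etag are made up; the real cache uses a hash of the URL, but the pattern problem is the same):
```python
import glob
import os
import tempfile

cache_dir = tempfile.mkdtemp()
# what a cached TF weight file looks like: {filename}.{etag}.h5
open(os.path.join(cache_dir, "abc123.someetag.h5"), "w").close()

print(glob.glob(os.path.join(cache_dir, "abc123.h5") + ".*"))  # before the fix: [] -> cache miss -> ValueError
print(glob.glob(os.path.join(cache_dir, "abc123") + ".*"))     # after the fix: finds the cached file
```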
Fix https://github.com/huggingface/transformers/issues/5016 | 07-28-2020 10:47:35 | 07-28-2020 10:47:35 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Thanks, @LysandreJik . I manually applied this fix to a local installation of `transformers==3.3.0` and can confirm that this fix resolves #5016 . Can this please be merged?<|||||>This still doesn't work in 3.3.0. I am getting the same issue on running without internet with transformers==3.3.0<|||||>You're perfectly correct! This was merged the 1st of October, and 3.3.0 was released in September. Please install a more recent version to get that fix (version v3.4.0 being the first release which contained that fix).<|||||>Thanks, I'll check. |
transformers | 6,090 | closed | customize special tokens | # ❓ Questions & Help
## Details
I am trying to add more special tokens in order to train new language models without modifying the model architecture, simply by modifying the input data, for example adding more separators to encode more complex structural information.
Is there a simple, clear, consistent way to add/customize special tokens?
1. The function `add_special_tokens` can only add token strings to the token attributes specified in `SPECIAL_TOKENS_ATTRIBUTES`.
2. I tried to extend this list and `add_special_tokens` works, but calling
`tokenizer.get_special_tokens_mask` reports that _XXX does not exist, where XXX is the new customized special token attribute.
Finally I found a solution: extending `tokenizer._additional_special_tokens` directly makes `tokenizer.get_special_tokens_mask` work, but I don't know why this works or whether it has any other side effects.
Properties like `special_tokens_map` are also confusing: it looks like a @property implementation, which is not modifiable, and I don't know where it is used or what effect it has.
I spent a lot of time reading the source code but still lack an overview of the special-token architecture in tokenizers. | 07-28-2020 10:22:25 | 07-28-2020 10:22:25 | For special tokens that don't correspond to one of the attributes of tokenizers, you should pass them to `add_special_tokens` under the `additional_special_tokens` key:
```
tokenizer.add_special_tokens({"additional_special_tokens": ["TK1", "TK2", ...]})
```
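For reference, a fuller sketch of the same approach (the token names here are made up; if a model is involved, remember to resize its embeddings afterwards):
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
tokenizer.add_special_tokens({"additional_special_tokens": ["[SEP1]", "[SEP2]"]})

print(tokenizer.additional_special_tokens)                                      # ['[SEP1]', '[SEP2]']
print(tokenizer.convert_tokens_to_ids("[SEP1]") in tokenizer.all_special_ids)   # True
print(tokenizer.tokenize("hello [SEP1] world"))                                 # the new token is kept whole
# If you fine-tune a model with these tokens: model.resize_token_embeddings(len(tokenizer))
```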
You can then change `tokenizer.additional_special_tokens` directly if you need to add or remove some of those tokens.<|||||>@sgugger Thank you very much. |
transformers | 6,089 | closed | Added capability to quantize a model while exporting through ONNX. | Add quantization support as part of our collaboration with the ONNX team.
Quantization is available through the new method `quantize`, which takes the path to the initial ONNX model and performs the conversion.
From a CLI point of view, adding `--quantize` allows the quantized model to be exported seamlessly.
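A rough usage sketch (the module path and the `convert` signature are assumptions based on the existing `convert_graph_to_onnx` script, not taken from this PR's diff):
```python
from pathlib import Path
from transformers.convert_graph_to_onnx import convert, quantize

onnx_path = Path("onnx/bert-base-cased.onnx")
convert(framework="pt", model="bert-base-cased", output=onnx_path, opset=11)

# quantize() takes the path to the exported ONNX model and writes a quantized copy alongside it
quantized_path = quantize(onnx_path)
print(quantized_path)
```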
Signed-off-by: Morgan Funtowicz <[email protected]> | 07-28-2020 09:17:46 | 07-28-2020 09:17:46 | cc: @tianleiwu @yufenglee from Microsoft<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6089?src=pr&el=h1) Report
> Merging [#6089](https://codecov.io/gh/huggingface/transformers/pull/6089?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/640550fc7a1e311915ead1bcca6dacea0c503faf&el=desc) will **increase** coverage by `0.68%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6089?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6089 +/- ##
==========================================
+ Coverage 77.85% 78.53% +0.68%
==========================================
Files 146 146
Lines 26326 26326
==========================================
+ Hits 20496 20676 +180
+ Misses 5830 5650 -180
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6089?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6089/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `45.98% <0.00%> (-44.92%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6089/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.98% <0.00%> (-0.98%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6089/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.20% <0.00%> (-0.29%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6089/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `80.70% <0.00%> (+0.75%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6089/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6089?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6089?src=pr&el=footer). Last update [640550f...16ecccc](https://codecov.io/gh/huggingface/transformers/pull/6089?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,088 | closed | Finetuning German BERT for QA on biomedical domain | Hello there and thank you very much for this wonderful work. I am relatively new to this field, so please bear with my amateur question. I want to perform question-answering on a German Biomedical text. From what I understand up to now, I need to fine-tune German BERT on biomedical QA datasets. Is there any script/pipeline that I should be using for this?
I have also posted on Stack Overflow and in the Hugging Face forum, but to no avail so far.
Thank you very much in advance. | 07-28-2020 09:09:32 | 07-28-2020 09:09:32 | This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 6,087 | closed | fixed typos, added example question. | Added a new example question for the widget. Fixed some typos in the description. | 07-28-2020 08:27:04 | 07-28-2020 08:27:04 | |
transformers | 6,086 | closed | Replace mecab-python3 with fugashi for Japanese tokenization | This replaces mecab-python3 with fugashi for Japanese tokenization. I am
the maintainer of both projects.
Both projects are MeCab wrappers, so the underlying C++ code is the
same. fugashi is the newer wrapper and doesn't use SWIG, so for basic
use of the MeCab API it's easier to use.
This code ensures the use of a version of ipadic installed via pip,
which should make versioning and tracking down issues easier.
fugashi has wheels for Windows, OSX, and Linux, which will help with
issues with installing old versions of mecab-python3 on Windows.
Compared to mecab-python3, because fugashi doesn't use SWIG, it doesn't
require a C++ runtime to be installed on Windows.
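For anyone curious, basic fugashi usage looks like this (a standalone example, not taken from this PR; it assumes `pip install fugashi unidic-lite`):
```python
import fugashi

tagger = fugashi.Tagger()  # picks up unidic-lite if it is installed
text = "吾輩は猫である"
print([word.surface for word in tagger(text)])  # e.g. ['吾輩', 'は', '猫', 'で', 'ある']
```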
In adding this change I removed some code dealing with `cursor`,
`token_start`, and `token_end` variables. These variables didn't seem to
be used for anything; it is unclear to me why they were there.
I ran the tests and they passed. For reference, since I had trouble figuring it out, this is needed to run the tests:
RUN_CUSTOM_TOKENIZERS=yes RUN_SLOW=1 pytest -rs tests/test_tokenization_bert_japanese.py
This is a followup to #5375 .
It's not in this PR, but because installing MeCab separately is not required, it might be a good idea to have the docs changed to point directly to fugashi instead of MeCab. | 07-28-2020 07:21:25 | 07-28-2020 07:21:25 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6086?src=pr&el=h1) Report
> Merging [#6086](https://codecov.io/gh/huggingface/transformers/pull/6086?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/91cb95461e438dc57555c4f57f8ce95a56328036&el=desc) will **increase** coverage by `1.14%`.
> The diff coverage is `0.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/6086?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6086 +/- ##
==========================================
+ Coverage 78.35% 79.50% +1.14%
==========================================
Files 146 146
Lines 26454 26450 -4
==========================================
+ Hits 20729 21030 +301
+ Misses 5725 5420 -305
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6086?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_bert\_japanese.py](https://codecov.io/gh/huggingface/transformers/pull/6086/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydF9qYXBhbmVzZS5weQ==) | `32.05% <0.00%> (+1.56%)` | :arrow_up: |
| [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6086/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `60.56% <0.00%> (-35.22%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6086/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.42% <0.00%> (-29.91%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6086/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.45% <0.00%> (-2.76%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6086/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6086?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6086?src=pr&el=footer). Last update [91cb954...3f97763](https://codecov.io/gh/huggingface/transformers/pull/6086?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>My pleasure. It'd be great if you could edit the docs 👍 I think I gave edit permissions so you should be able to push normally.<|||||>Awesome, this has been a small but sore thorn in some issues posted here. Thanks a lot @polm!<|||||>If you ever have any other issues with it please feel free to tag me any time. |
transformers | 6,085 | closed | tf.saved_model.save does not work on the TFElectra* series | # 🐛 Bug
## Information
Model I am using: TFElectraModel
Language I am using the model on: English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```python
import tensorflow as tf
from pathlib import Path
from transformers import TFAutoModel
dump_dir = Path("test_electra")
if not dump_dir.exists():
dump_dir.mkdir(parents=True)
model = TFAutoModel.from_pretrained("google/electra-base-discriminator")
tf.saved_model.save(model, export_dir=str(dump_dir))
```
```bash
Traceback (most recent call last):
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3343, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-2-a8a019d74543>", line 11, in <module>
tf.saved_model.save(model, export_dir=str(dump_dir))
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/saved_model/save.py", line 951, in save
obj, export_dir, signatures, options, meta_graph_def)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/saved_model/save.py", line 1008, in _build_meta_graph
checkpoint_graph_view)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/saved_model/signature_serialization.py", line 75, in find_function_to_export
functions = saveable_view.list_functions(saveable_view.root)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/saved_model/save.py", line 143, in list_functions
self._serialization_cache)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 1656, in _list_functions_for_serialization
Model, self)._list_functions_for_serialization(serialization_cache)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 2750, in _list_functions_for_serialization
.list_functions_for_serialization(serialization_cache))
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/base_serialization.py", line 87, in list_functions_for_serialization
fns = self.functions_to_serialize(serialization_cache)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/layer_serialization.py", line 77, in functions_to_serialize
serialization_cache).functions_to_serialize)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/layer_serialization.py", line 92, in _get_serialized_attributes
serialization_cache)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/model_serialization.py", line 53, in _get_serialized_attributes_internal
serialization_cache))
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/layer_serialization.py", line 101, in _get_serialized_attributes_internal
functions = save_impl.wrap_layer_functions(self.obj, serialization_cache)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 153, in wrap_layer_functions
original_fns = _replace_child_layer_functions(layer, serialization_cache)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 272, in _replace_child_layer_functions
serialization_cache).functions)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/layer_serialization.py", line 92, in _get_serialized_attributes
serialization_cache)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/model_serialization.py", line 53, in _get_serialized_attributes_internal
serialization_cache))
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/layer_serialization.py", line 101, in _get_serialized_attributes_internal
functions = save_impl.wrap_layer_functions(self.obj, serialization_cache)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 163, in wrap_layer_functions
'{}_layer_call_and_return_conditional_losses'.format(layer.name))
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 503, in add_function
self.add_trace(*self._input_signature)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 418, in add_trace
trace_with_training(True)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 416, in trace_with_training
fn.get_concrete_function(*args, **kwargs)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 547, in get_concrete_function
return super(LayerCall, self).get_concrete_function(*args, **kwargs)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 959, in get_concrete_function
concrete = self._get_concrete_function_garbage_collected(*args, **kwargs)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 865, in _get_concrete_function_garbage_collected
self._initialize(args, kwargs, add_initializers_to=initializers)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 506, in _initialize
*args, **kwds))
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 2446, in _get_concrete_function_internal_garbage_collected
graph_function, _, _ = self._maybe_define_function(args, kwargs)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 2777, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 2667, in _create_graph_function
capture_by_value=self._capture_by_value),
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py", line 981, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 441, in wrapped_fn
return weak_wrapped_fn().__wrapped__(*args, **kwds)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 524, in wrapper
ret = method(*args, **kwargs)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/utils.py", line 170, in wrap_with_training_arg
lambda: replace_training_and_call(False))
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/utils/tf_utils.py", line 65, in smart_cond
pred, true_fn=true_fn, false_fn=false_fn, name=name)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/framework/smart_cond.py", line 54, in smart_cond
return true_fn()
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/utils.py", line 169, in <lambda>
lambda: replace_training_and_call(True),
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/utils.py", line 165, in replace_training_and_call
return wrapped_call(*args, **kwargs)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 566, in call_and_return_conditional_losses
return layer_call(inputs, *args, **kwargs), layer.get_losses_for(inputs)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/transformers/modeling_tf_electra.py", line 292, in call
hidden_states = self.embeddings([input_ids, position_ids, token_type_ids, inputs_embeds], training=training)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 968, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/utils.py", line 71, in return_outputs_and_add_losses
outputs, losses = fn(inputs, *args, **kwargs)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/utils.py", line 170, in wrap_with_training_arg
lambda: replace_training_and_call(False))
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/utils/tf_utils.py", line 65, in smart_cond
pred, true_fn=true_fn, false_fn=false_fn, name=name)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/framework/smart_cond.py", line 54, in smart_cond
return true_fn()
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/utils.py", line 169, in <lambda>
lambda: replace_training_and_call(True),
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/utils.py", line 165, in replace_training_and_call
return wrapped_call(*args, **kwargs)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 541, in __call__
self.call_collection.add_trace(*args, **kwargs)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 418, in add_trace
trace_with_training(True)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 416, in trace_with_training
fn.get_concrete_function(*args, **kwargs)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 547, in get_concrete_function
return super(LayerCall, self).get_concrete_function(*args, **kwargs)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 959, in get_concrete_function
concrete = self._get_concrete_function_garbage_collected(*args, **kwargs)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 865, in _get_concrete_function_garbage_collected
self._initialize(args, kwargs, add_initializers_to=initializers)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 506, in _initialize
*args, **kwds))
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 2446, in _get_concrete_function_internal_garbage_collected
graph_function, _, _ = self._maybe_define_function(args, kwargs)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 2777, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 2667, in _create_graph_function
capture_by_value=self._capture_by_value),
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py", line 981, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 441, in wrapped_fn
return weak_wrapped_fn().__wrapped__(*args, **kwds)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 524, in wrapper
ret = method(*args, **kwargs)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/utils.py", line 170, in wrap_with_training_arg
lambda: replace_training_and_call(False))
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/utils/tf_utils.py", line 65, in smart_cond
pred, true_fn=true_fn, false_fn=false_fn, name=name)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/framework/smart_cond.py", line 54, in smart_cond
return true_fn()
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/utils.py", line 169, in <lambda>
lambda: replace_training_and_call(True),
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/utils.py", line 165, in replace_training_and_call
return wrapped_call(*args, **kwargs)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 566, in call_and_return_conditional_losses
return layer_call(inputs, *args, **kwargs), layer.get_losses_for(inputs)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 1627, in get_losses_for
reachable = tf_utils.get_reachable_from_inputs(inputs, losses)
File "/Users/nick/.pyenv/versions/conversion/lib/python3.7/site-packages/tensorflow/python/keras/utils/tf_utils.py", line 140, in get_reachable_from_inputs
raise TypeError('Expected Operation, Variable, or Tensor, got ' + str(x))
TypeError: Expected Operation, Variable, or Tensor, got None
```
## Expected behavior
```python
import tensorflow as tf
from pathlib import Path
from transformers import TFAutoModel
dump_dir = Path("test_distilbert")
if not dump_dir.exists():
dump_dir.mkdir(parents=True)
model = TFAutoModel.from_pretrained("distilbert-base-uncased")
tf.saved_model.save(model, export_dir=str(dump_dir))
```
```bash
WARNING:tensorflow:Skipping full serialization of Keras layer <tensorflow.python.keras.layers.core.Dropout object at 0x16538b0d0>, because it is not built.
WARNING:tensorflow:Skipping full serialization of Keras layer <tensorflow.python.keras.layers.core.Dropout object at 0x165347250>, because it is not built.
WARNING:tensorflow:Skipping full serialization of Keras layer <tensorflow.python.keras.layers.core.Dropout object at 0x16531c2d0>, because it is not built.
WARNING:tensorflow:Skipping full serialization of Keras layer <tensorflow.python.keras.layers.core.Dropout object at 0x165e22f90>, because it is not built.
WARNING:tensorflow:Skipping full serialization of Keras layer <tensorflow.python.keras.layers.core.Dropout object at 0x16528bd90>, because it is not built.
WARNING:tensorflow:Skipping full serialization of Keras layer <tensorflow.python.keras.layers.core.Dropout object at 0x16526d850>, because it is not built.
INFO:tensorflow:Assets written to: test_distilbert/assets
```
## Environment info
- `transformers` version: 3.0.2
- Platform: Darwin-19.6.0-x86_64-i386-64bit
- Python version: 3.7.7
- PyTorch version (GPU?): 1.5.1 (False)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
| 07-28-2020 06:27:43 | 07-28-2020 06:27:43 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 6,084 | closed | ValueError raised when loading Flaubert from pretrained with Transformers >= 3.0.0 | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): **Flaubert**
Language I am using the model on (English, Chinese ...): **French**
The problem arises when using: my own modified scripts
## To reproduce
**Just load Flaubert model with from_pretrained:**
```
MODEL = "flaubert/flaubert_large_cased"
transformer_layer = TFAutoModel.from_pretrained(MODEL, from_pt=True)
```
**Then a ValueError is raised:**
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-22-4235ae313fbd> in <module>()
1 MODEL = "flaubert/flaubert_large_cased"
----> 2 transformer_layer = TFAutoModel.from_pretrained(MODEL, from_pt=True)
/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
421 for config_class, model_class in TF_MODEL_MAPPING.items():
422 if isinstance(config, config_class):
--> 423 return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
424 raise ValueError(
425 "Unrecognized configuration class {} for this kind of TFAutoModel: {}.\n"
/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
479 if from_pt:
480 # Load from a PyTorch checkpoint
--> 481 return load_pytorch_checkpoint_in_tf2_model(model, resolved_archive_file, allow_missing_keys=True)
482
483 model(model.dummy_inputs, training=False) # build the network with dummy inputs
/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_pytorch_utils.py in load_pytorch_checkpoint_in_tf2_model(tf_model, pytorch_checkpoint_path, tf_inputs, allow_missing_keys)
91
92 return load_pytorch_weights_in_tf2_model(
---> 93 tf_model, pt_state_dict, tf_inputs=tf_inputs, allow_missing_keys=allow_missing_keys
94 )
95
/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_pytorch_utils.py in load_pytorch_weights_in_tf2_model(tf_model, pt_state_dict, tf_inputs, allow_missing_keys)
123
124 if tf_inputs is not None:
--> 125 tf_model(tf_inputs, training=False) # Make sure model is built
126
127 # Adapt state dict - TODO remove this and update the AWS weights files instead
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
966 with base_layer_utils.autocast_context_manager(
967 self._compute_dtype):
--> 968 outputs = self.call(cast_inputs, *args, **kwargs)
969 self._handle_activity_regularization(inputs, outputs)
970 self._set_mask_metadata(inputs, outputs, input_masks)
/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_xlm.py in call(self, inputs, **kwargs)
635 heads.
636 """
--> 637 outputs = self.transformer(inputs, **kwargs)
638 return outputs
639
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
966 with base_layer_utils.autocast_context_manager(
967 self._compute_dtype):
--> 968 outputs = self.call(cast_inputs, *args, **kwargs)
969 self._handle_activity_regularization(inputs, outputs)
970 self._set_mask_metadata(inputs, outputs, input_masks)
/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_flaubert.py in call(self, inputs, attention_mask, langs, token_type_ids, position_ids, lengths, cache, head_mask, inputs_embeds, training, output_attentions, output_hidden_states)
267 tensor_normalized = self.layer_norm1[i](tensor)
268 attn_outputs = self.attentions[i](
--> 269 [tensor_normalized, attn_mask, None, cache, head_mask[i]], training=training
270 )
271 attn = attn_outputs[0]
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
966 with base_layer_utils.autocast_context_manager(
967 self._compute_dtype):
--> 968 outputs = self.call(cast_inputs, *args, **kwargs)
969 self._handle_activity_regularization(inputs, outputs)
970 self._set_mask_metadata(inputs, outputs, input_masks)
/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_xlm.py in call(self, inputs, training)
139 Self-attention (if kv is None) or attention over source sentence (provided by kv).
140 """
--> 141 input, mask, kv, cache, head_mask, output_attentions = inputs
142 # Input is (bs, qlen, dim)
143 # Mask is (bs, klen) (non-causal) or (bs, klen, klen)
ValueError: not enough values to unpack (expected 6, got 5)
```
## Expected behavior
Load the model successfully.
- It can be achieved with Transformers version <= 2.11.0
- I also tried CamemBERT and BERT-like models in other languages (tr, ru, es, pt, it); none of them have this problem.
## Environment info
- `transformers` version: **>=3.0.0 ( 3.0.0, 3.0.1, 3.0.2 all have this problem.)**
(I tried this on Google Colab with both GPU and TPU; the environment does not matter here, but I still provide the GPU one.)
- Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.1+cu101 (True)
- Tensorflow version (GPU?): 2.2.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| 07-28-2020 06:17:54 | 07-28-2020 06:17:54 | Hi @wzmJimmy,
Did you get any solution for this problem? I am also facing the same problem. Please let me know if you are able to make it run. Thanks.<|||||>Well, my solution is inside the expected behavior part. Downgrade transformers to version lower than 3.0.0 and it just works. I post the bug report just to ask what's wrong and how to fix it in the newer version. |
transformers | 6,083 | closed | Error when using np.where() during squad tokenization | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: squad1.1
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: SQUaD task
* [ ] my own task or dataset: (give details below)
## To reproduce
I'm confused about this line:
https://github.com/huggingface/transformers/blob/896300177bf9f35feac4698370212a80a5ab6138/src/transformers/data/processors/squad.py#L230
Should this be
```
pad_token_indices = np.where(np.array(span["input_ids"]) == tokenizer.pad_token_id)
```
Because "p_mask: mask with 1 for token than cannot be in the answer (0 for token which can be in an answer)", the padding positions should not be allowed to be answers.
Steps to reproduce the behavior:
```python
import numpy as np

pad_token_id = 0  # tokenizer.pad_token_id for BERT
span = {"input_ids": [1, 1, 3, 0, 0, 0]}

np.where(span["input_ids"] == pad_token_id)
# returns (array([], dtype=int64),) because a plain Python list compared to an int is just False
np.where(np.array(span["input_ids"]) == pad_token_id)
# returns (array([3, 4, 5]),)
```
## Expected behavior
As described above.
## Environment info
- `transformers` version: 7a68d401388bc68f10dfeb591709352736a6c0b6
- Platform:
- Python version: python3.6.8
- PyTorch version (GPU?):1.5.0 CPU
- Tensorflow version (GPU?):
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?:NO
| 07-28-2020 03:11:43 | 07-28-2020 03:11:43 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 6,082 | closed | 🐛 Inconsistencies between BartTokenizer and BartTokenizerFast | # 🐛 Bug
## Description
It's possible to use the argument `add_prefix_space` with `BartTokenizer` :
```
from transformers import BartTokenizer
t = BartTokenizer.from_pretrained("facebook/bart-large")
x = t("This is an example.", add_prefix_space=True)
```
But when doing the same with `BartTokenizerFast` :
```
from transformers import BartTokenizerFast
t = BartTokenizerFast.from_pretrained("facebook/bart-large")
x = t("This is an example.", add_prefix_space=True)
```
It throws the following error :
```
ValueError: Keyword arguments {'add_prefix_space': True} not recognized.
```
## To reproduce
[Colab notebook](https://colab.research.google.com/drive/1f0W2llsJfVIkXsYk0XK1C3oiy2xSqYOx?usp=sharing)
## Work-around
It seems working if the argument is specified in the constructor :
```
from transformers import BartTokenizerFast
t = BartTokenizerFast.from_pretrained("facebook/bart-large", add_prefix_space=True)
x = t("This is an example.")
```
@sshleifer | 07-28-2020 02:21:15 | 07-28-2020 02:21:15 | Seems like this issue is for all fast tokenizers, this [line](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils_fast.py#L312) directly raises that error when kwargs are passed<|||||>Can I close? Seems like your workaround is solid @Colanim <|||||>If it's the expected behavior, sure go ahead ^^
I opened because I thought `Tokenizer` and `TokenizerFast` were expected to have the same behaviors.<|||||>Fair. I think there is a medium-term plan to deprecate `add_prefix_space`, so at the moment it is left inconsistent and in the future it will be deleted from both. |
transformers | 6,081 | closed | [s2s] Delete useless method, log tokens_per_batch | - trimming is already performed in `collate_fn`, there is no need to call it again.
- val_summ_len can be called gen_len so that it makes sense for a translation model.
- new metric: tpb = how many non-pad tokens per batch. Useful for debugging/verifying speedups (a rough sketch of one way to compute this is shown below).
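For context, a tokens-per-batch number can be computed directly from a padded batch. This is only an illustration, not necessarily the exact code in the diff, and it assumes `pad_token_id` comes from the tokenizer:
```python
import torch

def tokens_per_batch(input_ids: torch.Tensor, pad_token_id: int) -> int:
    # count every position that is not padding
    return int(input_ids.ne(pad_token_id).sum())
```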
cc @patil-suraj | 07-28-2020 00:46:46 | 07-28-2020 00:46:46 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6081?src=pr&el=h1) Report
> Merging [#6081](https://codecov.io/gh/huggingface/transformers/pull/6081?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/dc4755c6d59238ffea4843d06610a29c522257fb&el=desc) will **increase** coverage by `0.07%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6081?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6081 +/- ##
==========================================
+ Coverage 78.13% 78.20% +0.07%
==========================================
Files 146 146
Lines 26325 26318 -7
==========================================
+ Hits 20569 20582 +13
+ Misses 5756 5736 -20
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6081?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6081/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.52% <0.00%> (-2.09%)` | :arrow_down: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6081/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.37% <0.00%> (-1.21%)` | :arrow_down: |
| [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6081/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.14% <0.00%> (-0.76%)` | :arrow_down: |
| [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6081/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.58% <0.00%> (-0.13%)` | :arrow_down: |
| [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6081/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.77% <0.00%> (ø)` | |
| [src/transformers/modeling\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6081/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `89.45% <0.00%> (ø)` | |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6081/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (ø)` | |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6081/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+0.25%)` | :arrow_up: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6081/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.48% <0.00%> (+0.28%)` | :arrow_up: |
| [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6081/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `99.09% <0.00%> (+1.68%)` | :arrow_up: |
| ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/6081/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6081?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6081?src=pr&el=footer). Last update [dc4755c...51c962b](https://codecov.io/gh/huggingface/transformers/pull/6081?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>fixed typo! |
transformers | 6,080 | closed | Proposal: seq2seq tokenizers expose a prepare_seq2seq_batch method | This has been useful in `MarianTokenizer.prepare_translation_batch` and `MBartTokenizer.prepare_translation_batch`.
and would also be a useful addition for `T5Tokenizer`, and `BartTokenizer`.
@LysandreJik mentioned this in a PR so I wanted to see what others thought before starting work.
@patrickvonplaten @sgugger @mfuntowicz @patil-suraj ?
Proposed signature (I would not add this to PretrainedModel; just rename the Marian/MBart implems and add implems for Bart and T5. When pegasus and blenderbot get merged, I would also add implems there, and consider a helper function.)
```python
def prepare_seq2seq_batch(
self,
src_texts: List[str],
tgt_texts: Optional[List[str]] = None,
max_length: Optional[int] = None,
max_target_length: Optional[int] = None,
padding: str = "longest",
return_tensors: Optional[str] = None,  # added so the tokenizer calls below are self-consistent
**kwargs,
) -> BatchEncoding:
if max_length is None:
max_length = self.max_len
if max_target_length is None: # default to max_length
max_target_length = max_length  # no need to specify twice for translation
model_inputs: BatchEncoding = self(
src_texts,
add_special_tokens=True,
return_tensors=return_tensors,
max_length=max_length,
padding=padding,
truncation=True,
**kwargs,
)
if tgt_texts is None:
return model_inputs
# Here classes can implement logic to put decoder_start_token_id at the front if they want,
# or deal with language codes for multilingual models.
decoder_inputs: BatchEncoding = self(
tgt_texts,
add_special_tokens=True,
return_tensors=return_tensors,
padding=padding,
max_length=max_target_length,
truncation=True,
**kwargs,
)
for k, v in decoder_inputs.items():
model_inputs[f"decoder_{k}"] = v
return model_inputs # still a BatchEncoding
```
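For illustration, a hypothetical call site might look like this (the texts are placeholders, not part of the proposal):
```python
batch = tokenizer.prepare_seq2seq_batch(
    src_texts=["A source sentence to translate."],
    tgt_texts=["A reference translation."],
    max_length=128,
    return_tensors="pt",
)
# `batch` would hold input_ids/attention_mask for the encoder plus
# decoder_input_ids/decoder_attention_mask for the decoder, per the loop above.
```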
| 07-28-2020 00:09:49 | 07-28-2020 00:09:49 | Motivation:
less important: only call tokenizer once
more important: the best time to think about special tokens is when you are writing the tokenizer, not when you are designing your training loop. We've had a lot of bugs recently involving special tokens in seq2seq training loops that likely could have been avoided if when the tokenizer was added the author had written a method that said: "This is how you make a batch".<|||||>This will be actually really helpful. Many people run into bugs by forgetting to add special tokens, or shift labels to right. Should this also prepare `labels` along with `decoder_input_ids` as needed by seq2seq models ? <|||||>This sounds useful, and looks like it could then directly be used as a `data_collator` in `Trainer`.<|||||>yes, optionally making `labels` would be very useful!
<|||||>What do you think @thomwolf ?<|||||>I am not really in favor of adding this code...I think the user should have understood the library (tokenizers) well enough to not need such a helper function. Think we could add it in a notebook / example and point to it if people ask, but IMO the tokenizers already have too many functions exposed to the user...<|||||>> I think the user should have understood the library (tokenizers) well enough to not need such a helper function.
1) I got this wrong the first time for marian and the only way to support finetuning multilingual models was to add `prepare_translation_batch`.
2) Suraj just fixed a bug (that was also my fault, but nobody noticed) about adding `decoder_start_token_id` to the beginning of `t5.decoder_input_ids`. Both of us know the library pretty well and had trouble getting this right.
3) The tokenizers are already setup to create model-ready batches for GLUE tasks (an awesome feature IMO). Why shouldn't they create model-ready batches for seq2seq tasks?
<|||||>@patrickvonplaten said that he is ok with this change if @mfuntowicz and @n1t0 are ok with it.
What do you guys think?
@patil-suraj we can move forward on your PR once we get some consensus that this is an OK direction to go in.<|||||>Im feeling pretty strongly about this as I just messed up decoder special tokens for pegasus finetuning in a very avoidable way :(; the tokenizers should handle special tokens, not some random barely tested examples/processor* code.<|||||>After thinking a bit more about this and talking to @sshleifer, I am fine with the PR. I agree now that there are a lot of use cases when `prepare_seq2seq_batch` is used and since it's going to be added to each model specifically, it's clean as well. Also since it will replace `prepare_translation_batch`, it does not really increase function exposure.
Two things that I'm still a bit uncertain about are:
a) It does add a "layer" on top of the `__call__` function. But I guess we do the same with `generate()` on top of `forward()`
b) will we have to add this functionality in Rust then as well? Will it be difficult to add? @n1t0
Overall I'm in favor of this design as well though now<|||||>Moving forward, as discussed with @LysandreJik .<|||||>The problem here is that the inputs are retokenized every time right, instead of pre-tokenizing and fixing padding to fit the max size of the batch as with [DataCollatorForSeq2Seq](https://huggingface.co/docs/transformers/main_classes/data_collator#transformers.DataCollatorForSeq2Seq)
Not a huge deal I guess. <|||||>I feel like Transformers would maybe benefit from having a better advertised, "one" good way to do seq2seq that it could put in all of it's tutorials and everything. I had kind of a harder time than I would have expected to figure this out<|||||>> The problem here is that the inputs are retokenized every time right, instead of pre-tokenizing and fixing padding to fit the max size of the batch as with [DataCollatorForSeq2Seq](https://huggingface.co/docs/transformers/main_classes/data_collator#transformers.DataCollatorForSeq2Seq)
The inputs are not retokenized every time. `DataCollatorForSeq2Seq` just pads them dynamically to reduce excessive padding which can be more efficient rather than always padding to max len.
> I feel like Transformers would maybe benefit from having a better advertised, "one" good way to do seq2seq that it could put in all of it's tutorials and everything
Yes, good idea!<|||||>I meant, with `prepare_seq2seq_batch` they are tokenized every time, not with `DataCollatorForSeq2Seq` |
transformers | 6,078 | closed | model.roberta.from_pretrained() fails to change the parameters | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on the forum/Stack Overflow**:
Hi all, my problem is "how to use a pretrained 3-way sequence classification model to fine-tune on a 2-way classification task". I am using "https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_glue.py";
in the code I change
```
model_args.model_name_or_path = 'roberta-large'
model = AutoModelForSequenceClassification.from_pretrained(
model_args.model_name_or_path,
from_tf=bool(".ckpt" in model_args.model_name_or_path),
config=config,
cache_dir=model_args.cache_dir,
)
model.roberta.from_pretrained(path_to_my_3way_pretrained_model)
```
This is what I tried. Please note that I cannot use "AutoModelForSequenceClassification.from_pretrained()" to load "path_to_my_3way_pretrained_model" directly because there would be a class-count mismatch (i.e., a 3-way head does not apply to a 2-way task); but whether it is 3-way or 2-way sequence classification, the architectures share the roberta part, so I used model.roberta to load the parameters from my pretrained model.
However, I found the roberta parameters do not change, I checked as follows
```
for name, param in model.named_parameters():
if param.requires_grad and name == 'roberta.encoder.layer.16.attention.self.value.weight':
print('new:', name, param.data)
```
Any clue why it doesn't work? Or any solution for how to fine-tune a pretrained 3-way model on a 2-way task? Thanks | 07-27-2020 23:27:04 | 07-27-2020 23:27:04 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
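A side note that may explain the symptom (appended here, not part of the original thread): `from_pretrained` is a classmethod that builds and returns a *new* model, so calling it on `model.roberta` leaves the existing instance untouched. A minimal sketch of one way to reuse the 3-way checkpoint's encoder (the path variable is the author's placeholder):
```python
from transformers import RobertaModel

# load only the shared encoder from the 3-way checkpoint and swap it in;
# the 2-way classification head keeps its fresh initialization
model.roberta = RobertaModel.from_pretrained(path_to_my_3way_pretrained_model)
```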
|
transformers | 6,077 | closed | [s2s] dont document packing because it hurts performance | 07-27-2020 22:25:37 | 07-27-2020 22:25:37 | ||
transformers | 6,076 | closed | Create README.md | 07-27-2020 22:06:32 | 07-27-2020 22:06:32 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6076?src=pr&el=h1) Report
> Merging [#6076](https://codecov.io/gh/huggingface/transformers/pull/6076?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9d0d3a6645384e236c55d311f3f8b7dd67d58562&el=desc) will **decrease** coverage by `1.21%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6076?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6076 +/- ##
==========================================
- Coverage 78.59% 77.37% -1.22%
==========================================
Files 146 146
Lines 26314 26314
==========================================
- Hits 20681 20361 -320
- Misses 5633 5953 +320
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6076?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6076/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6076/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.96% <0.00%> (-1.26%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6076/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.20% <0.00%> (-0.29%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6076/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6076?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6076?src=pr&el=footer). Last update [9d0d3a6...45cc428](https://codecov.io/gh/huggingface/transformers/pull/6076?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 6,075 | closed | Create README.md | 07-27-2020 22:03:43 | 07-27-2020 22:03:43 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6075?src=pr&el=h1) Report
> Merging [#6075](https://codecov.io/gh/huggingface/transformers/pull/6075?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9d0d3a6645384e236c55d311f3f8b7dd67d58562&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6075?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6075 +/- ##
=======================================
Coverage 78.59% 78.59%
=======================================
Files 146 146
Lines 26314 26314
=======================================
Hits 20681 20681
Misses 5633 5633
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6075?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6075/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.20% <0.00%> (-0.29%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6075/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.25%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6075?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6075?src=pr&el=footer). Last update [9d0d3a6...de221b0](https://codecov.io/gh/huggingface/transformers/pull/6075?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>What's the difference between this and `_v2`?<|||||>v2 includes ingredients too. V1 just includes instructions (the dataset for this version was provided by https://twitter.com/alexcg/status/1286464335867346946?s=19 But then I found a version with ingredients and instructions and created v2.<|||||>👍 |
|
transformers | 6,074 | closed | Cannot use the RobertaForMultipleChoice model for processing multiple choice questions with 4 options | Hello,
When I try to use the `RobertaForMultipleChoice` pretrained model with the code below, it generates an error:
```python
from transformers import RobertaTokenizer, RobertaForMultipleChoice
import torch
tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
model = RobertaForMultipleChoice.from_pretrained('roberta-base')
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
choice2 = "It is eathen with a napkin."
choice3 = "It is not eatable."
labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1
encoding = tokenizer([[prompt, prompt,prompt,prompt], [choice0, choice1, choice2, choice3]], return_tensors='pt', return_token_type_ids=True,padding=True)
```
The error message is:
```python
File "/Users/hyunjindominiquecho/opt/anaconda3/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 534, in _batch_encode_plus
ids, pair_ids = ids_or_pair_ids
ValueError: too many values to unpack (expected 2)
```
What am I doing wrong here? Thank you.
| 07-27-2020 22:02:32 | 07-27-2020 22:02:32 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Good question. `xxxForMultipleChoice` models are actually a bit tricky. The way you should provide the data to the tokenizer is as follows:
```
prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
choice0 = "It is eaten with a fork and a knife."
choice1 = "It is eaten while held in the hand."
choice2 = "It is eathen with a napkin."
choice3 = "It is not eatable."
encoded_dict = bert_tokenizer([prompt, prompt, prompt, prompt],
[choice0, choice1, choice2, choice3],
return_tensors='pt',
padding='max_length')
```
Note the difference: we provide 2 lists to the tokenizer rather than one. The first list contains the first sequence of every training example, the second list contains the second sequence. Every training example will then be encoded as `[CLS] prompt [SEP] choice x [SEP]`.
For more details, see [here](https://github.com/huggingface/transformers/issues/7701#issuecomment-707149546). |
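To complete the picture, a minimal sketch (an illustration added here, not part of the original answer) of passing the encoded choices to the model — `xxxForMultipleChoice` models expect tensors of shape `(batch_size, num_choices, seq_len)`:
```python
import torch

# each tensor in encoded_dict has shape (num_choices, seq_len); add a batch dimension
outputs = model(
    input_ids=encoded_dict["input_ids"].unsqueeze(0),
    attention_mask=encoded_dict["attention_mask"].unsqueeze(0),
    labels=torch.tensor([0]),  # index of the correct choice
)
loss, logits = outputs[:2]  # logits has shape (batch_size, num_choices)
```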
transformers | 6,073 | closed | Create README.md | 07-27-2020 22:01:00 | 07-27-2020 22:01:00 | I forgot to add the ```widget``` keyword |
|
transformers | 6,072 | closed | Error in the RobertaTokenizer? | Hello,
I think I spotted an error in the `RobertaTokenizer` (`AutoTokenizer`).
I tested the `tokenizer` function by using the code below:
```python
import torch
from torch.nn import CrossEntropyLoss
from matplotlib import pyplot as plt
from transformers import RobertaTokenizer, RobertaForMultipleChoice, AdamW, get_constant_schedule
from transformers import AutoTokenizer
import numpy as np
import pandas as pd
import pickle
import dill
from matplotlib.pyplot import plot, savefig, xlim, figure, ylim, legend, boxplot, setp, axes, xlabel, ylabel, xticks
import gc
import math
import time
from random import seed
from random import randint
import sys
import statistics
from numpy import nan
import scipy.stats as ss
from statistics import mode
from pylab import *
from numpy import arange
# import the pre-trained HuggingFace RobertaTokenizer
tokenizer = AutoTokenizer.from_pretrained('roberta-base')
# get the pre-trained HuggingFace RobertaForMultipleChoice and resize the token embeddings after adding the special token
best_model_roberta = RobertaForMultipleChoice.from_pretrained('roberta-base', output_hidden_states = True)
sequence_a = 'china is a very large country .'
sequence_b = 'indeed'
tokenizer(sequence_a, sequence_b, padding = True, return_token_type_ids=True, return_tensors="pt")
```
and below is the corresponding output:
```
{
'input_ids': tensor([[ 0, 611, 1243, 16, 10, 182, 739, 247, 479, 2, 2, 2028, 10247, 2]]),
'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]),
'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])
}
```
The output above is not what I was expecting to get from the `tokenizer` function. Since I am inputting `sequence_a` along with `sequence_b`, I was expecting my `token_type_ids` to be: `tensor([[0,0,0,0,0,0,0,0,0,0,0,1,1,1]])`, but the output does not include any 1's...everything is 0. Perhaps this is the error within the `RobertaTokenizer`?
Thank you, | 07-27-2020 21:31:29 | 07-27-2020 21:31:29 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
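For context (a note appended here, not an original comment): RoBERTa was pretrained without segment embeddings, so its tokenizer intentionally returns all zeros for `token_type_ids`; this is by design rather than a bug, and it can be checked from the config:
```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained('roberta-base')
print(config.type_vocab_size)  # 1 -> RoBERTa only has a single segment type
```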
|
transformers | 6,071 | closed | Loading and running on CPU, the RoBERTa model traced/saved on GPU. | # ❓ Questions & Help
## Details
I have installed transformers from source and I am trying to trace/save the roberta-base model for sequence classification on GPU and load it on CPU, similar to issue [#5664](https://github.com/huggingface/transformers/issues/5664) that was addressed by [PR #5773](https://github.com/huggingface/transformers/pull/5773), and I am facing similar issues with position_embeddings. I was wondering if any step is missing, or whether RoBERTa is also covered by [PR #5773](https://github.com/huggingface/transformers/pull/5773). Thanks.
## Sample Script
```python
import transformers
from pathlib import Path
import os
import json
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer, AutoModelForQuestionAnswering,
AutoModelForTokenClassification, AutoConfig)
print('Transformers version',transformers.__version__)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
dummy_input = "This is a dummy input for torch jit trace"
max_length=20
config = AutoConfig.from_pretrained('roberta-base',num_labels=2,torchscript=True)
model = AutoModelForSequenceClassification.from_pretrained('roberta-base', config=config)
tokenizer = AutoTokenizer.from_pretrained('roberta-base',do_lower_case=True)
inputs = tokenizer.encode_plus(dummy_input,max_length = max_length,pad_to_max_length = True,truncation=True, add_special_tokens = True, return_tensors = 'pt')
print(inputs.keys())
input_ids = inputs["input_ids"].to(device)
attention_mask = inputs["attention_mask"].to(device)
model.to(device).eval()
traced_model = torch.jit.trace(model, (input_ids,attention_mask))
torch.jit.save(traced_model, "Roberta_cuda.pt")
print(traced_model.graph)
print("\n")
print("Load model onto CPU")
loaded = torch.jit.load("Roberta_cuda.pt", map_location=torch.device("cpu"))
print("\n")
print(loaded.graph)
outputs = loaded(input_ids.to("cpu"),attention_mask.to("cpu"))
print(outputs)
```
## Error
```
Traceback (most recent call last):
File "test.py", line 100, in <module>
outputs = loaded(input_ids.to("cpu"),attention_mask.to("cpu"))
File "/home/ubuntu/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
File "code/__torch__/transformers/modeling_roberta.py", line 10, in forward
attention_mask: Tensor) -> Tuple[Tensor]:
_0 = self.classifier
_1 = (self.roberta).forward(input_ids, attention_mask, )
~~~~~~~~~~~~~~~~~~~~~ <--- HERE
return ((_0).forward(_1, ),)
class RobertaModel(Module):
File "code/__torch__/transformers/modeling_roberta.py", line 32, in forward
_9 = torch.to(extended_attention_mask, 6, False, False, None)
attention_mask0 = torch.mul(torch.rsub(_9, 1., 1), CONSTANTS.c0)
_10 = (_3).forward((_4).forward(input_ids, input, ), attention_mask0, )
~~~~~~~~~~~ <--- HERE
_11 = (_2).forward(_10, )
return _10
File "code/__torch__/transformers/modeling_roberta.py", line 59, in forward
input0 = torch.to(_19, dtype=4, layout=0, device=torch.device("cuda:0"), pin_memory=False, non_blocking=False, copy=False, memory_format=None)
_20 = (_16).forward(input_ids, )
_21 = (_15).forward(input0, )
~~~~~~~~~~~~ <--- HERE
_22 = (_14).forward(input, )
input1 = torch.add(torch.add(_20, _21, alpha=1), _22, alpha=1)
File "code/__torch__/torch/nn/modules/sparse/___torch_mangle_0.py", line 7, in forward
def forward(self: __torch__.torch.nn.modules.sparse.___torch_mangle_0.Embedding,
input: Tensor) -> Tensor:
position_embeddings = torch.embedding(self.weight, input, 1, False, False)
~~~~~~~~~~~~~~~ <--- HERE
return position_embeddings
Traceback of TorchScript, original code (most recent call last):
/home/ubuntu/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py(1724): embedding
/home/ubuntu/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/sparse.py(114): forward
/home/ubuntu/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/module.py(534): _slow_forward
/home/ubuntu/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/module.py(548): __call__
/home/ubuntu/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/transformers/modeling_bert.py(202): forward
/home/ubuntu/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/transformers/modeling_roberta.py(76): forward
/home/ubuntu/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/module.py(534): _slow_forward
/home/ubuntu/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/module.py(548): __call__
/home/ubuntu/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/transformers/modeling_bert.py(792): forward
/home/ubuntu/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/module.py(534): _slow_forward
/home/ubuntu/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/module.py(548): __call__
/home/ubuntu/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/transformers/modeling_roberta.py(344): forward
/home/ubuntu/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/module.py(534): _slow_forward
/home/ubuntu/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/module.py(548): __call__
/home/ubuntu/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/jit/__init__.py(1027): trace_module
/home/ubuntu/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/jit/__init__.py(875): trace
test.py(91): <module>
RuntimeError: Expected object of device type cuda but got device type cpu for argument #1 'self' in call to _th_index_select
```
| 07-27-2020 21:17:16 | 07-27-2020 21:17:16 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>@mfuntowicz I was wondering if there is any update/ work around or a PR in progress to solve this issue. Thanks. |
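A workaround that is often suggested for this class of problem (a sketch appended here, assuming the traced model is meant to run on CPU): trace on the device you plan to deploy on, so that no CUDA-specific constants get baked into the graph.
```python
# trace on CPU for CPU inference; variable names follow the sample script above
model_cpu = model.to("cpu").eval()
traced = torch.jit.trace(model_cpu, (input_ids.to("cpu"), attention_mask.to("cpu")))
torch.jit.save(traced, "Roberta_cpu.pt")

loaded = torch.jit.load("Roberta_cpu.pt")
outputs = loaded(input_ids.to("cpu"), attention_mask.to("cpu"))
```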
transformers | 6,070 | closed | lightning_base: new clarg: lr_scheduler=polynomial_decay | lr_scheduler should be a string with many options, including polynomial_decay, like this [command](https://github.com/pytorch/fairseq/blob/master/examples/mbart/README.md#finetune-on-en-ro) | 07-27-2020 20:49:36 | 07-27-2020 20:49:36 | @stas00 as discussed on slack<|||||>got it<|||||>PR for stage 1 to support the existing schedulers: https://github.com/huggingface/transformers/pull/6232
once this is happy and merged, I will work on importing new ones.<|||||>poly: https://github.com/huggingface/transformers/pull/6361<|||||>@sshleifer, so now that get_polynomial_decay_schedule_with_warmup is done (about to be merged) - do we need any others?<|||||>Not that I know of. We can close this once the associated PR is merged.
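For reference, once the scheduler landed, usage looks roughly like this (a sketch; `model` is a placeholder and the step counts are arbitrary):
```python
from transformers import AdamW, get_polynomial_decay_schedule_with_warmup

optimizer = AdamW(model.parameters(), lr=3e-5)
scheduler = get_polynomial_decay_schedule_with_warmup(
    optimizer, num_warmup_steps=500, num_training_steps=50_000, power=1.0
)
```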
transformers | 6,069 | closed | lightning_base: new clarg: adam_betas | should be able to pass `adam_betas` through command line like this [command](https://github.com/pytorch/fairseq/blob/master/examples/mbart/README.md#finetune-on-en-ro)
| 07-27-2020 20:47:41 | 07-27-2020 20:47:41 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 6,068 | closed | link to README.md | 07-27-2020 20:39:24 | 07-27-2020 20:39:24 | ||
transformers | 6,067 | closed | fix typo | wrong filename passed to `tar` | 07-27-2020 20:34:35 | 07-27-2020 20:34:35 | Thanks for catching the typo! Alas, I think this one has already been fixed by recent commits. |
transformers | 6,066 | closed | Add fire to setup.cfg to make isort happy | 07-27-2020 18:14:12 | 07-27-2020 18:14:12 | ||
transformers | 6,065 | closed | Make all data collators accept dict | + one file changed by make style for some reason.
I'm only keeping the inputs, for more evolved data collation, I think it's best to let the user write their own subclass. | 07-27-2020 17:13:03 | 07-27-2020 17:13:03 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6065?src=pr&el=h1) Report
> Merging [#6065](https://codecov.io/gh/huggingface/transformers/pull/6065?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/11792d7826854979bb532b6da09bc3796b09ea6a&el=desc) will **decrease** coverage by `0.54%`.
> The diff coverage is `71.42%`.
[](https://codecov.io/gh/huggingface/transformers/pull/6065?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6065 +/- ##
==========================================
- Coverage 78.73% 78.19% -0.55%
==========================================
Files 146 146
Lines 26314 26318 +4
==========================================
- Hits 20719 20579 -140
- Misses 5595 5739 +144
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6065?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6065/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `97.39% <71.42%> (-1.71%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6065/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.27% <0.00%> (-74.92%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6065/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6065/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.96% <0.00%> (-1.30%)` | :arrow_down: |
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6065/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.82% <0.00%> (-0.29%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6065/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.48% <0.00%> (ø)` | |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6065/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.96% <0.00%> (+2.00%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6065/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6065/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6065?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6065?src=pr&el=footer). Last update [11792d7...f8cc060](https://codecov.io/gh/huggingface/transformers/pull/6065?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>LGTM! Agree on the user subclass, especially for language modeling which should be mostly always the same. |
transformers | 6,064 | closed | [Performance improvement] "Bad tokens ids" optimization | Running some benchmarks I noticed that the generation pipeline was varying quite a bit in terms of execution time. Especially the banned token masking seems to be fairly expensive (I ran some experiments where up to 30% of the time for an entire generation process was spent in this step - which seems too high considering its expected simplicity).
This PR accelerates the entire generation pipeline for models using a `bad_words_ids` in their configuration by around 20% on a GPU-enabled node (this includes for example translation using the Marian models).
The following changes contribute to the performance improvement:
- Single conversion from tensor to list. The previous approach accessed the GPU buffer for every banned token and every batch element, making this operation slower than the entire forward pass through the model
- Vectorized update of the banned tokens using a masked fill
- Skipping the EOS token for the banned tokens (avoiding a potential duplicate masking) | 07-27-2020 16:17:39 | 07-27-2020 16:17:39 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6064?src=pr&el=h1) Report
> Merging [#6064](https://codecov.io/gh/huggingface/transformers/pull/6064?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8a8ae27617e3c4dafb34bcbbaadf4ceee28583bd&el=desc) will **decrease** coverage by `0.10%`.
> The diff coverage is `27.77%`.
[](https://codecov.io/gh/huggingface/transformers/pull/6064?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6064 +/- ##
==========================================
- Coverage 78.49% 78.38% -0.11%
==========================================
Files 146 147 +1
Lines 26335 26384 +49
==========================================
+ Hits 20671 20681 +10
- Misses 5664 5703 +39
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6064?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/data/test\_generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6064/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Rlc3RfZ2VuZXJhdGlvbl91dGlscy5weQ==) | `0.00% <0.00%> (ø)` | |
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6064/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.64% <93.75%> (-0.19%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6064?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6064?src=pr&el=footer). Last update [8a8ae27...7d9767a](https://codecov.io/gh/huggingface/transformers/pull/6064?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>@sshleifer Thank you very much for the review. I have added unit tests for the modified method that hopefully aligns with what you had in mind. I have re-run the Marian integration tests that run without issue. I somehow have issues running the BART integration tests (even on master) due to an `ImportError` and unable to see if these still run:
```python
from .test_configuration_common import ConfigTester
ImportError: attempted relative import with no known parent package
```
regarding point 2) could you please clarify? The `calc_banned_bad_words_ids` still exists (and is used) in the proposed PR. Would you recommend making a copy of it instead of changing its behaviour? Then the original `calc_banned_bad_words_ids` would no longer be used anywhere
@JetRunner I have added a comment to clarify the mask tensor generation
I am currently running into issues with Tensorflow test failing - but I do not see how it relates to the proposed changes
Thank you!<|||||>I misread your code, sorry.
My point 2 should be that it feels like the new masking logic could be put into a helper method like
```python
def set_scores_to_inf_for_banned_tokens(self, scores, bad_words_ids) -> None:
```
just for the sake of namespace control.
You could also test that method without running `generate`.
Also, how significant is the speedup here?<|||||>@sshleifer This makes sense, just pushed a few more changes:
- Moved the masking to a utility function
- Updated the unit test to let it fail if it hits timeout. As this is configuration dependent, the limit was increased to 10 if the CI compute power available fluctuates. In general I am not sure if unit tests are the best way to perform performance regression tests
- I have created a gist to share the performance difference between the current and the proposed approach: https://gist.github.com/guillaume-be/e335b099005e9bf38448d0e2eb02f74f . On this simple example with a GPU on Colab, the proposed approach is twice as fast. This actually has a significant impact on the entire generation process, but I did not manage to create a good example on Colab (the resources fluctuate too much from notebook to notebook, and not aware of a way to change a library version within a same notebook). Running locally with a consumer-grade Turing GPU (2070), I observe a time reduction of around 20% for the end-to-end generation process. <|||||>@sshleifer Thank you again for the thorough review! Tried to address the latest comments - I believe it cleans it up quite a bit thank you for the suggestions<|||||>This is ready to be merged @LysandreJik ! |
transformers | 6,063 | closed | [fix] no warning for position_ids buffer | Fixes [#6044](https://github.com/huggingface/transformers/issues/6044)
The failing tests check that there are no missing keys for bart, but morgan's recent change started registering a position_ids buffer in `__init__` for four models, with more expected.
Since we do not expect weights to be saved for position_ids, this PR adds the pattern `position_ids` to authorized_missing_keys for all PT models.
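Schematically, the mechanism looks like this (an illustration of the pattern, not the literal diff):
```python
class SomePreTrainedModel(PreTrainedModel):
    # regex patterns; state-dict keys matching them are not reported as missing
    authorized_missing_keys = [r"position_ids"]
```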
| 07-27-2020 16:12:32 | 07-27-2020 16:12:32 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6063?src=pr&el=h1) Report
> Merging [#6063](https://codecov.io/gh/huggingface/transformers/pull/6063?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c8bdf7f4ecd73680cb0751d9efc8fa3a992c2c2d&el=desc) will **increase** coverage by `1.19%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/6063?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6063 +/- ##
==========================================
+ Coverage 77.39% 78.58% +1.19%
==========================================
Files 146 146
Lines 26314 26314
==========================================
+ Hits 20366 20680 +314
+ Misses 5948 5634 -314
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6063?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6063/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.32% <100.00%> (ø)` | |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6063/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6063/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6063?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6063?src=pr&el=footer). Last update [c8bdf7f...5de946f](https://codecov.io/gh/huggingface/transformers/pull/6063?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>gunna merge this so that github actions catches it. |
transformers | 6,062 | closed | Add new AutoModel classes in pipeline | This PR removes the about-to-be-deprecated `AutoModelWithLMHead` class and uses the new `AutoModelForSeq2SeqLM`, `AutoModelForCausalLM` and `AutoModelForMaskedLM` for `translation`, `text-generation` and `fill-mask` pipelines respectively.
Regarding issue #6060
@sgugger | 07-27-2020 15:43:08 | 07-27-2020 15:43:08 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6062?src=pr&el=h1) Report
> Merging [#6062](https://codecov.io/gh/huggingface/transformers/pull/6062?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f7f03b22dc15543317635770f312adf4513303d0&el=desc) will **increase** coverage by `0.17%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/6062?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6062 +/- ##
==========================================
+ Coverage 78.50% 78.67% +0.17%
==========================================
Files 146 146
Lines 26251 26251
==========================================
+ Hits 20609 20654 +45
+ Misses 5642 5597 -45
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6062?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6062/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `77.00% <100.00%> (ø)` | |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6062/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6062/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.95% <0.00%> (-1.51%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6062/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.19% <0.00%> (-0.30%)` | :arrow_down: |
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6062/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.82% <0.00%> (-0.29%)` | :arrow_down: |
| [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6062/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `78.48% <0.00%> (+4.06%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6062/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.53% <0.00%> (+69.51%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6062?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6062?src=pr&el=footer). Last update [f7f03b2...6385c16](https://codecov.io/gh/huggingface/transformers/pull/6062?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,061 | closed | Pipelines should use tuples instead of namedtuples | Fix https://github.com/huggingface/transformers/issues/5713
The ONNX conversion cannot handle other objects than tuples, lists and variables. Since calling the conversion script via the command line uses a pipeline, and pipelines cannot be configured with specific model kwargs, the change is so that pipelines manage tuples instead of namedtuples (not a breaking change). | 07-27-2020 15:39:34 | 07-27-2020 15:39:34 | |
transformers | 6,060 | closed | new AutoModel classes in pipeline | The `translation`, `text-generation` and `fill-mask` pipelines still use the `AutoModelWithLMHead` class but from the warning it seems that it's going to be deprecated. Is there any specific reason for this or can we use the new `AutoModelForSeq2SeqLM` for translation, `AutoModelCausalLM` for text-generation and `AutoModelForMaskedLM` for fill-mask pipeline ? If yes then I'll be happy to open a PR.
@sgugger | 07-27-2020 14:42:22 | 07-27-2020 14:42:22 | |
transformers | 6,059 | closed | Errors while using TFAutoModelForMultipleChoice and TFTrainer on winogrande dataset | # ❓ Questions & Help
I'm using TFAutoModelForMultipleChoice and TFTrainer to train on winogrande dataset.
The dataset is from [here](https://leaderboard.allenai.org/winogrande/submissions/get-started)
And I got an error when I started training.
```python
Traceback (most recent call last):
File "run_wsc.py", line 233, in <module>
main()
File "run_wsc.py", line 208, in main
trainer.train()
File "/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/transformers/trainer_tf.py", line 358, in train
for step, training_loss in enumerate(self._training_steps(train_ds, optimizer)):
File "/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/transformers/trainer_tf.py", line 401, in _training_steps
for i, loss in enumerate(self._accumulate_next_gradients(ds)):
File "/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/transformers/trainer_tf.py", line 434, in _accumulate_next_gradients
yield _accumulate_next()
File "/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py", line 568, in __call__
result = self._call(*args, **kwds)
File "/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py", line 615, in _call
self._initialize(args, kwds, add_initializers_to=initializers)
File "/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py", line 497, in _initialize
*args, **kwds))
File "/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py", line 2389, in _get_concrete_function_internal_garbage_collected
graph_function, _, _ = self._maybe_define_function(args, kwargs)
File "/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py", line 2703, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py", line 2593, in _create_graph_function
capture_by_value=self._capture_by_value),
File "/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/tensorflow_core/python/framework/func_graph.py", line 978, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py", line 439, in wrapped_fn
return weak_wrapped_fn().__wrapped__(*args, **kwds)
File "/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/tensorflow_core/python/framework/func_graph.py", line 968, in wrapper
raise e.ag_error_metadata.to_exception(e)
TypeError: in converted code:
/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/transformers/trainer_tf.py:430 _accumulate_next *
return self._accumulate_gradients(per_replica_features, per_replica_labels)
/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/transformers/trainer_tf.py:440 _accumulate_gradients *
per_replica_loss = self.args.strategy.experimental_run_v2(
/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/tensorflow_core/python/distribute/one_device_strategy.py:180 experimental_run_v2
return super(OneDeviceStrategy, self).experimental_run_v2(fn, args, kwargs)
/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/transformers/trainer_tf.py:453 _forward *
per_example_loss, _ = self._run_model(features, labels, True)
/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/transformers/trainer_tf.py:474 _run_model *
loss, logits = self.model(features, labels=labels, training=training)[:2]
/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py:778 __call__
outputs = call_fn(cast_inputs, *args, **kwargs)
/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/transformers/modeling_tf_bert.py:1114 call *
loss = self.compute_loss(labels, reshaped_logits)
/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/transformers/modeling_tf_utils.py:134 compute_loss *
if shape_list(logits)[1] == 1:
/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/tensorflow_core/python/autograph/operators/control_flow.py:918 if_stmt
basic_symbol_names, composite_symbol_names)
/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/tensorflow_core/python/autograph/operators/control_flow.py:956 tf_if_stmt
error_checking_orelse)
/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/tensorflow_core/python/util/deprecation.py:507 new_func
return func(*args, **kwargs)
/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/tensorflow_core/python/ops/control_flow_ops.py:1174 cond
return cond_v2.cond_v2(pred, true_fn, false_fn, name)
/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/tensorflow_core/python/ops/cond_v2.py:83 cond_v2
op_return_value=pred)
/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/tensorflow_core/python/framework/func_graph.py:983 func_graph_from_py_func
expand_composites=True)
/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/tensorflow_core/python/util/nest.py:568 map_structure
structure[0], [func(*x) for x in entries],
/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/tensorflow_core/python/util/nest.py:568 <listcomp>
structure[0], [func(*x) for x in entries],
/Users/qixinli/anaconda/envs/a2/lib/python3.7/site-packages/tensorflow_core/python/framework/func_graph.py:943 convert
(str(python_func), type(x)))
TypeError: To be compatible with tf.contrib.eager.defun, Python functions must return zero or more Tensors; in compilation of <function tf_if_stmt.<locals>.error_checking_body at 0x158e993b0>, found return value of type <class 'tensorflow.python.keras.losses.MeanSquaredError'>, which is not a Tensor.
```
## Details
```python
import logging
import os
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, Optional
import numpy as np
import tensorflow as tf
from transformers import (
AutoConfig,
AutoTokenizer,
EvalPrediction,
HfArgumentParser,
PreTrainedTokenizer,
TFTrainer,
TFTrainingArguments,
TFAutoModelForMultipleChoice
)
from wsc_processor import (
WinograndeProcessor,
convert_multiple_choice_examples_to_features,
compute_metrics
)
class Split(Enum):
train = "train"
dev = "validation"
test = "test"
def get_wsc_ds(
tokenizer: PreTrainedTokenizer, max_seq_length: Optional[int] = None, mode: Split = Split.train, data_dir:str=None
):
processor = WinograndeProcessor()
if mode == Split.train:
examples = processor.get_train_examples(data_dir)
elif mode == Split.dev:
examples = processor.get_dev_examples(data_dir)
else:
examples = processor.get_test_examples(data_dir)
features = convert_multiple_choice_examples_to_features(examples, label_list=processor.get_labels(), tokenizer=tokenizer, max_seq_length=max_seq_length, output_mode="multiple_choice")
def gen():
for ex in features:
inputs = []
masks = []
segments = []
for feature in ex.option_features:
inputs.append(feature["input_ids"])
masks.append(feature["input_mask"])
segments.append(feature["segment_ids"])
yield (
{
"input_ids": inputs,
"attention_mask": masks,
"token_type_ids": segments,
},
ex.label,
)
ds = tf.data.Dataset.from_generator(
gen,
({"input_ids": tf.int32, "attention_mask": tf.int32, "token_type_ids": tf.int32}, tf.int64),
(
{
"input_ids": tf.TensorShape([None,None]),
"attention_mask": tf.TensorShape([None,None]),
"token_type_ids": tf.TensorShape([None,None]),
},
tf.TensorShape([]),
),
)
return ds
logger = logging.getLogger(__name__)
@dataclass
class GlueDataTrainingArguments:
task_name: str = field(default="wsc")
data_dir: str = field(default="./data/WSC")
max_seq_length: int = field(
default=128,
metadata={
"help": "The maximum total input sequence length after tokenization. Sequences longer "
"than this will be truncated, sequences shorter will be padded."
},
)
overwrite_cache: bool = field(
default=False, metadata={"help": "Overwrite the cached training and evaluation sets"}
)
@dataclass
class ModelArguments:
model_name_or_path: str = field(
default="roberta-base",
metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"}
)
config_name: Optional[str] = field(
default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"}
)
tokenizer_name: Optional[str] = field(
default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"}
)
use_fast: bool = field(default=False, metadata={"help": "Set this flag to use fast tokenization."})
cache_dir: Optional[str] = field(
default="./model/", metadata={"help": "Where do you want to store the pretrained models downloaded from s3"}
)
def main():
parser = HfArgumentParser((ModelArguments, GlueDataTrainingArguments, TFTrainingArguments))
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
if (
os.path.exists(training_args.output_dir)
and os.listdir(training_args.output_dir)
and training_args.do_train
and not training_args.overwrite_output_dir
):
raise ValueError(
f"Output directory ({training_args.output_dir}) already exists and is not empty. Use --overwrite_output_dir to overcome."
)
# Setup logging
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
level=logging.INFO,
)
logger.info("Training/evaluation parameters %s", training_args)
output_mode = "classification"
config = AutoConfig.from_pretrained(
model_args.config_name if model_args.config_name else model_args.model_name_or_path,
# num_labels=1,
finetuning_task=data_args.task_name,
cache_dir=model_args.cache_dir
)
tokenizer = AutoTokenizer.from_pretrained(
model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,
cache_dir=model_args.cache_dir
)
# with training_args.strategy.scope():
model = TFAutoModelForMultipleChoice.from_pretrained(
model_args.model_name_or_path,
from_pt=bool(".bin" in model_args.model_name_or_path),
config=config,
cache_dir=model_args.cache_dir,
)
# Get datasets
train_dataset = (
get_wsc_ds(tokenizer=tokenizer, max_seq_length=data_args.max_seq_length,data_dir=data_args.data_dir)
if training_args.do_train
else None
)
eval_dataset = (
get_wsc_ds(tokenizer=tokenizer, max_seq_length=data_args.max_seq_length,mode=Split.dev, data_dir=data_args.data_dir)
if training_args.do_eval
else None
)
def metric(p: EvalPrediction) -> Dict:
if output_mode == "classification":
preds = np.argmax(p.predictions, axis=1)
else:
preds = np.squeeze(p.predictions)
return compute_metrics("winogrande", preds, p.label_ids)
# Initialize our Trainer
trainer = TFTrainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
compute_metrics=metric,
)
# Training
if training_args.do_train:
trainer.train()
trainer.save_model()
tokenizer.save_pretrained(training_args.output_dir)
return results
if __name__ == "__main__":
main()
```
the wsc_processor is from [here](https://github.com/allenai/winogrande/blob/master/scripts/utils.py)
I wonder what is wrong in the code (am I wrong in the get_wsc_ds() method?), and how to fix it?
| 07-27-2020 14:32:24 | 07-27-2020 14:32:24 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 6,058 | closed | Create README.md | 07-27-2020 13:42:15 | 07-27-2020 13:42:15 | ||
transformers | 6,057 | closed | Transformer layers + Functional API fails but subclassing succeeds | # 🐛 Bug
## Information
Model I am using (Bert):
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
**successful**
```python
# define model via subclassing
class MyModel(tf.keras.Model):
def __init__(self, item_dim, num_layers, num_heads, max_len, **kwargs):
super(MyModel, self).__init__(**kwargs)
self.item_dim = item_dim
self.num_layers = num_layers
self.num_heads = num_heads
self.max_len = max_len
self.config = BertConfig(hidden_size=item_dim, num_hidden_layers=num_layers,
num_attention_heads=num_heads, intermediate_size=item_dim*4,
max_position_embeddings=max_len)
self.bert = TFBertMainLayer(config=self.config)
self.dense = Dense(1, activation='sigmoid')
def call(self, inputs):
seq_emb = self.bert(inputs=None, inputs_embeds=inputs)[0]
last_token_emb = seq_emb[:, -1, :]
outputs = self.dense(last_token_emb)
return outputs
```
**failed**
```python
def build_model(item_dim, num_layers, num_heads, max_len):
config = BertConfig(hidden_size=item_dim, num_hidden_layers=num_layers,
num_attention_heads=num_heads, intermediate_size=item_dim*4,
max_position_embeddings=max_len)
bert = TFBertMainLayer(config=config)
inputs = Input(shape=(max_len, item_dim), dtype=tf.float32, name='inputs')
# pre-training vectors to bert
seq_emb = bert(inputs=None, inputs_embeds=inputs)[0]
last_token_emb = seq_emb[:, -1, :]
outputs = Dense(1, activation='sigmoid')(last_token_emb)
model = Model(inputs=inputs, outputs=outputs)
return model
```
Errors with
`ValueError: It appears you are trying to construct a functional model, but not all of the inputs in the first positional argument of your layer call are symbolic tensors. (Input objects, or the output of another layer) Functional models cannot correctly track custom layers unless all values in the first call argument are symbolic.`
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
* Run the code successfully when using transformer layers and the functional API.
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
https://colab.research.google.com/gist/Douboo/80dfa91917c176c530ef64be244a99a6/untitled2.ipynb
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.1.0
- Platform: google colab
- Python version: 3.7
- Tensorflow version (GPU?): 2.3.0
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
| 07-27-2020 10:46:01 | 07-27-2020 10:46:01 | Don't use transformers in tensorflow when use functional API and use pre-training vector. Help!!!<|||||>Might be of interest to @jplu <|||||>> Might be of interest to @jplu
Thanks! Finally someone replied to me. I am a beginner.<|||||>Hello!
You are not creating your input properly; `inputs` cannot be `None`. This piece of code should work:
```
import tensorflow as tf
from transformers import TFBertMainLayer, BertConfig
config = BertConfig.from_pretrained("bert-base-cased")
bert = TFBertMainLayer(config)
inputs = tf.keras.layers.Input(shape=(None,), name='input_embeds', dtype='int32')
seq_emb = bert(inputs)[0]
last_token_emb = seq_emb[:, -1, :]
outputs = tf.keras.layers.Dense(1, activation='sigmoid')(last_token_emb)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
```<|||||>Ok after a deeper investigation, this partially solves the issue; there might be a bug in the way the model handles the Keras symbolic tensors. I need to do more tests to be sure, but we certainly need to review this on our side.<|||||>Ok I found a workaround for you:
```
import tensorflow as tf
from transformers import TFBertMainLayer, BertConfig
config = BertConfig.from_pretrained("bert-base-cased")
bert = TFBertMainLayer(config)
inputs_embeds = tf.keras.layers.Input(shape=(None,768), name='inputs_embeds', dtype='float32')
inputs = {"input_embeds": inputs_embeds}
seq_emb = bert(inputs)[0]
last_token_emb = seq_emb[:, -1, :]
outputs = tf.keras.layers.Dense(1, activation='sigmoid')(last_token_emb)
model = tf.keras.Model(inputs=inputs, outputs=[outputs])
```<|||||>> Ok I found a workaround for you:
>
> ```
> import tensorflow as tf
> from transformers import TFBertMainLayer, BertConfig
>
> config = BertConfig.from_pretrained("bert-base-cased")
> bert = TFBertMainLayer(config)
> inputs_embeds = tf.keras.layers.Input(shape=(None,768), name='inputs_embeds', dtype='float32')
> inputs = {"input_embeds": inputs_embeds}
> seq_emb = bert(inputs)[0]
> last_token_emb = seq_emb[:, -1, :]
> outputs = tf.keras.layers.Dense(1, activation='sigmoid')(last_token_emb)
> model = tf.keras.Model(inputs=inputs, outputs=[outputs])
> ```
Thank you very much! This method successfully solved my problem. Transformers is a very awesome project. Thank you again! @jplu @LysandreJik |
transformers | 6,056 | closed | Empty assert hunt | Empty asserts are bad for debugging. I tried to remove them all and to add helpful Pytorch-style messages with the shapes of the corresponding objects when they were mismatched-lengths checks + the file paths when they were file-not-found checks. | 07-27-2020 09:53:40 | 07-27-2020 09:53:40 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6056?src=pr&el=h1) Report
> Merging [#6056](https://codecov.io/gh/huggingface/transformers/pull/6056?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/12f14710cea72a2104ff698762a8fc68a5dc0a0b&el=desc) will **decrease** coverage by `0.35%`.
> The diff coverage is `22.91%`.
[](https://codecov.io/gh/huggingface/transformers/pull/6056?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6056 +/- ##
==========================================
- Coverage 78.82% 78.47% -0.36%
==========================================
Files 146 146
Lines 26200 26204 +4
==========================================
- Hits 20653 20564 -89
- Misses 5547 5640 +93
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6056?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/commands/train.py](https://codecov.io/gh/huggingface/transformers/pull/6056/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy90cmFpbi5weQ==) | `0.00% <ø> (ø)` | |
| [src/transformers/data/metrics/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/6056/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL21ldHJpY3MvX19pbml0X18ucHk=) | `26.66% <0.00%> (ø)` | |
| [src/transformers/data/metrics/squad\_metrics.py](https://codecov.io/gh/huggingface/transformers/pull/6056/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL21ldHJpY3Mvc3F1YWRfbWV0cmljcy5weQ==) | `0.00% <0.00%> (ø)` | |
| [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/6056/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `27.63% <0.00%> (ø)` | |
| [src/transformers/data/processors/xnli.py](https://codecov.io/gh/huggingface/transformers/pull/6056/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMveG5saS5weQ==) | `27.08% <0.00%> (-2.47%)` | :arrow_down: |
| [src/transformers/modeling\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6056/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `82.04% <0.00%> (ø)` | |
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6056/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.39% <0.00%> (ø)` | |
| [src/transformers/modeling\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6056/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `81.55% <0.00%> (ø)` | |
| [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6056/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `85.88% <0.00%> (ø)` | |
| [src/transformers/modeling\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6056/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `89.45% <0.00%> (ø)` | |
| ... and [17 more](https://codecov.io/gh/huggingface/transformers/pull/6056/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6056?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6056?src=pr&el=footer). Last update [12f1471...09c7880](https://codecov.io/gh/huggingface/transformers/pull/6056?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>I'm in love 😍 😍. Thanks @TevenLeScao !<|||||>@LysandreJik can I merge this? |
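To make the change concrete, a hedged before/after sketch of the kind of assert this PR describes (the variable names are invented for illustration):
```python
# Before: an empty assert leaves no clue about what went wrong.
assert len(preds) == len(labels)

# After: a Pytorch-style message reporting the mismatched lengths.
assert len(preds) == len(labels), (
    f"Mismatched lengths: {len(preds)} predictions vs {len(labels)} labels"
)
```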
transformers | 6,055 | closed | add another e.g. to avoid confusion | 07-27-2020 09:36:27 | 07-27-2020 09:36:27 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6055?src=pr&el=h1) Report
> Merging [#6055](https://codecov.io/gh/huggingface/transformers/pull/6055?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b9b11795cfdce7bb8dd8a01ec5efa602589a78b2&el=desc) will **increase** coverage by `0.11%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6055?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6055 +/- ##
==========================================
+ Coverage 78.26% 78.37% +0.11%
==========================================
Files 146 146
Lines 26253 26253
==========================================
+ Hits 20546 20576 +30
+ Misses 5707 5677 -30
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6055?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6055/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.30% <ø> (ø)` | |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6055/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6055/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.96% <0.00%> (+2.00%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6055/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6055?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6055?src=pr&el=footer). Last update [b9b1179...7264c50](https://codecov.io/gh/huggingface/transformers/pull/6055?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 6,054 | closed | Errors while creating a subclass of BertForTokenClassification in run_ner.py file |
Question 1) I want to skip calling AutoModelForTokenClassification ( https://github.com/huggingface/transformers/blob/f7f03b22dc15543317635770f312adf4513303d0/examples/token-classification/run_ner.py#L158 ) and directly use BertForTokenClassification. When I am directly using BertForTokenClassification in run_ner.py using the same parameters which were fed to AutoModelForTokenClassification, it does not load the weights from pretrained BERT checkpoint.
Can you suggest what changes I have to make in the input parameters (most likely in the "config") such that it loads the weights from the pretrained BERT checkpoint?
Question 2) I also tried to create a subclass of BertForTokenClassification and call it within AutoModelForTokenClassification. In this case, it throws an error - AttributeError: 'BertConfig' object has no attribute 'use_return_tuple' .
Can anyone please suggest something to fix this error, which occurs when simply using a subclass of BertForTokenClassification instead of the original BertForTokenClassification class? Please note that the subclass is exactly the same as BertForTokenClassification.
Thanks.
| 07-27-2020 08:57:51 | 07-27-2020 08:57:51 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
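For reference, a hedged sketch of the usual pattern for Question 1 — call `from_pretrained` on the subclass itself so the pretrained weights are loaded into it (the label count below is an arbitrary placeholder):
```python
from transformers import BertForTokenClassification

class MyTokenClassifier(BertForTokenClassification):
    pass  # same architecture as the parent; extend forward() as needed

# from_pretrained on the subclass loads the pretrained BERT weights into it
model = MyTokenClassifier.from_pretrained("bert-base-cased", num_labels=9)
```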
|
transformers | 6,053 | closed | Errors when creating a subclass of "BertForTokenClassification" | 07-27-2020 08:54:50 | 07-27-2020 08:54:50 | ||
transformers | 6,052 | closed | add gpt2 padding for tflite | # ❓ Questions & Help
Since tflite requires a fixed input length, how can we correctly add padding for gpt2 input with a fixed length so that it won't affect the final result?
```
import tensorflow as tf
from transformers import *
gpt2_model = TFGPT2LMHeadModel.from_pretrained('distilgpt2')
class WrapModel(tf.keras.models.Model):
def __init__(self, transformer):
super(WrapModel, self).__init__()
self.transformer = transformer
def call(self, input_ids, **kwargs):
inputs = {"inputs": input_ids}
outputs = self.transformer(**inputs)
next_token_logits = outputs[0][:, -1, :]
# Greedy decoding
next_token = tf.math.argmax(next_token_logits, axis=-1, output_type=tf.int32)
return {"decoded_ids": next_token}
w = WrapModel(gpt2_model)
input_layer = tf.keras.layers.Input(shape=(1, 10), dtype=tf.int32, name='input_ids')
prediction_model = w(input_layer)
tf_model = tf.keras.models.Model(inputs=input_layer, outputs=prediction_model)
export_dir = "./gpt2_tmp/"
tf.keras.models.save_model(tf_model, export_dir)
saved_model_dir = "./gpt2_tmp/"
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.experimental_new_converter = True
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_quant_model = converter.convert()
```
I am not clear how we can use attention_mask to solve this concern based on this one. https://github.com/huggingface/transformers/issues/2630 | 07-27-2020 05:14:11 | 07-27-2020 05:14:11 | the latest TF2.3 and tflite can support dynamic input now. Close this. |
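For reference, a minimal sketch of the padding approach asked about above — pad `input_ids` to the fixed length, pass an `attention_mask` that zeroes out the padded positions, and read the logits at the last real token (the pad id and fixed length below are arbitrary assumptions):
```python
import tensorflow as tf
from transformers import GPT2Tokenizer, TFGPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("distilgpt2")
model = TFGPT2LMHeadModel.from_pretrained("distilgpt2")

MAX_LEN = 10  # fixed length expected by the exported graph
ids = tokenizer.encode("Hello world")
pad_len = MAX_LEN - len(ids)
input_ids = ids + [tokenizer.eos_token_id] * pad_len   # right-pad with an arbitrary id
attention_mask = [1] * len(ids) + [0] * pad_len        # mask out the padded positions

outputs = model(tf.constant([input_ids]), attention_mask=tf.constant([attention_mask]))
# take the logits at the last *real* token, not at the last padded position
next_token_logits = outputs[0][0, len(ids) - 1, :]
next_token = tf.math.argmax(next_token_logits, axis=-1)
```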
transformers | 6,051 | closed | examples/seq2seq: add a dataloader that supports dynamic batch size | Should follow the logic of `load_langpair_dataset` in fairseq, roughly.
Batches should be created such that they include N(default 1024) tokens of source documents, however many examples that requires. | 07-27-2020 03:29:00 | 07-27-2020 03:29:00 | |
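A minimal sketch of the batching logic described above, assuming a token budget over source lengths (the names are illustrative, not an existing API; fairseq budgets padded tokens, which this simplification ignores):
```python
def dynamic_batches(src_lengths, max_tokens=1024):
    """Yield lists of example indices whose summed source lengths stay under max_tokens."""
    order = sorted(range(len(src_lengths)), key=lambda i: src_lengths[i])
    batch, batch_tokens = [], 0
    for idx in order:
        if batch and batch_tokens + src_lengths[idx] > max_tokens:
            yield batch
            batch, batch_tokens = [], 0
        batch.append(idx)
        batch_tokens += src_lengths[idx]
    if batch:
        yield batch
```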
transformers | 6,050 | closed | CI: run tests against torch=1.6 | Through github or circleci.
If github actions:
copy `.github/self-scheduled.yml` to `.github/torch_future.yml` and modify the install steps. | 07-27-2020 03:25:40 | 07-27-2020 03:25:40 | we just did! |
transformers | 6,049 | closed | examples/seq2seq/test_bash_script.py :: actually learn something | At the moment validation bleu barely gets above zero in the tests, so they don't really prove much about our code.
we could use a larger model like sshleifer/student_marian_6_3, and more data, and train for 10 minutes. This would allow us to test whether changing default parameters/batch techniques obviously degrades performance.
The GitHub Actions CI reuses its own disk, so this will only run there and hopefully not have super slow downloads. | 07-27-2020 03:24:13 | 07-27-2020 03:24:13 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>This is still an issue :)<|||||>This command runs in 3 mins (without including downloads)
```bash
# export WANDB_PROJECT=dmar
export MAX_LEN=64
python finetune.py \
--learning_rate=3e-4 \
--do_train \
--do_predict \
--fp16 \
--val_check_interval 0.25 --n_train 100000 --n_val 500 --n_test 500 \
--data_dir wmt_en_ro \
--max_source_length $MAX_LEN --max_target_length $MAX_LEN --val_max_target_length $MAX_LEN --test_max_target_length $MAX_LEN \
--freeze_encoder --freeze_embeds \
--train_batch_size=64 --eval_batch_size=64 \
--tokenizer_name Helsinki-NLP/opus-mt-en-ro \
--model_name_or_path sshleifer/mar_enro_6_3_student \
--warmup_steps 500 --sortish_sampler \
--gpus 1 --fp16_opt_level=O1 --task translation --num_sanity_val_steps=0 --output_dir dmar_utest_1gpu --num_train_epochs=1 \
--overwrite_output_dir
```
#### Test results
```bash
cat dmar_utest_1gpu/test_results.txt
```
```
bs: 32.000000
loss: 1.122035
src_pad_frac: 0.322266
src_pad_tok: 330.000000
step_count: 5.000000
test_avg_bleu: 20.660713
test_avg_gen_len: 63.750000
test_avg_gen_time: 0.033025
test_avg_loss: 2.215564
test_bleu: 20.660713
test_loss: 2.215564
tpb: 1475.000000
val_avg_bleu: 20.099000
val_avg_gen_len: 63.125000
val_avg_gen_time: 0.031409
val_avg_loss: 2.265883
val_bleu: 20.099001
val_loss: 2.265883
```
The validation BLEU also improves over the course of training:
```
cat dmar_utest_1gpu/metrics.json | grep bleu
```
```
"val_avg_bleu": 16.3561625,
"val_avg_bleu": 19.0204625,
"val_avg_bleu": 19.704875,
"val_avg_bleu": 20.099,
"test_avg_bleu": 20.660712500000002
```
So this would be a good template for the test.
### Spec
+ convert the command into a unit-test (with programmatic download of the right amount of data, possibly via a new s3 `.tgz` file).
+ Replace existing `test_bash_script` marian test.
+ Try to cut n_train/further
+ Anything less than 15 mins is fine, but the faster the better.
+ Minimum learning requirement 1: BLEU improves over the course of training by more than 2 pts
+ Minimum learning requirement 2: BLEU finishes above 17
+ Minimum learning requirement 3: test bleu and val bleu within 1 pt.
(this command meets all 3 learning requirements).
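A hedged sketch of how the three learning requirements could be asserted in the test, assuming the run's metrics are collected into a `metrics` dict keyed by split (the structure is an assumption about the harness, not an existing API):
```python
first_val = metrics["val"][0]["val_avg_bleu"]
last_val = metrics["val"][-1]["val_avg_bleu"]
test_bleu = metrics["test"][-1]["test_avg_bleu"]

assert last_val - first_val > 2, f"BLEU barely improved: {first_val} -> {last_val}"
assert last_val > 17, f"final val BLEU too low: {last_val}"
assert abs(test_bleu - last_val) < 1, f"val/test BLEU diverge: {last_val} vs {test_bleu}"
```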
Wdyt @stas00 ?<|||||>I will work on that, thank you.<|||||>@sshleifer, could you please validate that this is the [command you run](https://github.com/huggingface/transformers/issues/6049#issuecomment-721782069)?
I get very different (bad) results:
```
bs: 9.000000
loss: 6.701375
src_pad_frac: 0.118056
src_pad_tok: 68.000000
step_count: 2.000000
test_avg_bleu: 0.021700
test_avg_gen_len: 512.000000
test_avg_gen_time: 0.439663
test_avg_loss: 5.679669
test_bleu: 0.021700
test_loss: 5.679669
tpb: 1025.000000
val_avg_bleu: 0.082700
val_avg_gen_len: 512.000000
val_avg_gen_time: 0.483860
val_avg_loss: 5.404536
val_bleu: 0.082700
val_loss: 5.404536
```
In the OP you mentioned " sshleifer/student_marian_6_3" but here you used "sshleifer/mar_enro_6_3_student" - not sure if that's the difference.<|||||>Also for the second time you use `wmt_en_ro` instead of `test_data/wmt_en_ro` - do you use a different dataset?<|||||>Your spec on timing would be a small issue, since I get what you said 3min on your hw in 33secs (unoptimized rtx3090), so might have to re-test on CI. But again I'm not sure we are testing against the same dataset, since my results are terrible.<|||||>Retested with `sshleifer/student_marian_en_ro_6_3` and 5 epochs - still under < 1 bleu - so this is probably an issue of insufficient data and you must be using a different dataset.<|||||>I am using full dataset (as in README.md) <|||||>Ah, that explains it.
So run the slow test with the full dataset downloaded at runtime, right? <|||||>OK, I was able to reproduce your results with the full dataset, slightly under 3min and slightly better bleu scores. <|||||>Not sure if there is a point to it, but 7zip shaves off about 35% in download size (but CI might not have it installed).
```
-rw-rw-r-- 1 stas stas 58M Nov 4 11:40 wmt_en_ro.tar.gz
-rw-rw-r-- 1 stas stas 37M Nov 4 11:39 wmt_en_ro.7z
```
<|||||>Another way to save download time would be to only zip up 100k (or fewer) training examples, 500 val examples, 500 test examples. Those are all we use given the `--ntrain --nval --ntest` flags.
I would also check whether 10k/25k/50k meet the learning requirements.<|||||>While trying to match the suggested hparams to the ones in `train_mbart_cc25_enro.sh` I've been wondering - I think I'm missing the point of this whole thing - if the intention is to test a bash script with specific fixed hparams, but the test replaces half of these presets and adds quite a few new ones, how are we testing this script?
<|||||>Why do we use "--foo=bar" and "--foo bar" both seemingly totally at random - half the args are set the first way, the other half the other way.<|||||>question: do you want this as a new test or modify the existing `test_train_mbart_cc25_enro_script` - I'm trying to do the latter at the moment - perhaps that's why I'm questioning what do we test here.<|||||>The high level goal originally was to test that the bash scripts we check in work.
I have a secondary goal of making sure the training code is actually good at training models.
I am fine with any setup that accomplishes both of those tasks, with bonus points for enough traceback control that a developer could tell that they have made performance/vs. speed worse or some such.
As I slacked, we want a test to detect if we've regressed the training code. For example, if you set dropout=0.95 or freeze all parameters, or set the LR too low, or mess up the special tokens logic, the test should fail. Does that make sense? I didn't test all these for my command line, but it would be nice.
Relatedly, we just had a 2x slowdown in the `finetune_trainer.py` code that was not detected by unit-tests.
I know this is something of a scope expansion, so feel free to break it up/ignore parts as you see fit. I trust you to make reasonable decisions.<|||||>Thank you for this useful brain dump. Let's take it point by point.
1. the bash scripts
If a test rewrites the script's guts before doing the testing should we not just modify those scripts themselves - we want to test that the script works, so we should test it as is, with the only changes allowed in some numerical settings to make tests faster.
If we want different pre-sets for different purposes - then have a set of scripts rather then do-them-all in one?
2. Best regression tests are written when an actual regression is discovered because then you know exactly which side of things to put under the "magnifier glass". When another regression is discovered down the road a new test should be made that focuses just on that part. Over time one ends up with a great coverage and the test suite becomes strong. Trying to accomplish all of these in one test will over time lose the very specific setups that exposed very specific side of things. It also helps to annotate that this test solves a regression in this git sha, so that it flags to future developers to not try to refactor or slip extra checks or slightly change things in the existing test.
It's very difficult to think of all the possible things that could regress in the void, but surely it is a great start.
> Relatedly, we just had a 2x slowdown in the finetune_trainer.py code that was not detected by unit-tests.
That's great! Let's find a git sha before and after and write a test that detects that regression.
I hope this approach makes sense?
<|||||>Yeah you are right, let me try to isolate the bad commit https://github.com/huggingface/transformers/commits/master/examples/seq2seq
related issue: #8154 <|||||>I don't think there was an actual regression, I think my command lines are subtly different.
I still think the current test in the linked PR is more aggressive/better and should be added to the test suite in some form, but I am open to other opinions.<|||||>**edit**: reusing the same ouput_dir during debug is a terrible idea - it gives total bogus test results - basically remembers the very first run and generates test reports based on it all the subsequent times, ignoring the actual test results. Why is that?
I am growing to dislike `--overwrite_output_dir` - it's so ambiguous - but I guess it was never meant to be used as a debug flag.
This works for debug:
```
if DEBUG:
output_dir = self.get_auto_remove_tmp_dir("./xxx", before=True, after=False)
```
So after re-evaluating:
> Try to cut n_train/further
40k works.
25k w/ 2 epochs is almost there, but it's slower than adding a bit more data, so I went with 40k.
going with a subset "tr40k-va0.5k-te0.5k"<|||||>Created https://cdn-datasets.huggingface.co/translation/wmt_en_ro-tr40k-va0.5k-te0.5k.tar.gz - hope the name is intuitive - self-documenting. It's just 3.6M (vs 56M original)
I made it using this script:
https://github.com/stas00/porting/blob/master/transformers/translation/make-wmt_en_ro-subset.md
<|||||>In all these tests where we measure a relatively exact quality metrics - should we use a fixed seed? |
transformers | 6,048 | closed | examples/seq2seq/test_bash_script.py covers summarization | 07-27-2020 03:22:19 | 07-27-2020 03:22:19 | Hey @sshleifer, I would like to take this issue, if there no objection<|||||>Absolutely!
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
|
transformers | 6,047 | closed | Feed to forward new parameters as computed manually by update rule | Hi,
I would like to update the parameters of BertModel manually while training, without using an automatic optimizer to do it. I am trying meta-learning on BertModel, and if I use automatic optimization on both inner and outer loops I get in-place operation errors. I also tried deepcopying, but I cannot use deepcopy with DataParallel. So, I have opted for configuring my own manual optimization mechanism. How can I feed the new parameters I computed from the autograd of the loss with respect to the old parameters and the update rule into the forward pass of BertModel manually? Is it a feature that exists? Otherwise, is there a workaround?
Thanks, | 07-27-2020 03:09:23 | 07-27-2020 03:09:23 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
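For reference, a hedged sketch of one common workaround: compute the gradients yourself with `torch.autograd.grad` and apply the update rule under `no_grad` (the loss function and input tensors are placeholders, and this does not keep the inner-loop graph, so it is not a full MAML setup):
```python
import torch
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-uncased")
inner_lr = 1e-3

outputs = model(input_ids, attention_mask=attention_mask)  # input tensors assumed to exist
loss = my_loss_fn(outputs[0])                              # hypothetical loss function

params = [p for p in model.parameters() if p.requires_grad]
grads = torch.autograd.grad(loss, params)
with torch.no_grad():
    for param, grad in zip(params, grads):
        param -= inner_lr * grad  # manual SGD-style update rule
```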
|
transformers | 6,046 | closed | is_pretokenized seems to work incorrectly | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): roberta
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
I use `RobertaTokenizerFast` on pretokenized text, but the problem arises when I switch to the slow version too
The tasks I am working on is:
* an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
I am trying to implement sliding window for roberta
## To reproduce
I use the `tokenizer.tokenize(text)` method to tokenize a whole text (1-3 sentences), then divide the tokens into chunks and try to use the `__call__` method (I also tried `encode`) with the `is_pretokenized=True` argument, but this creates additional tokens (like 3 times more than there should be). I worked around this by using the `tokenize` -> `convert_tokens_to_ids` -> `prepare_for_model` -> `pad` pipeline, but I believe that batch methods should be faster and more memory efficient
Steps to reproduce the behavior:
0. `tokenizer = AutoTokenizer.from_pretrained('roberta-base', add_prefix_space=True, use_fast=True)`
1. `ex_text = 'long text'`
2. `tokens = tokenizer.tokenize(ex_text)`
3. `examples = [tokens[i:i+126] for i in range(0, len(tokens), 100)]`
4. `print(len(tokenizer(examples, is_pretokenized=True)['input_ids'][0])) # this prints more than 128`
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
I would expect to get result similar to result I get when I use
```
tokens = tokeniser.tokenize(ex_text)
inputs = tokenizer.convert_tokens_to_ids(tokens)
inputs = [inputs[i:i+126] for i in range(0, len(tokens), 100)]
inputs = [tokenizer.prepare_for_model(example) for example in inputs]
inputs = tokenizer.pad(inputs, padding='longest')
```
Am I doing something wrong or it's unexpected behaviour?
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.2
- Platform: MacOs
- Python version: 3.8.3
- PyTorch version (GPU?): 1.5.1 (no GPU)
- Tensorflow version (GPU?): NO
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: NO
EDIT:
I see that when I use `__call__` it actually treats ` Ġ` as 2 tokens:
`tokenizer(tokenizer.tokenize('How'), is_pretokenized=True)['input_ids']`
`out: [0, 4236, 21402, 6179, 2]` where 4236, 21402 is ` Ġ` | 07-26-2020 23:23:46 | 07-26-2020 23:23:46 | We face a similar issue with the distilbert tokenizer.
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-german-cased")
tokens = ['1980', 'kam', 'der', 'Crow', '##n', 'von', 'Toy', '##ota']
result = tokenizer.encode_plus(text=tokens,
text_pair=None,
add_special_tokens=True,
truncation=False,
return_special_tokens_mask=True,
return_token_type_ids=True,
is_pretokenized=True
)
result["input_ids"]
# returns:
[102,
3827,
1396,
125,
28177,
1634,
1634,
151,
195,
25840,
1634,
1634,
23957,
30887,
103]
tokenizer.decode(result["input_ids"])
# returns:
'[CLS] 1980 kam der Crow # # n von Toy # # ota [SEP]'
```
It seems that subword tokens (here ##n and ##ota) get split into further tokens even though we set `is_pretokenized=True`. This seems unexpected to me but maybe I am missing something?<|||||>As I mentioned before we used `is_pretokenized` to create sliding window, but recently discovered that this can be achieved using:
```
stride = max_seq_length - 2 - int(max_seq_length*stride)
tokenized_examples = tokenizer(examples, return_overflowing_tokens=True,
max_length=max_seq_length, stride=stride, truncation=True)
```
this returns `dict` with `input_ids`, `attention_mask` and `overflow_to_sample_mapping` (this helps to map between windows and example, but you should check for its presence, if you pass 1 short example it might not be there).
Hope this will help someone 🤗<|||||>I have the same issue as @tholor - there seem to be some nasty differences between slow and fast tokenizer implementations.<|||||>Just got the same issue with `bert-base-uncased`, However if when `is_pretokenized=False` it seems to be OK. Is this expected behaviour?
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
text = "huggingface transformers"
tok = tokenizer.tokenize(text)
print(tok)
# ['hugging', '##face', 'transformers']
output = tokenizer.encode_plus(tok, is_pretokenized=True)
tokenizer.convert_ids_to_tokens(output["input_ids"])
# ['[CLS]', 'hugging', '#', '#', 'face', 'transformers', '[SEP]']
```
when `is_pretokenized=False`
```python
output2 = tokenizer.encode_plus(tok, is_pretokenized=False)
tokenizer.convert_ids_to_tokens(output2["input_ids"])
# ['[CLS]', 'hugging', '##face', 'transformers', '[SEP]']
```
```<|||||>I believe that this issue can be closed because of the explanation in #6575 stating that `is_pretokenized` expects just a list of words split by whitespace, not actual tokens. So this is "kind of expected" behaviour :) |
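For completeness, a small sketch of the usage that explanation implies — pass whitespace-split words rather than the output of `tokenizer.tokenize` (behaviour as of the 3.x `is_pretokenized` API):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
words = "huggingface transformers".split()            # words, not subword tokens
output = tokenizer.encode_plus(words, is_pretokenized=True)
print(tokenizer.convert_ids_to_tokens(output["input_ids"]))
# expected: ['[CLS]', 'hugging', '##face', 'transformers', '[SEP]']
```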
transformers | 6,045 | open | Test BART's memory consumption | - this can run on GPU only and be marked `@slow`
- check how much memory bart is using at `__init__`
- assert that it doesn't use more than 110% of that.
- check how much memory bart uses on a single forward pass. (optionally test this in fp16).
- assert that it doesn't use more than 110% of that.
- check how much memory bart uses on a single forward and backward pass.
- assert that it doesn't use more than 110% of that.
### Bonus:
- add similar asserts for timing!
- let the test run and check memory on CPU (make sure that if pytest is run with `-n -8` the test still passes!
- add a test to `test_modeling_common.py` to make it easy for all models to test this.
- add a test to `test_modeling_common_tf.py` to make it easy for all TF models to test this.
The benchmarking utilities may help.
It may also help to use `torch.cuda.max_memory...`
@patrickvonplaten may have further thoughts!
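A minimal sketch of what such a test could look like (the baseline constant and helper are placeholders, and the CUDA peak-memory counters are assumed to be available on the test machine):
```python
import torch
from transformers import BartForConditionalGeneration

def peak_gpu_mb(fn):
    torch.cuda.empty_cache()
    torch.cuda.reset_peak_memory_stats()
    fn()
    return torch.cuda.max_memory_allocated() / 2 ** 20

model = BartForConditionalGeneration.from_pretrained("facebook/bart-base").cuda()
dummy_input = torch.ones((2, 64), dtype=torch.long, device="cuda")  # dummy token ids

fwd_mb = peak_gpu_mb(lambda: model(dummy_input))
assert fwd_mb < 1.1 * EXPECTED_FWD_MB  # EXPECTED_FWD_MB: a previously recorded baseline
```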
| 07-26-2020 21:24:39 | 07-26-2020 21:24:39 | @stas00 , this might be up your alley!<|||||>Excellent, will work on that.
Thank you, @sshleifer!<|||||>I will post my work in progress here, so others could chime in with ideas.
This notebook measures memory consumption and speed of `__init__`, a single fwd and fwd+bwd pass
https://colab.research.google.com/drive/1n6J3tc8FT4ER1vBCTAtU4__U5Px2mxwI?usp=sharing
The current difficulty is how to establish a baseline for memory and speed, so that it can be validated against in the future - detecting any regressions.
Several issues have been encountered:
1. there is a large discrepancy between results on different GPU cards. Currently tested my own Titan X and Colab's Tesla T4:
The following data is for `MODEL_ID = "sshleifer/tinier_bart"`
Memory:
| func | T4 | Titan X |
| --------|------|---------|
| init | 0 | 0 |
| fwd | 22MB | 62MB |
| fwd-bwd | 22MB | 70MB |
Currently only GPU-memory was measured and the initial loading of cudnn memory was subtracted. The latter varies hugely - ~600MB on Titan X and ~1GB on T4.
Exec speed:
| func | T4 | Titan X |
|---------|----------|----------|
| init | 0.05978s | 0.03808s |
| fwd | 0.00375s | 0.00273s |
| fwd-bwd | 0.00904s | 0.00563s |
Speed measurements are an average of 30 (!) runs.
The following data is for `MODEL_ID = "tacebook/bart-base"`
Memory:
| func | T4 | Titan X |
|---------|------|---------|
| init | 0 | 0 |
| fwd | 596MB | 636MB |
| fwd-bwd | 1576MB | 1624MB |
Exec speed:
| func | T4 | Titan X |
|---------|----------|----------|
| init | 3.28328s | 2.24100s |
| fwd | 0.01698s | 0.00998s |
| fwd-bwd | 0.03551s | 0.03039s |
2. Inconsistent results on the same card:
just one function of fwd+bwd on Tesla T4:
(memory is the same)
Exec time:
```
run 1: 0.00904s
run 2: 0.00769s
run 3: 0.00740s
run 4: 0.00768s
run 5: 0.00753s
```
the notebook was fully restarted for each measurement. Each report is an average of 30 runs.
Randomness is not fixed, but the input data is small and fake so it shouldn't make much of a difference.
So, given that most users have different cards, how can we approach making such a test that would work for everybody (or enough people to warrant this test's usefulness).
What would also help if you could download the notebook and share which measurement numbers you get and your card name. So that we could compare different cards. Though perhaps just one or 2 - the real testing will be on the full models and not the tiny ones. Thank you.<|||||>Is the variance similar for larger models?<|||||>I updated the comment above to add the same data for the full bart-base model.
The difference in memory seems to be "relatively" fixed - i.e. larger model still has about the same diff as the small one between two cards memory-wise.
Speed on the other hand has a much wider variation. Note, I added a comment that speed measurements are already an average of 30 re-runs. <|||||>Hey @stas00,
Thanks for running the benchmarks. As discussed, I think the way to go here is:
**1) GPU Memory:**
Slightly adapt the memory measurement to return 3 results:
- Required mem for CUDA/CUDNN init
- Required mem to run function (fwd or bwd)
- Sum of both these functions (this number should be the one that is returned now)
=> so we only need to add a function that measures how much memory is required to load CUDA/CUDNN kernels into GPU.
Then, I think we should report GPU mem requirement for a fixed GPU with fixed CUDA/CUDNN library (at the moment the CUDA/CUDNN version is not reported when running the benchmark, but they should be IMO) and make sure that these numbers are stable. Then we can loosen restrictions on different libraries and see how performance changes I guess and then finally compare on different GPUs.
2) Speed measurements:
My measurements on notebooks regarding speed also always varied a lot, but was very stable on one of our GPUs...not sure why the notebooks were much more unstable. I also think we can reduce 30 to something like 10, which should be fine and maybe always do 2-5 warmup runs (this is currently only done for TPU/torchscript)? What do you think?
Here we should again start with same libraries / same GPU IMO.<|||||>Your proposals sound good to me, @patrickvonplaten
Based on my experimentation and your input - it seems that stable speed measurements are the tricky part here.
It looks like this issue (including speed measurements) is now where the discussion is on: https://github.com/huggingface/transformers/issues/6218 So perhaps let's continue there and return to this issue once there is an updated API and then we can proceed tuning this one.
The complication here is that different cards give different results and we may have to do the painful process of maintaining different baselines for different cards.
<|||||>Unless someone would like to contribute some new ideas, this issue is stalled at the moment. The data is too divergent between different cards to have a solid test that also does a proper regression measurement. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>I will reopen this for now - as we have come back to it in https://github.com/huggingface/transformers/issues/9261<|||||>We are still just talking about how to approach this - time being the main missing element at the moment. |
transformers | 6,044 | closed | slow from_pretrained failures | github actions has [full traceback](https://github.com/huggingface/transformers/runs/910644181?check_suite_focus=true)
Failures are all like
```python
for model_name in BERT_PRETRAINED_MODEL_ARCHIVE_LIST[:1]:
config = BertConfig.from_pretrained(model_name)
self.assertIsNotNone(config)
self.assertIsInstance(config, PretrainedConfig)
model = BertModel.from_pretrained(model_name)
model, loading_info = BertModel.from_pretrained(model_name, output_loading_info=True)
self.assertIsNotNone(model)
self.assertIsInstance(model, PreTrainedModel)
for value in loading_info.values():
> self.assertEqual(len(value), 0)
E AssertionError: 1 != 0
```
@sgugger is that from your change?
| 07-26-2020 20:57:18 | 07-26-2020 20:57:18 | I didn't add any ignore missing key flag to BERT, so that's weird.<|||||>Cool. I'll take this one.<|||||>It's `position_ids`, from Morgan's change. Will fix.
```
WARNING transformers.modeling_utils:modeling_utils.py:885 Some weights of BertModel were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['bert.embeddings.position_ids']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
WARNING transformers.modeling_utils:modeling_utils.py:885 Some weights of BertModel were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['bert.embeddings.position_ids']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
|
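For context, a hedged sketch of how such a freshly-created buffer is typically whitelisted so that `loading_info` stays empty (assuming the `authorized_missing_keys` mechanism used in the library at the time):
```python
class BertPreTrainedModel(PreTrainedModel):
    # position_ids is a buffer created at __init__ time, so it is expected to be
    # absent from older checkpoints and should not show up as a missing key.
    authorized_missing_keys = [r"position_ids"]
```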
transformers | 6,043 | closed | docs(pretrained_models): fix num parameters | @julien-c This PR corrects the number of parameters of BERT based models.
Sometimes the difference between a given model and its pairs is important. For instance:
`bert-base-uncased` has **110M parameters** but `bert-base-multilingual-cased` has more than **178M parameters**, even if both models share the same architecture (12-layer, 768-hidden, 12-heads).
The difference is due to the vocabulary size:
`bert-base-uncased` uses a vocabulary of **30k** entries while `bert-base-multilingual-cased` uses a vocabulary of **119k** entries.
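A rough back-of-the-envelope check of where that gap comes from (the vocabulary sizes below are approximate):
```python
hidden_size = 768
extra_vocab = 119_547 - 30_522           # multilingual vs. English-only WordPiece vocab
print(extra_vocab * hidden_size / 1e6)   # ~68M extra embedding parameters, 110M + 68M ≈ 178M
```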
To compute the number of parameters:
``` python
from transformers import AutoModelForMaskedLM
bert_base = AutoModelForMaskedLM.from_pretrained('bert-base-uncased')
print(bert_base.num_parameters())
bert_multiling = AutoModelForMaskedLM.from_pretrained('bert-base-multilingual-cased')
print(bert_multiling.num_parameters())
``` | 07-26-2020 17:20:12 | 07-26-2020 17:20:12 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6043?src=pr&el=h1) Report
> Merging [#6043](https://codecov.io/gh/huggingface/transformers/pull/6043?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7f65daa2e155ecdd8594e19862dac8b322ed3b73&el=desc) will **increase** coverage by `0.06%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6043?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6043 +/- ##
==========================================
+ Coverage 79.57% 79.63% +0.06%
==========================================
Files 146 146
Lines 26597 26597
==========================================
+ Hits 21164 21181 +17
+ Misses 5433 5416 -17
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6043?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6043/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: |
| [src/transformers/tokenization\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6043/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `90.09% <0.00%> (+23.42%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6043?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6043?src=pr&el=footer). Last update [7f65daa...ee22cce](https://codecov.io/gh/huggingface/transformers/pull/6043?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>@LysandreJik I created a new Pull Request (https://github.com/huggingface/transformers/pull/7575) to resolve the above mentioned conflicts and start from a dedicated branch.
Thanks.
|
transformers | 6,042 | closed | Update README.md of my model | 07-26-2020 16:44:52 | 07-26-2020 16:44:52 | ||
transformers | 6,041 | closed | docs(pretrained_models): fix num parameters | Correct the number of parameters of BERT based models.
Sometimes the difference is important. For instance:
`bert-base-uncased` has **110M parameters** but `bert-base-multilingual-cased` has more than **178M parameters**, even if both models share the same architecture (12-layer, 768-hidden, 12-heads).
The difference is due to the vocabulary size:
`bert-base-uncased` uses a vocabulary of 30k entries while `bert-base-multilingual-cased` uses a vocabulary of 119k entries. | 07-26-2020 16:19:04 | 07-26-2020 16:19:04 | I created a new PR with the correct formatting: https://github.com/huggingface/transformers/pull/6043 |
transformers | 6,040 | closed | Draft Etalab QA model | Card for the model https://huggingface.co/etalab-ia/camembert-base-squadFR-fquad-piaf | 07-26-2020 14:21:08 | 07-26-2020 14:21:08 | Is this ready to merge? I'll merge now and feel free to update later (we'll make it way easier to iterate on model cards in the medium-term future)<|||||>(You can input model-specific examples to the inference QA widget if you need to, see https://huggingface.co/docs#how-can-i-control-my-models-widgets-example-inputs)<|||||>Top ! Thanks @julien-c ! Yes, I will update it with model-specific examples and fixing some typos also. |
transformers | 6,039 | closed | MarianMT - How to find out the actual names of the languages? - Only language-codes are available | # Question
## Information
I want to use the [MarianMT](https://huggingface.co/transformers/model_doc/marian.html) library for creating a translation application. There are lots of languages available with their language codes mentioned, such as "en" and "es", but nowhere can I find the actual names of the languages for each code. I know "en" is for English and "fr" for French, but what are "bg", "lu", and so many others that are available? [This resource](https://huggingface.co/Helsinki-NLP) doesn't mention the actual names either.
Is there any place where I can find the corresponding names for these codes? I tried using some online sources, but there seems to be a mismatch and I can't be sure if I got the match correct.
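(For reference, these are ISO 639 language codes; a minimal lookup sketch with a hand-made excerpt of the mapping — a library such as `pycountry` could provide the full table:)
```python
# a few entries copied by hand from the ISO 639-1 table
iso_639_1 = {"en": "English", "fr": "French", "es": "Spanish", "bg": "Bulgarian", "lu": "Luba-Katanga"}
print(iso_639_1.get("bg", "unknown code"))
```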
| 07-26-2020 10:42:57 | 07-26-2020 10:42:57 | https://huggingface.co/languages<|||||>@patil-suraj Thank you. |
transformers | 6,038 | closed | Rework TF trainer | This PR brings several improvements to the TensorFlow Trainer:
- make it more intuitive, with fewer small functions.
- should also make the trainer compliant with TPU.
- Fix the TensorFlow version to at least 2.2 for the trainer
- increase speed of the dataset preprocessing
@Colanim Can you please test this PR with TPU to see if it looks ok. | 07-26-2020 09:16:19 | 07-26-2020 09:16:19 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6038?src=pr&el=h1) Report
> Merging [#6038](https://codecov.io/gh/huggingface/transformers/pull/6038?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/dafa296c952c08fca3686f1cf8f3a8f8eb116744&el=desc) will **decrease** coverage by `0.88%`.
> The diff coverage is `9.23%`.
[](https://codecov.io/gh/huggingface/transformers/pull/6038?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6038 +/- ##
==========================================
- Coverage 78.80% 77.92% -0.89%
==========================================
Files 146 146
Lines 26325 26332 +7
==========================================
- Hits 20746 20519 -227
- Misses 5579 5813 +234
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6038?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/training\_args\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6038/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzX3RmLnB5) | `47.45% <0.00%> (ø)` | |
| [src/transformers/trainer\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6038/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `16.54% <8.66%> (+0.06%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6038/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `89.47% <100.00%> (+0.51%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6038/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6038/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.42% <0.00%> (-29.91%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6038/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.96% <0.00%> (-0.51%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6038/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.48% <0.00%> (+0.28%)` | :arrow_up: |
| [src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6038/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `94.97% <0.00%> (+4.41%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6038/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6038/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6038?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6038?src=pr&el=footer). Last update [dafa296...0e56209](https://codecov.io/gh/huggingface/transformers/pull/6038?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>@jplu I tried, but unfortunately I'm using a **TFRecord dataset**, which has an **unknown cardinality** ([doc link](https://www.tensorflow.org/api_docs/python/tf/data/experimental/cardinality)).
So I'm getting the following error:
```
ValueError: The training dataset must have an asserted cardinality
```<|||||>Yes, you now have to use `assert_cardinality` on your dataset, as is done in all the examples now.<|||||>I can rework it to make it more similar to the PT one, but I don't think they must be strictly the same, as they don't really work the same way... In any case I can rethink it :)
About the issue, I don't get what it means, so I asked them to elaborate more.<|||||>@jplu With my custom code I'm getting the following error:
```
TypeError: in user code:
/home/me/.venv/test/lib/python3.7/site-packages/transformers/trainer_tf.py:551 distributed_training_steps *
self.args.strategy.experimental_run_v2(apply_gradients, batch)
/home/me/.venv/test/lib/python3.7/site-packages/transformers/trainer_tf.py:531 apply_gradients *
reduced_features = features[: self.args.train_batch_size / self.args.n_replicas]
TypeError: unhashable type: 'slice'
```
Not sure if it's from my code or this PR though...<|||||>Humm, this is weird... I never got this error myself. How did you come to this, in order to try to replicate it?<|||||>@LysandreJik @sgugger Did I properly address all your comments? |
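For reference, a minimal sketch of the `assert_cardinality` fix mentioned above for a TFRecord dataset of unknown cardinality (the example count is an assumption you have to supply yourself):
```python
import tensorflow as tf

num_examples = 10_000  # must be known/computed for your TFRecord files
train_dataset = train_dataset.apply(tf.data.experimental.assert_cardinality(num_examples))
```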
transformers | 6,037 | closed | Model card for Vamsi/T5_Paraphrase_Paws | Creating a model card for my uploaded model on the transformers hub. | 07-26-2020 08:45:40 | 07-26-2020 08:45:40 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6037?src=pr&el=h1) Report
> Merging [#6037](https://codecov.io/gh/huggingface/transformers/pull/6037?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c69ea5efc4eac65b183e8d07b1bf91d20bbe0c8c&el=desc) will **increase** coverage by `0.01%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6037?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6037 +/- ##
==========================================
+ Coverage 78.50% 78.51% +0.01%
==========================================
Files 146 146
Lines 26249 26249
==========================================
+ Hits 20606 20610 +4
+ Misses 5643 5639 -4
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6037?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6037/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+1.00%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6037?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6037?src=pr&el=footer). Last update [c69ea5e...4cf27e8](https://codecov.io/gh/huggingface/transformers/pull/6037?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks! |
transformers | 6,036 | closed | don't complain about missing W&B when WANDB_DISABLED=true | I get a lot of crashes with W&B in transformers examples: https://github.com/huggingface/transformers/pull/5835 so I have to use `WANDB_DISABLED=true` - this PR removes a complaint that shouldn't be there when this env var is used. | 07-26-2020 00:45:04 | 07-26-2020 00:45:04 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6036?src=pr&el=h1) Report
> Merging [#6036](https://codecov.io/gh/huggingface/transformers/pull/6036?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c69ea5efc4eac65b183e8d07b1bf91d20bbe0c8c&el=desc) will **decrease** coverage by `1.18%`.
> The diff coverage is `50.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/6036?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6036 +/- ##
==========================================
- Coverage 78.50% 77.31% -1.19%
==========================================
Files 146 146
Lines 26249 26251 +2
==========================================
- Hits 20606 20297 -309
- Misses 5643 5954 +311
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6036?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/trainer\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6036/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `16.47% <0.00%> (-0.07%)` | :arrow_down: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6036/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `41.00% <100.00%> (+0.12%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6036/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6036/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.49% <0.00%> (+0.29%)` | :arrow_up: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6036/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+0.75%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6036/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6036?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6036?src=pr&el=footer). Last update [c69ea5e...f1463a0](https://codecov.io/gh/huggingface/transformers/pull/6036?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>LGTM<|||||>Note, the project has a failing test - I did 2 PR updates yesterday and today - both unrelated to my work
https://app.circleci.com/pipelines/github/huggingface/transformers/9539/workflows/15fcbc84-9cfe-4970-943f-935bce800c98/jobs/64765
another one from last night:
https://app.circleci.com/pipelines/github/huggingface/transformers/9527/workflows/73306d70-4190-48cd-b24a-b73619cd2002/jobs/64665/steps
Please kindly trigger a re-run of CI for this PR. Thank you.
<|||||>Yeah, the CI is flaky sometimes. Re-triggered the failing test.<|||||>> Yeah, the CI is flaky sometimes.
When it happens it's the same test that fails - perhaps it could be fixed?
> Re-triggered the failing test.
Thank you, @sgugger
<|||||>Yeah we try to fix those flaky tests as we catch them. Don't hesitate to report the name and logs on your own too. |
transformers | 6,035 | closed | add a summary report flag for run_examples on CI | Currently, it's hard to derive which example tests were run on CI, and which weren't. Adding the `-rA` flag to `pytest` will now produce a summary like:
```
==================================================================== short test summary info =====================================================================
PASSED examples/test_examples.py::ExamplesTests::test_generation
PASSED examples/test_examples.py::ExamplesTests::test_run_glue
PASSED examples/test_examples.py::ExamplesTests::test_run_language_modeling
PASSED examples/test_examples.py::ExamplesTests::test_run_squad
FAILED examples/test_examples.py::ExamplesTests::test_run_pl_glue - AttributeError: 'Namespace' object has no attribute 'gpus'
============================================================ 1 failed, 4 passed, 8 warnings in 42.96s ============================================================
```
which makes it easier to validate whether some example is being covered by CI or not.
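For reference, the same summary can also be requested when driving pytest from Python; a minimal illustrative sketch (the test path below is just an example, not the CI config itself):
```python
import pytest

# "-rA" asks pytest to print the outcome of every test (passed, failed, skipped)
# in the "short test summary info" section at the end of the run.
exit_code = pytest.main(["-rA", "examples/test_examples.py"])
print("pytest exit code:", exit_code)
```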
The PR came about following on the discussion at https://github.com/huggingface/transformers/pull/6027#issuecomment-663894148 | 07-25-2020 22:23:05 | 07-25-2020 22:23:05 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6035?src=pr&el=h1) Report
> Merging [#6035](https://codecov.io/gh/huggingface/transformers/pull/6035?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c69ea5efc4eac65b183e8d07b1bf91d20bbe0c8c&el=desc) will **increase** coverage by `0.01%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6035?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6035 +/- ##
==========================================
+ Coverage 78.50% 78.52% +0.01%
==========================================
Files 146 146
Lines 26249 26249
==========================================
+ Hits 20606 20611 +5
+ Misses 5643 5638 -5
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6035?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6035/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+1.25%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6035?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6035?src=pr&el=footer). Last update [c69ea5e...9bca1bc](https://codecov.io/gh/huggingface/transformers/pull/6035?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,034 | closed | add pl_glue example test | Currently PL glue example isn't being tested by CI. This PR fixes that.
https://github.com/huggingface/transformers/pull/6027
the added test currently fails (goodness) and fixes are being PR'ed in
1. https://github.com/huggingface/transformers/pull/6027
2. https://github.com/huggingface/transformers/pull/6028
At the moment this is a basic 'can-run' test - I will add an actual validation once the `pl_glue_run.py` script is made to work.
| 07-25-2020 20:38:50 | 07-25-2020 20:38:50 | Looks like this PR needs one of these merged first: https://github.com/huggingface/transformers/pull/6028 or https://github.com/huggingface/transformers/pull/6027, as `-n_gpus` is now required in `lightning_base.py`<|||||>No matter what I try CI can't get to acc/f1>=0.75, even though I get 1.0 on my machine. Suggestions?<|||||>OK, as suggested by @sshleifer I changed the PR to only test that `run_pl_glue.py` is able to run and complete its job. Will have to figure out quality testing another time. It's very inefficient to try to tune something that works just fine on my machine but fails on CI.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6034?src=pr&el=h1) Report
> Merging [#6034](https://codecov.io/gh/huggingface/transformers/pull/6034?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1bbc54a87c293761823ff1c1833a4f77353af9a9&el=desc) will **decrease** coverage by `0.05%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6034?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6034 +/- ##
==========================================
- Coverage 79.53% 79.47% -0.06%
==========================================
Files 148 148
Lines 27196 27196
==========================================
- Hits 21630 21614 -16
- Misses 5566 5582 +16
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6034?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6034/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `76.71% <0.00%> (-21.92%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6034/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.43% <0.00%> (-7.42%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6034/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.14% <0.00%> (-2.15%)` | :arrow_down: |
| [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6034/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |
| [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6034/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.07% <0.00%> (-0.45%)` | :arrow_down: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6034/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.00% <0.00%> (-0.41%)` | :arrow_down: |
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6034/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `38.73% <0.00%> (+5.16%)` | :arrow_up: |
| [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6034/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `95.23% <0.00%> (+10.71%)` | :arrow_up: |
| [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6034/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `92.17% <0.00%> (+25.21%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6034?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6034?src=pr&el=footer). Last update [1bbc54a...bf693dd](https://codecov.io/gh/huggingface/transformers/pull/6034?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>CI failure is totally unrelated. |
transformers | 6,033 | closed | Is there an easy way to access the multiple choice head of the RobertaForMultipleChoice? | Hello,
This question is specifically for the ```RobertaForMultipleChoice``` model.
I know how to extract the hidden state vectors from each layer of the ```RobertaForMultipleChoice``` model, but is there any way that I can directly feed the hidden state vectors of layer `j` as an input to the multiple-choice head of the model?
In other words, I would like to know if there is any way that I can control the input of the multiple-choice head of the ```RobertaForMultipleChoice``` model (or if there is any easy way that I can access the multiple-choice head directly).
Thank you, | 07-25-2020 18:24:16 | 07-25-2020 18:24:16 | There is no direct way to do this from the model, you will have to edit the code directly. You can just copy the code from the file `modeling_roberta.py` and then change the forward method of `RobertaForMultipleChoice`. You can then directly feed the hidden layer of your choice to the final layer.<|||||>Hello,
Would the code below do the job that I am looking for?
```python
# get the pre-trained HuggingFace RobertaForMultipleChoice and resize the token embeddings after adding the special token
best_model_roberta = RobertaForMultipleChoice.from_pretrained('roberta-base', output_hidden_states = True)
# for each layer j = 1,...,12, extract the hidden states at the layer j
hidden_states = best_model_roberta(input_ids=input_ids, token_type_ids=token_type_ids, attention_mask=attention_mask, labels=mc_labels)[2][j][:,:,:].detach()
mc_logits = best_model_roberta.classifier(hidden_states).detach()
```
Thank you,
<|||||>Yes, but if you detach the history, you won't be able to do a backward pass through this model.<|||||>Hello,
Thank you for your reply. If I want to calculate the mc_loss generated after inputting the hidden state vector directly into the multiple-choice head, would the code below be appropriate?
```python
import torch
from torch.nn import CrossEntropyLoss
from matplotlib import pyplot as plt
from transformers import RobertaTokenizer, RobertaForMultipleChoice, AdamW, get_constant_schedule
from transformers import PreTrainedTokenizer
import numpy as np
import pandas as pd
import pickle
import dill
from matplotlib.pyplot import plot, savefig, xlim, figure, ylim, legend, boxplot, setp, axes, xlabel, ylabel, xticks
import gc
import math
import time
from random import seed
from random import randint
import sys
import statistics
from numpy import nan
import scipy.stats as ss
# import the pre-trained HuggingFace RobertaTokenizer
tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
# get the encoding for the special tokens
pub2_pad_token_id = tokenizer.convert_tokens_to_ids(tokenizer.pad_token)
# sanity check
len(tokenizer)
# get the pre-trained HuggingFace RobertaForMultipleChoice and resize the token embeddings after adding the special token
best_model_roberta = RobertaForMultipleChoice.from_pretrained('roberta-base', output_hidden_states = True)
# Turn on the evaluation mode
best_model_roberta.eval()
# extract the hidden states at the layer 1
hidden_states = best_model_roberta(input_ids=input_ids, attention_mask=attention_mask, labels=mc_labels, output_hidden_states=True)[2][1][:,:,:].detach()
# access the multiple-choice head
mc_logits = best_model_roberta.classifier(hidden_states).detach()
# define the loss function
loss_fct = CrossEntropyLoss()
# calculate the mc_loss
mc_loss = loss_fct(mc_logits.view(1,-1), mc_labels)
```
The code above works without error, but I am particularly wondering if `mc_logits.view(1,-1)` is correct. The original HuggingFace code for RobertaForMultipleChoice uses `mc_logits.view(-1,num_choice)` to calculate the resulting error, but I am wondering if it is correct to specify `mc_logits.view(1,-1)` instead, if I am inputting the hidden state vectors directly into the multiple-choice head (i.e. instead of `pooled_output`...not sure what the `pooled_output` in the HuggingFace code is).
Thank you,
<|||||>I think you should use the pooler layer too, which is there to convert all your sequence tokens into one summary state. So
```python
pooled_output = self.roberta.pooler(hidden_state)
logits = self.classifier(pooled_output)
reshaped_logits = logits.view(-1, num_choices)
loss_fct = CrossEntropyLoss()
mc_loss = loss_fct(reshaped_logits, mc_labels)
``` |
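For readers following this thread, here is a rough sketch of the kind of subclass being suggested above: copy the multiple-choice head but feed it an intermediate hidden layer. It is illustrative only; the `layer_index` argument and the reuse of the existing pooler/classifier are assumptions, not the library's actual `forward` signature.
```python
from torch.nn import CrossEntropyLoss
from transformers import RobertaForMultipleChoice

class RobertaMCFromHiddenLayer(RobertaForMultipleChoice):
    # Illustrative only: score each choice from an intermediate encoder layer
    # instead of the last one, reusing the pretrained pooler and classifier head.
    def forward(self, input_ids, attention_mask=None, labels=None, layer_index=-1):
        num_choices = input_ids.shape[1]
        flat_ids = input_ids.view(-1, input_ids.size(-1))
        flat_mask = None if attention_mask is None else attention_mask.view(-1, attention_mask.size(-1))
        outputs = self.roberta(flat_ids, attention_mask=flat_mask, output_hidden_states=True)
        hidden_states = outputs[2][layer_index]        # (batch * num_choices, seq_len, hidden_size)
        pooled_output = self.roberta.pooler(hidden_states)
        reshaped_logits = self.classifier(pooled_output).view(-1, num_choices)
        loss = None if labels is None else CrossEntropyLoss()(reshaped_logits, labels)
        return loss, reshaped_logits
```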
transformers | 6,032 | closed | Create README.md | Adding model card - readme | 07-25-2020 17:55:38 | 07-25-2020 17:55:38 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6032?src=pr&el=h1) Report
> Merging [#6032](https://codecov.io/gh/huggingface/transformers/pull/6032?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c69ea5efc4eac65b183e8d07b1bf91d20bbe0c8c&el=desc) will **increase** coverage by `0.01%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6032?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6032 +/- ##
==========================================
+ Coverage 78.50% 78.51% +0.01%
==========================================
Files 146 146
Lines 26249 26249
==========================================
+ Hits 20606 20610 +4
+ Misses 5643 5639 -4
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6032?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6032/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+1.00%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6032?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6032?src=pr&el=footer). Last update [c69ea5e...cbb3748](https://codecov.io/gh/huggingface/transformers/pull/6032?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>@sgugger ! Hi Sylvain, Wanted to get the pull request for this model card approved. If you can take a look that would be great :) Thanks for your time.
Sincerely,
Ramsri |
transformers | 6,031 | closed | Error with batch_encode_plus | # 🐛 Bug
## Information
When using the tokenizer method `batch_encode_plus` on the new `transformers` version, I run into an error
Model I am using (Bert, XLNet ...): BertTokenizer
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Here is a sample script to reproduce the error:
```python
import torch
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
text = [['sample text 1'], ['sample text 2']]
encode = tokenizer.batch_encode_plus(text)
```
The stack trace is:
```
Traceback (most recent call last):
File "bug.py", line 8, in <module>
encode = tokenizer.batch_encode_plus(text)
File "/home/nithinh/debug/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 1832, in batch_encode_plus
**kwargs,
File "/home/nithinh/debug/lib/python3.6/site-packages/transformers/tokenization_utils.py", line 534, in _batch_encode_plus
ids, pair_ids = ids_or_pair_ids
ValueError: not enough values to unpack (expected 2, got 1)
```
## Expected behavior
Successfully tokenize without errors. The script runs fine with `transformers` v2.11.0.
## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-4.19.0-9-amd64-x86_64-with-debian-10.4
- Python version: 3.6.3
- PyTorch version (GPU?): 1.5.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 07-25-2020 10:02:16 | 07-25-2020 10:02:16 | Hi, if you want to encode as a sentence pair then you should do it like this
```python3
[ ["sent1", "sent2"] ]
```
else just pass the examples in a single list
```python3
["example1", "example2"]
```
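A slightly fuller sketch of the two call shapes, assuming a BERT tokenizer (illustrative only):
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# A batch of two independent sentences: a flat list of strings.
batch = tokenizer.batch_encode_plus(["sample text 1", "sample text 2"])

# A batch containing one sentence *pair*: each element is a (text, text_pair) tuple.
pairs = tokenizer.batch_encode_plus([("sample text 1", "sample text 2")])

print(len(batch["input_ids"]), len(pairs["input_ids"]))  # 2 and 1
```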
Also, consider switching to the new tokenizer API. You can find the docs [here](https://huggingface.co/transformers/preprocessing.html)<|||||>@patil-suraj I didn't intend to encode it as a sentence pair, but rather a batch of single sentences of size 2.<|||||>Then you can just pass a list of examples. No need for nested list<|||||>Okay, thanks! |
transformers | 6,030 | closed | create model-card for lordtt13/emo-mobilebert | 07-25-2020 09:39:39 | 07-25-2020 09:39:39 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6030?src=pr&el=h1) Report
> Merging [#6030](https://codecov.io/gh/huggingface/transformers/pull/6030?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c69ea5efc4eac65b183e8d07b1bf91d20bbe0c8c&el=desc) will **increase** coverage by `0.14%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6030?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6030 +/- ##
==========================================
+ Coverage 78.50% 78.64% +0.14%
==========================================
Files 146 146
Lines 26249 26249
==========================================
+ Hits 20606 20643 +37
+ Misses 5643 5606 -37
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6030?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6030/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6030/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.20% <0.00%> (-2.26%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6030/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.53% <0.00%> (+69.51%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6030?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6030?src=pr&el=footer). Last update [c69ea5e...fa74b6b](https://codecov.io/gh/huggingface/transformers/pull/6030?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Interesting project, thanks for sharing. **[Model page](https://huggingface.co/lordtt13/emo-mobilebert)** |
|
transformers | 6,029 | closed | tensorflow bert model can't return all hidden_states | ```
import tensorflow as tf
import tensorflow_datasets
from transformers import *
from tensorflow.keras import layers
configuration = BertConfig.from_pretrained('bert-base-cased', output_hidden_states=True)
# Load dataset, tokenizer, model from pretrained model/vocabulary
tokenizer = BertTokenizer.from_pretrained('bert-base-cased', config=configuration)
# BERT encoder
encoder = TFBertModel.from_pretrained('bert-base-cased', config=configuration)
# Model
input_ids = layers.Input(shape=(100,), dtype=tf.int32)
token_type_ids = layers.Input(shape=(100,), dtype=tf.int32)
attention_mask = layers.Input(shape=(100,), dtype=tf.int32)
outputs = encoder(
input_ids, token_type_ids=token_type_ids, attention_mask=attention_mask
)
print(outputs)
_, _, hidden_states = outputs[0], outputs[1], outputs[2]
```
output:
```
If your task is similar to the task the model of the ckeckpoint was trained on, you can already use TFBertModel for predic
(<tf.Tensor 'tf_bert_model_2/Identity:0' shape=(None, 100, 768) dtype=float32>, <tf.Tensor 'tf_bert_model_2/Identity_1:0' shape=(None, 768) dtype=float32>)
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-7-63944e137dcd> in <module>()
17 )
18 print(outputs)
---> 19 _, _, hidden_states = outputs[0], outputs[1], outputs[2]
IndexError: tuple index out of range
```
You can check it:
[colab code](https://colab.research.google.com/drive/1zZo2SwBuGoCHsbvL3_fVR5rygWiuR16-?usp=sharing) | 07-25-2020 09:38:28 | 07-25-2020 09:38:28 | I'm encountering the exact same error - and it also happens when trying to output the attention keys.
I would really like to know where the problem comes from as well! <|||||>@sshleifer <|||||>@julien-c <|||||>transformers==2.7.0 solved the issue in my case<|||||>> transformers==2.7.0 solved the issue in my case
Thanks a lot. It solved my case too<|||||>Thank you so much @VietHoang1710, worked for me - The same problem with tensorflow roberta model, lost a lot of time to this one!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 6,028 | closed | examples/text-classification/run_pl.sh multiple problems | Fixing this sequence of errors - each fix required for the next error
running:
```
cd examples/text-classification
./run_pl.sh
```
error 1:
```
Traceback (most recent call last):
File "run_pl_glue.py", line 183, in <module>
trainer = generic_train(model, args)
File "/mnt/nvme1/code/huggingface/transformers-issue-1/examples/lightning_base.py", line 289, in generic_train
if args.gpus > 1:
AttributeError: 'Namespace' object has no attribute 'gpus'
```
solution: added `--n_gpus` arg
error 2:
```
Traceback (most recent call last):
File "run_pl_glue.py", line 183, in <module>
trainer = generic_train(model, args)
File "/mnt/nvme1/code/huggingface/transformers-issue-1/examples/lightning_base.py", line 300, in generic_train
**train_params,
File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 853, in from_argparse_args
return cls(**trainer_kwargs)
File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 468, in __init__
self.tpu_cores = _parse_tpu_cores(tpu_cores)
File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 526, in _parse_tpu_cores
raise MisconfigurationException("`tpu_cores` can only be 1, 8 or [<1-8>]")
pytorch_lightning.utilities.exceptions.MisconfigurationException: `tpu_cores` can only be 1, 8 or [<1-8>]
```
solution: removed `default=0` for `tpu_cores`
error 3:
```
Traceback (most recent call last):
File "run_pl_glue.py", line 183, in <module>
trainer = generic_train(model, args)
File "/mnt/nvme1/code/huggingface/transformers-issue-1/examples/lightning_base.py", line 304, in generic_train
trainer.fit(model)
File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1038, in fit
model.setup('fit')
File "/mnt/nvme1/code/huggingface/transformers-issue-1/examples/lightning_base.py", line 125, in setup
dataloader = self.get_dataloader("train", train_batch_size)
File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/nn/modules/module.py", line 594, in __getattr__
type(self).__name__, name))
AttributeError: 'GLUETransformer' object has no attribute 'get_dataloader'
```
solution: added a wrapper - but it's incomplete - what to do with the `shuffle` arg?
error 4:
```
Traceback (most recent call last):
File "run_pl_glue.py", line 187, in <module>
trainer = generic_train(model, args)
File "/mnt/nvme1/code/huggingface/transformers-issue-1/examples/lightning_base.py", line 306, in generic_train
trainer.fit(model)
File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1044, in fit
results = self.run_pretrain_routine(model)
File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1213, in run_pretrain_routine
self.train()
File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 370, in train
self.run_training_epoch()
File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 452, in run_training_epoch
batch_output = self.run_training_batch(batch, batch_idx)
File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 632, in run_training_batch
self.hiddens
File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 776, in optimizer_closure
hiddens)
File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 956, in training_forward
output = self.model.training_step(*args)
File "run_pl_glue.py", line 44, in training_step
tensorboard_logs = {"loss": loss, "rate": self.lr_scheduler.get_last_lr()[-1]}
File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/nn/modules/module.py", line 594, in __getattr__
type(self).__name__, name))
AttributeError: 'GLUETransformer' object has no attribute 'lr_scheduler'
```
solution: I'm not sure how it used to work, but there is no `self.lr_scheduler` in pytorch-lightning (PL), I found one here: `self.trainer.lr_schedulers[0]["scheduler"]` and set this attribute. I have no idea whether this always works. Someone who wrote this script would probably know better where the missing attribute has gone. It's set inside `def fit`/CPU but inside the `trainer` object and not `nn.Module`.
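A minimal sketch of that workaround as I understand it (the hook name and class name below are guesses for illustration, not the final patch):
```python
import pytorch_lightning as pl

class GLUETransformerPatched(pl.LightningModule):
    # Illustrative only: once fit() has built the schedulers, re-expose the first
    # one under the attribute name the existing training_step/logging code expects.
    def on_train_start(self):
        self.lr_scheduler = self.trainer.lr_schedulers[0]["scheduler"]
```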
Further notes:
`run_pl.sh` invokes PL in CPU mode, despite available GPU. I haven't tested this on gpu yet - I just saw during debug that PL [inits optimizers](https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/trainer/trainer.py#L1096) just before it runs `run_pretrain_routine`, so I didn't find an easy PL predefined method where one could preset `self.lr_scheduler`.
Perhaps PL API has changed and caused this issue?
error 5:
```
Traceback (most recent call last):
File "run_pl_glue.py", line 218, in <module>
trainer = generic_train(model, args)
File "/mnt/nvme1/code/huggingface/transformers-issue-1/examples/lightning_base.py", line 305, in generic_train
trainer.fit(model)
File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1044, in fit
results = self.run_pretrain_routine(model)
File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1213, in run_pretrain_routine
self.train()
File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 370, in train
self.run_training_epoch()
File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 452, in run_training_epoch
batch_output = self.run_training_batch(batch, batch_idx)
File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 671, in run_training_batch
self.on_batch_end()
File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/pytorch_lightning/trainer/callback_hook.py", line 82, in on_batch_end
callback.on_batch_end(self, self.get_model())
File "/mnt/nvme1/code/huggingface/transformers-issue-1/examples/lightning_base.py", line 198, in on_batch_end
lrs = {f"lr_group_{i}": lr for i, lr in enumerate(self.lr_scheduler.get_lr())}
AttributeError: 'LoggingCallback' object has no attribute 'lr_scheduler'
```
solution: see notes for error 4.
With these fixes the code at least starts training. I didn't test further, since clearly there is a better way. Only the fixes for the first 2 errors are obviously correct to merge.
All the fixes are in one PR, as one can't move to the next error before fixing the previous ones.
| 07-25-2020 03:21:16 | 07-25-2020 03:21:16 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6028?src=pr&el=h1) Report
> Merging [#6028](https://codecov.io/gh/huggingface/transformers/pull/6028?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c69ea5efc4eac65b183e8d07b1bf91d20bbe0c8c&el=desc) will **increase** coverage by `0.18%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6028?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6028 +/- ##
==========================================
+ Coverage 78.50% 78.68% +0.18%
==========================================
Files 146 146
Lines 26249 26249
==========================================
+ Hits 20606 20655 +49
+ Misses 5643 5594 -49
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6028?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6028/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6028/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+0.75%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6028/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.53% <0.00%> (+69.51%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6028?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6028?src=pr&el=footer). Last update [c69ea5e...414fbe9](https://codecov.io/gh/huggingface/transformers/pull/6028?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Looks like both of us worked on this at the same time - a different solution - https://github.com/huggingface/transformers/pull/6027<|||||>The code of this PR eventually got in the hard way via 3 PRs: https://github.com/huggingface/transformers/pull/6027, https://github.com/huggingface/transformers/pull/6307 and https://github.com/huggingface/transformers/pull/6314 |
transformers | 6,027 | closed | [Fix] text-classification PL example | The text-classification example needed several edits to get it working. The main one was that the hparams are loaded as a dict instead of a Namespace object from the checkpoint, so this needed to be fixed by recasting the hparams to a Namespace object.
Though this is not the ideal solution, it works for now.
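A rough sketch of the recast being described (illustrative; the actual patch may differ):
```python
from argparse import Namespace

def ensure_namespace(hparams):
    # Older Lightning checkpoints hand hyperparameters back as a plain dict,
    # while the example code expects attribute-style access (hparams.foo).
    return hparams if isinstance(hparams, Namespace) else Namespace(**hparams)
```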
I also have some other fixes such as gpus argument which needed to be added to the generic arguments list in the `lightning_base.py` and removing the default value for `n_tpu_cores`. And the lr_scheduler was not accessed by the logging callback correctly. These have been all fixed and the example works correctly. | 07-25-2020 02:02:17 | 07-25-2020 02:02:17 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6027?src=pr&el=h1) Report
> Merging [#6027](https://codecov.io/gh/huggingface/transformers/pull/6027?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e168488a74a41db0eddfa4699239c6f7b301c933&el=desc) will **increase** coverage by `1.04%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6027?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6027 +/- ##
==========================================
+ Coverage 77.46% 78.50% +1.04%
==========================================
Files 146 146
Lines 26243 26243
==========================================
+ Hits 20330 20603 +273
+ Misses 5913 5640 -273
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6027?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6027/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6027/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6027/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+1.25%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6027/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6027/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6027?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6027?src=pr&el=footer). Last update [e168488...4fbcde5](https://codecov.io/gh/huggingface/transformers/pull/6027?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>> Why wasn't any of this breaking tests? Is there some test coverage we could add to find these sorts of errors earlier?
`run_pl_glue.py` isn't being tested. It's not in [`test_examples.py`](https://github.com/huggingface/transformers/blob/master/examples/test_examples.py) and it doesn't have a dedicated test file like some other examples do.
Besides, looking at the output for examples test:
https://circleci.com/gh/huggingface/transformers/64559?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link
it's impossible to tell which examples are being run and which aren't. It will only indicate the name on failure. Perhaps at the very least adding a pytest option so that it can announce which tests were run? Submitted PR to do just that: https://github.com/huggingface/transformers/pull/6035
<|||||>Here is a PR that adds the missing PL glue test: https://github.com/huggingface/transformers/pull/6034 (which obviously fails by CI - a good thing).
<|||||>@stas00 you can at least see all the files that are run with
`ls examples/**/test*.py`. <|||||>> @stas00 you can at least see all the files that are run with
> `ls examples/**/test*.py`.
you want one dir up as well, so:
`ls -1 examples/test*.py examples/*/test*.py`
but it tells only part of the story, since most info is hidden in `examples/test_examples.py`. e.g. you can't tell pl glue is not being there.
<|||||>How does this break tf tests? Looks like the model save still has issues with the state_dict its saving.<|||||>TF failures are spurious.<|||||>Merging this, thanks @bhashithe, @stas00 @laibamehnaz and everyone else who helped!<|||||>The merged https://github.com/huggingface/transformers/pull/6027 broke `examples/seq2seq/test_seq2seq_examples.py::test_finetune_lr_shedulers` - which I think was flagged by failing CI of that PR.
yeah, PL already has `--gpus` - so it conflicts with the one added by 6027. So I will look at how to rework that need in a different way.<|||||>Let's continue the discussion here: https://github.com/huggingface/transformers/issues/6310 |
transformers | 6,026 | closed | Fix tokenizer saving and loading error | This PR is to address #6025 | 07-25-2020 00:15:45 | 07-25-2020 00:15:45 | > This is great! Do you mind adding a test under `tests/test_tokenization_common.py` so that we may ensure this doesn't fail in the future?
Hi I added into an existing test. Please check if it's appropriate.<|||||>Hello! I added a more robust test and fixed the style issue. Thanks a lot for your contribution, merging as soon as all the tests show green! |
transformers | 6,025 | closed | Failed to save tokenizer with AddedToken in additional_special_tokens | # 🐛 Bug
I tried to add new special tokens to some tokenizer. I wanted them to be `AddedToken` in order to handle specific whitespace stripping. But I got the following error when saving the tokenizer.
## To reproduce
Steps to reproduce the behavior:
```python
>>> from transformers import BertTokenizer
>>> from tokenizers import AddedToken
>>> tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
>>> new_token = AddedToken('new_token', lstrip=True)
>>> tokenizer.add_special_tokens({'additional_special_tokens': [new_token]})
1
>>> tokenizer.save_pretrained('.')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/junyuan/anaconda3/envs/deep/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 1371, in save_pretrained
f.write(json.dumps(write_dict, ensure_ascii=False))
File "/home/junyuan/anaconda3/envs/deep/lib/python3.7/json/__init__.py", line 238, in dumps
**kw).encode(obj)
File "/home/junyuan/anaconda3/envs/deep/lib/python3.7/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/home/junyuan/anaconda3/envs/deep/lib/python3.7/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "/home/junyuan/anaconda3/envs/deep/lib/python3.7/json/encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type AddedToken is not JSON serializable
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
| 07-25-2020 00:14:05 | 07-25-2020 00:14:05 | Fixed by #6026 |
transformers | 6,024 | closed | Feed forward chunking | Official PR for #5928 | 07-24-2020 20:00:09 | 07-24-2020 20:00:09 | @patrickvonplaten - here's an initial implementation I have. My first step is to get the model to work with chunked feed forward - and it works! I still need to run the benchmark test to find out the benefits in terms of memory.
However, I see a problem. The new architecture causes some of the nn.Module weights and bias parameter-names to change - which would be a problem with loading existing pretrained weights from checkpoints.
For example:
`bert.encoder.layer.0.intermediate.dense.weight` --> becomes `bert.encoder.layer.0.feed_forward.dense.dense.weight`
See the failing tests for more details. Any thoughts/ideas on how to get around this?
<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6024?src=pr&el=h1) Report
> Merging [#6024](https://codecov.io/gh/huggingface/transformers/pull/6024?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/175cd45e13b2e33d1efec9e2ac217cba99f6ae58&el=desc) will **decrease** coverage by `0.31%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/6024?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6024 +/- ##
==========================================
- Coverage 79.44% 79.12% -0.32%
==========================================
Files 148 148
Lines 27193 27198 +5
==========================================
- Hits 21604 21521 -83
- Misses 5589 5677 +88
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6024?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6024/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `100.00% <ø> (ø)` | |
| [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6024/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.57% <100.00%> (+0.02%)` | :arrow_up: |
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6024/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.49% <100.00%> (+0.09%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6024/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `69.31% <0.00%> (-26.18%)` | :arrow_down: |
| [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6024/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `76.71% <0.00%> (-21.92%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6024/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.43% <0.00%> (-7.42%)` | :arrow_down: |
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6024/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `38.73% <0.00%> (-3.76%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6024/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.14% <0.00%> (-2.15%)` | :arrow_down: |
| [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6024/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |
| [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6024/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.07% <0.00%> (-0.45%)` | :arrow_down: |
| ... and [4 more](https://codecov.io/gh/huggingface/transformers/pull/6024/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6024?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6024?src=pr&el=footer). Last update [175cd45...406d621](https://codecov.io/gh/huggingface/transformers/pull/6024?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>hh<|||||>Hey @Pradhy729, thanks a lot for continuing the PR. I made a couple of changes: fixed the docs and added tests for all models, whereas only Reformer and Bert tests are on for now.
Would be great if @LysandreJik @sgugger @thomwolf @sshleifer can review.
This PR shows how `feed_forward_chunking` can be employed for all models. Feed forward chunking is explained here: https://huggingface.co/blog/reformer#2-chunked-feed-forward-layers in combination with some benchmarking. It can give good memory improvements for certain model architectures.
For Bert a test is added showing that the model gives equal results. This function can easily be added to other models, the same way it was done for BERT. There is no real drawback in implementing this IMO.
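For readers unfamiliar with the idea, a toy sketch of chunked feed-forward (illustrative only; the function and argument names are made up, not the library's API):
```python
import torch

def chunked_feed_forward(forward_fn, hidden_states, chunk_size, seq_dim=1):
    # Apply the position-wise feed-forward layer to slices along the sequence
    # dimension and re-concatenate: the output is identical, but the peak
    # activation memory of the intermediate layer is reduced.
    if chunk_size <= 0:
        return forward_fn(hidden_states)
    chunks = hidden_states.split(chunk_size, dim=seq_dim)
    return torch.cat([forward_fn(chunk) for chunk in chunks], dim=seq_dim)
```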
**To-Do after review is positive**:
1. Add feed forward chunking to more models. @Pradhy729, feel free to add it to as many models as you want. The rest can also be added in a new PR or we open a "first good issue" for it.
2. Add feed forward chunking for language modeling loss. Chunking of feed forward layers in the attention block is often not really helpful to save memory - only if the model has very few attention heads. On the other hand, a real bottleneck is often the last word embedding layer. When training the loss does not have to be calculated in one huge batch (over time dim), but can be chunked the same way it is done here for Feed forward layers. This is not even really implemented in Reformer yet and would definitely require a new PR.<|||||>Great! Thanks @patrickvonplaten
Will wait for reviewers and start working on the others.
|
transformers | 6,023 | closed | Remove unused file | This seems to be the old script to deploy the docs, new one is `.circleci/deploy.sh`. | 07-24-2020 19:20:32 | 07-24-2020 19:20:32 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6023?src=pr&el=h1) Report
> Merging [#6023](https://codecov.io/gh/huggingface/transformers/pull/6023?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a884b7fa38de9082a6f3f7889b9f7348a8dadbf5&el=desc) will **decrease** coverage by `1.37%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6023?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6023 +/- ##
==========================================
- Coverage 78.68% 77.31% -1.38%
==========================================
Files 146 146
Lines 26249 26249
==========================================
- Hits 20655 20295 -360
- Misses 5594 5954 +360
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6023?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6023/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6023/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6023/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6023/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6023?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6023?src=pr&el=footer). Last update [a884b7f...0cb4788](https://codecov.io/gh/huggingface/transformers/pull/6023?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,022 | closed | Fix the return documentation rendering for all model outputs | All PyTorch model outputs are documented from their output types. A problem is that just using the docstrings of the output class doesn't render properly on sphinx (this was also the case before the new model outputs were introduced).
This PR adds a function that converts the args part of the docstrings of the output class to render properly on our doc. You can see the transformation by looking [here](https://huggingface.co/transformers/master/model_doc/bert.html#transformers.BertModel.forward) for the docs before this PR and [here](https://64423-155220641-gh.circle-artifacts.com/0/docs/_build/html/model_doc/bert.html#transformers.BertModel.forward) for after it's merged (it's one example, but it will give the same result for all models). Scroll a bit to get to the return part of the doc. | 07-24-2020 18:05:20 | 07-24-2020 18:05:20 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6022?src=pr&el=h1) Report
> Merging [#6022](https://codecov.io/gh/huggingface/transformers/pull/6022?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3996041d0ae23ce23dfb8a343e6344f2f8d54c16&el=desc) will **increase** coverage by `0.40%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/6022?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6022 +/- ##
==========================================
+ Coverage 78.29% 78.70% +0.40%
==========================================
Files 146 146
Lines 26249 26268 +19
==========================================
+ Hits 20552 20674 +122
+ Misses 5697 5594 -103
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6022?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6022/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `79.12% <ø> (ø)` | |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6022/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.20% <100.00%> (+1.00%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6022/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6022/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (ø)` | |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6022/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.25% <0.00%> (+1.29%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6022/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6022?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6022?src=pr&el=footer). Last update [3996041...345372d](https://codecov.io/gh/huggingface/transformers/pull/6022?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,021 | closed | [CI] Don't test apex | 07-24-2020 17:04:46 | 07-24-2020 17:04:46 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6021?src=pr&el=h1) Report
> Merging [#6021](https://codecov.io/gh/huggingface/transformers/pull/6021?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3996041d0ae23ce23dfb8a343e6344f2f8d54c16&el=desc) will **increase** coverage by `0.39%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6021?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6021 +/- ##
==========================================
+ Coverage 78.29% 78.68% +0.39%
==========================================
Files 146 146
Lines 26249 26249
==========================================
+ Hits 20552 20655 +103
+ Misses 5697 5594 -103
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6021?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6021/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6021/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (ø)` | |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6021/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.25% <0.00%> (+1.29%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6021/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6021?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6021?src=pr&el=footer). Last update [3996041...14292a2](https://codecov.io/gh/huggingface/transformers/pull/6021?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 6,020 | closed | Create README.md | README.md for my model
https://huggingface.co/rdenadai/BR_BERTo | 07-24-2020 15:29:48 | 07-24-2020 15:29:48 | 👍
Would you like to add example inputs for all models for `pt`? If you do, please open a PR against this file: https://github.com/huggingface/widgets-server/blob/master/DefaultWidget.ts
Thanks:) |
transformers | 6,019 | closed | Update the new model template | The model template files were old and did not use any of the recent improvements (output attentions, output hidden states, docstrings refactored...)
This PR fixes them and adds a few more instructions.
Tagging @LysandreJik, @patrickvonplaten, @thomwolf and @julien-c : We have to make sure that whenever we make a substantial change that applies to all model, this template is also updated. | 07-24-2020 15:14:35 | 07-24-2020 15:14:35 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6019?src=pr&el=h1) Report
> Merging [#6019](https://codecov.io/gh/huggingface/transformers/pull/6019?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/778e635fc900e5afc90e5f8f4ca74b5d5fc2976a&el=desc) will **increase** coverage by `1.37%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6019?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6019 +/- ##
==========================================
+ Coverage 77.31% 78.68% +1.37%
==========================================
Files 146 146
Lines 26249 26249
==========================================
+ Hits 20295 20655 +360
+ Misses 5954 5594 -360
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6019?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6019/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6019/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6019/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.53% <0.00%> (+69.51%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6019/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6019?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6019?src=pr&el=footer). Last update [778e635...b79103c](https://codecov.io/gh/huggingface/transformers/pull/6019?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,018 | closed | seq2seq/finetune.py can take config train_params through command line | - this might belong in lightning_base
- check how/if trainer does it.
Desired API:
```bash
python finetune.py --encoder_layerdrop 0.1 --decoder_layerdrop 0.1 --dropout 0.1 --attention_dropout 0.1
``` | 07-24-2020 14:38:23 | 07-24-2020 14:38:23 | @stas00 this might be up your alley.<|||||>Added to my todo queue - thank you, @sshleifer<|||||>https://github.com/huggingface/transformers/pull/6149 |
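For context, a minimal illustrative sketch (not from this issue) of how such flags could be wired through argparse and copied into a model config; the flag names simply mirror the desired API above, and the `config` object is assumed to already be loaded elsewhere:

```python
import argparse

# Hypothetical list of config fields to expose on the command line.
CONFIG_OVERRIDES = ["encoder_layerdrop", "decoder_layerdrop", "dropout", "attention_dropout"]

parser = argparse.ArgumentParser()
for name in CONFIG_OVERRIDES:
    # default=None means "leave the pretrained config value untouched"
    parser.add_argument(f"--{name}", type=float, default=None)
args = parser.parse_args()

# Later, after the config is loaded from the pretrained checkpoint (assumption):
# for name in CONFIG_OVERRIDES:
#     value = getattr(args, name)
#     if value is not None:
#         setattr(config, name, value)
```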
transformers | 6,017 | closed | convert_bart script should support mbart through command line. | Desired command:
```convert_mbart_checkpoint.py --checkpoint_dir --pytorch_dump_path```
This should probably also convert the tokenizer. | 07-24-2020 14:35:40 | 07-24-2020 14:35:40 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 6,016 | closed | Create model card for RuPERTa-base | 07-24-2020 14:31:54 | 07-24-2020 14:31:54 | Feel free to apply the change<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6016?src=pr&el=h1) Report
> Merging [#6016](https://codecov.io/gh/huggingface/transformers/pull/6016?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3996041d0ae23ce23dfb8a343e6344f2f8d54c16&el=desc) will **increase** coverage by `0.22%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6016?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6016 +/- ##
==========================================
+ Coverage 78.29% 78.51% +0.22%
==========================================
Files 146 146
Lines 26249 26249
==========================================
+ Hits 20552 20610 +58
+ Misses 5697 5639 -58
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6016?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6016/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6016/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6016/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.25%)` | :arrow_up: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6016/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.25% <0.00%> (+1.29%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6016/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6016/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6016?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6016?src=pr&el=footer). Last update [3996041...a80f84b](https://codecov.io/gh/huggingface/transformers/pull/6016?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 6,015 | closed | Tf trainer cleanup | This follows up from #5982 and does the same thing for `TFTrainer`.
Basically no code is changed, just moved around and the customization points are made public and explicit to the user. Starting to make the docs better but I will do it properly (for both Trainer and TFTrainer) after this PR is merged. | 07-24-2020 14:00:22 | 07-24-2020 14:00:22 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6015?src=pr&el=h1) Report
> Merging [#6015](https://codecov.io/gh/huggingface/transformers/pull/6015?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3b44aa935a4d8f1b0e93a23070d97be6b9c9506b&el=desc) will **increase** coverage by `1.29%`.
> The diff coverage is `22.22%`.
[](https://codecov.io/gh/huggingface/transformers/pull/6015?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6015 +/- ##
==========================================
+ Coverage 77.32% 78.61% +1.29%
==========================================
Files 146 146
Lines 26253 26270 +17
==========================================
+ Hits 20299 20652 +353
+ Misses 5954 5618 -336
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6015?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6015/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `40.87% <ø> (ø)` | |
| [src/transformers/trainer\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6015/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `16.24% <22.22%> (-0.30%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6015/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6015/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6015/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.95% <0.00%> (-2.26%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6015/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.53% <0.00%> (+69.51%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6015/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6015?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6015?src=pr&el=footer). Last update [3b44aa9...5ff70d6](https://codecov.io/gh/huggingface/transformers/pull/6015?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>As long as you don't create too many merge conflicts, I can wait ;-)<|||||>PR available #6038 <|||||>Merge conflicts were a mess so opened a new version. |
transformers | 6,014 | closed | Fix question template | 07-24-2020 13:12:35 | 07-24-2020 13:12:35 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6014?src=pr&el=h1) Report
> Merging [#6014](https://codecov.io/gh/huggingface/transformers/pull/6014?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a5404052135a455f9b3ada6154dcfbe7c6ac3968&el=desc) will **increase** coverage by `1.32%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6014?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6014 +/- ##
==========================================
+ Coverage 77.18% 78.51% +1.32%
==========================================
Files 146 146
Lines 26252 26252
==========================================
+ Hits 20263 20612 +349
+ Misses 5989 5640 -349
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6014?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6014/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6014/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6014/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (ø)` | |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6014/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.31% <0.00%> (+1.28%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6014/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6014/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6014?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6014?src=pr&el=footer). Last update [a540405...78af589](https://codecov.io/gh/huggingface/transformers/pull/6014?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 6,013 | closed | Cannot import DPRReader | # 🐛 Bug
You have DPRReader included in your v3.0.2 documentation, but it only exists on the master branch and is not included in the v3.0.2 tag; as a result, importing it crashes. I think it would be convenient to either remove that piece of documentation until the models are included in a tagged release, or update the transformers version from the master branch.
| 07-24-2020 09:36:50 | 07-24-2020 09:36:50 | Thanks for flagging, the stable doc was building on a wrong commit hash, hence the DPR models being present.
This is fixed now. |
transformers | 6,012 | closed | When i train a GPT-2 which uses BPE, the computed perplexity is per sub-word right? | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**: | 07-24-2020 08:38:56 | 07-24-2020 08:38:56 | This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 6,011 | closed | [Benchmark] | # 🖥 Benchmarking `transformers`
## Benchmark
Which part of `transformers` did you benchmark?
## Set-up
What did you run your benchmarks on? Please include details, such as: CPU, GPU? If using multiple GPUs, which parallelization did you use?
## Results
Put your results here!
| 07-24-2020 03:57:29 | 07-24-2020 03:57:29 | |
transformers | 6,010 | closed | install sentence-transformers on Linux by python error | # ❓ Questions & Help
When I install sentence-transformers on Linux by python

, I got an error message:
ERROR: Could not find a version that satisfies the requirement transformers>=3.0.2 (from sentence-transformers) (from versions: 0.1, 2.0.0, 2.1.0, 2.1.1, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.4.0, 2.4.1, 2.5.0, 2.5.1)
ERROR: No matching distribution found for transformers>=3.0.2 (from sentence-transformers)
system: Linux 4.9.0-12-amd64 #1 SMP Debian 4.9.210-1+deb9u1 (2020-06-07) x86_64 on GCP VM instance.
Is there any suggestion?
| 07-24-2020 03:50:44 | 07-24-2020 03:50:44 | I am also facing same issue while trying to install transformers version 3.0.2...Did you find any solution?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>
how to fix this....pls help
|
transformers | 6,009 | closed | [model_cards] roberta-base-finetuned-yelp-polarity | Model card for [roberta-base-finetuned-yelp-polarity](https://huggingface.co/VictorSanh/roberta-base-finetuned-yelp-polarity). | 07-24-2020 02:26:50 | 07-24-2020 02:26:50 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6009?src=pr&el=h1) Report
> Merging [#6009](https://codecov.io/gh/huggingface/transformers/pull/6009?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f5b5c5bd7e213dea1645f07902b681f88e3cf954&el=desc) will **decrease** coverage by `0.00%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6009?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6009 +/- ##
==========================================
- Coverage 78.52% 78.51% -0.01%
==========================================
Files 146 146
Lines 26252 26252
==========================================
- Hits 20614 20612 -2
- Misses 5638 5640 +2
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6009?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6009/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.19% <0.00%> (-0.30%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6009/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.26%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6009?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6009?src=pr&el=footer). Last update [f5b5c5b...fed314c](https://codecov.io/gh/huggingface/transformers/pull/6009?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,008 | closed | fix: model card readme clutter | this removes the clutter line in the readme.md of model card `csarron/roberta-base-squad-v1`. It also fixes the result table. | 07-23-2020 22:23:59 | 07-23-2020 22:23:59 | looks good now 👍
(we'll have a better system for model cards soon-ish) |
transformers | 6,007 | closed | Fine tune T5 for paraphrase generation | # ❓ Questions & Help
Dear all,
I am new to NLP and have a lot of questions. Sorry to ask this long list here. I tried asking on Hugging Face's forum, but as a new user, I can only put 2 lines there.
My goal is to fine-tuned t5-large for paraphrase generation. I found [this code](https://github.com/ramsrigouthamg/Paraphrase-any-question-with-T5-Text-To-Text-Transfer-Transformer-/blob/master/t5-pretrained-question-paraphraser.ipynb) which is based on [this code](https://github.com/patil-suraj/exploring-T5). So I just modified to further fine tune on my dataset. My questions( I also asked some of them on [the github code mentioned above](https://github.com/patil-suraj/exploring-T5/issues/3) but I feel these question may be better address here):
1. I trained for 2 epochs and the generated paraphrases look good. When I trained for 11 epochs, the model seems overfitted (the generated paraphrases are very similar to the original sentence). Do you have any recommendations for further improving the performance besides decreasing the number of epochs?
2. For paraphrase generation using T5 as a text-to-text task, I don't know how to utilize the negative examples (pairs that are not paraphrases) directly here. Any recommendation?
3. One idea I have to include the negative examples is: I plan to first further fine-tune T5-large on paraphrase identification with my data set (with positive and negative examples) and then use this fine-tuned version to further fine-tune on paraphrase generation. My assumption is that the information learned in the paraphrase identification task will help improve paraphrase generation. Is this correct?
4. I am also a little confused about the prefix.
On Hugging Face's docs: 'T5 works well on a variety of tasks out-of-the-box by prepending a different prefix to the input corresponding to each task, e.g.: for translation: translate English to German: …, summarize: …. For more information about which prefix to use, it is easiest to look into Appendix D of the paper.'
Thus, I think that the prefix notifies T5 which task it should be performing. [The answer in this thread](https://github.com/huggingface/transformers/issues/4092) agrees with my understanding. However, in the first two examples [here](https://github.com/patil-suraj/exploring-T5), the code seems to only add `</s>` at the end of the sentence but no prefix. Could you tell me why? Does that mean these fine-tuned models will not do the pre-training tasks of T5 but only their specific trained task, so we don't need a prefix?
5. Also, MRPC and QQP are both paraphrase identification. If I want to fine-tune, should I fine-tune my data set with both of their prefixes, with only one of their prefixes, or create my own prefix?
6. The loss function of the code is cross entropy, which is not the best for this task. I am thinking whether I can use the paraphrase identification result (like the probability of being a paraphrase) as the target function. Is this OK? I feel it may be super slow, and I am not really sure how to implement it.
7. I have 3 paraphrase datasets (let's call them A, B, C) from different sources. Previously, I first trained the model on A for 2 epochs, then loaded this model as the pretrained model to further train on B for 2 epochs, and then C.
I then combined A, B, C into one dataset and directly trained for 2 epochs. The two resulting models have different results and the second one is worse. I used the same random seeds for them. Any idea?
8. I set early_stop_callback=True and max_epochs=32, and it stops at epoch 11. But if I set max_epochs=6, it stops at epoch 3. I don't understand, as I thought it would stop at epoch 6. I have the same random seed.
9. Another strange thing during training, I saw this on the screen:
Epoch 10: 100%............(time, loss et al)...
INFO:_main_:avg_train_loss = tensor(..)
INFO:_main_:epoch = 8
........
Why is the epoch number not the same?!
10. What is the correct way to evaluate on the test set? I saw several different examples.
[In this example, ](https://www.kaggle.com/enzoamp/t5-for-q-a-training-tutorial-pytorch)
t5 = T5ForConditionalGeneration.from_pretrained('output/')
input_ids = tokenizer.encode(text, return_tensors="pt", add_special_tokens=True) # Batch size 1
**t5.eval()**
generated_ids = t5.generate(
input_ids=input_ids,
num_beams=1,
max_length=80,
#repetition_penalty=2.5
).squeeze()
predicted_span = tokenizer.decode(generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True)
return predicted_span
[This code has two examples:](https://github.com/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb)
First directly:
outs = **model.model.generate**(input_ids=batch['source_ids'].cuda(),
attention_mask=batch['source_mask'].cuda(),
max_length=2)
He also has :
loader = DataLoader(dataset, batch_size=32, num_workers=4)
**model.model.eval()**
outputs = []
targets = []
for batch in tqdm(loader):
outs = model.model.generate(input_ids=batch['source_ids'].cuda(),
attention_mask=batch['source_mask'].cuda(),
max_length=2)
Is eval() necessary? From [Hugging Face's docs](https://huggingface.co/transformers/model_doc/auto.html), it seems not necessary when the model is loaded with from_pretrained:
"The model is set in evaluation mode by default using model.eval() (Dropout modules are deactivated) To train the model, you should first set it back in training mode with model.train()"
On the other hand, none of the examples above uses 'model.train()' to set the mode; they directly train the model. I am confused.
Is model.model necessary?
Thanks!! | 07-23-2020 22:08:47 | 07-23-2020 22:08:47 | Phew, I won't be able to answer all the questions in a single comment, but will try my best.
2. IMO for generating paraphrases you probably won't need negative examples.
3. Your assumption is correct; knowledge learned from one task can be useful for other similar tasks. You can approach this as a multitask problem (see the illustrative sketch at the end of this reply).
1. identify if two sentences are paraphrases of each other
2. given one sentence, generate its paraphrase
With T5 you can use task prefixes for multitask learning, so for identification your example could look something like
```
input_text: "sent1: sentence 1 text sent2: sentence two text"
target_text: "equivalent" if sent2 is paraphrase of sent1 else "not equivalent"
```
and for generation
```
input_text: "paraphrase: sentence"
target_text: "paraphrase of sentence"
```
4. Task prefixes are not required for T5 (required when doing multitask training), but if your task is similar to one of the tasks used in T5's pre-training mixture, then use the prefixes so you can exploit the knowledge already learned by the model.
In my notebook I didn't use task prefixes (even though I was doing sentiment classification) because I wanted to see if it makes any difference if we don't use prefixes. Again, use task prefixes when doing multitask learning or if your task is similar to one of the tasks used in T5's pre-training mixture.
5. Check which dataset is closer to your own task and decide based on that.
10) I used `model.model` because here the first `model` is an instance of the lightning model and the HF model is initialized inside it, hence `model.model`; but once you save using `.save_pretrained` you can load with `.from_pretrained` and then call `model.generate` directly.
And for evaluation you can use the BLEU, ROUGE and METEOR metrics. I usually use [nlg-eval](https://github.com/Maluuba/nlg-eval) for calculating these metrics: generate predictions on your test data, then give your original reference file and the generated file to nlg-eval and it'll calculate the metrics.
And yes, `.from_pretrained` sets the model in eval mode by default.
Hope this helps.
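As an editorial illustration (not part of the original reply), here is a minimal sketch of preparing such prefixed examples; the helper names and the `t5-base` checkpoint are assumptions:

```python
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")

def identification_example(sent1, sent2, is_paraphrase):
    # identification task: the model is trained to emit "equivalent" / "not equivalent"
    source = f"sent1: {sent1} sent2: {sent2}"
    target = "equivalent" if is_paraphrase else "not equivalent"
    return source, target

def generation_example(sentence, paraphrase):
    # generation task: the model is trained to emit the paraphrase text
    source = f"paraphrase: {sentence}"
    target = paraphrase
    return source, target

# Example usage: tokenize one generation pair (hypothetical sentences).
src, tgt = generation_example("How do I fine-tune T5?", "What is the way to fine-tune T5?")
enc = tokenizer(src, max_length=64, truncation=True, return_tensors="pt")
dec = tokenizer(tgt, max_length=64, truncation=True, return_tensors="pt")
```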
<|||||>Thank you, @patil-suraj ! I learn a lot from your answers!
**4.** I thought T5 is already pretrained on several different tasks, which means T5 is a multitask model. We can use the prefixes in the appendix of the paper to perform the corresponding task. Though we fine-tuned it for a new task (like in your notebook), the ability for the pretrained tasks is not lost, right? If so, it is surprising to me that T5 didn't mess things up and knew what to do when you didn't give it any prefix.
**5. & 3.** If I want to fine-tune on QQP, should I also use MRPC's data set (i.e. my own data + MRPC)? On the other hand, if I train a new prefix, should I use QQP + MRPC + my own data? Will the fine-tuned T5 overfit a little on QQP and MRPC, as the model sees them several times (though for the training of different prefixes)? Similarly, if I use QQP + MRPC + my dataset to fine-tune T5 paraphrase detection AND then use the positive examples in that same data set to fine-tune T5 paraphrase generation, will this be information leakage? Should I avoid using the same positive examples in two tasks?
**10.** None of the examples uses 'model.train()' to set the mode for training. Is this redundant?
**11**. Thanks for suggesting nlg-eval! However, metrics like BLEU can't really evaluate the quality of the generated paraphrases (really good ones should have diverse phrases and structures but still the same meaning).
Hope someone can address question 1. 6. 7. 8. 9. too.
<|||||>**4.** Yes, T5 is a multitask model and you can use the prefixes to perform the corresponding task. But note that the results reported on individual tasks in the paper are reported after again fine-tuning the model on that task specifically.
And after fine-tuning the model can forget about other tasks
**5.** To answer the first question, you probably won't need to use those datasets; by using a task prefix the model can exploit already available knowledge.
**10**. pytorch lightning automatically puts model in train mode for train loop and in eval mode for eval loop.
**11**. Yes, BLEU won't make much sense; ROUGE seems like a good metric for this task. <|||||>Thanks, @patil-suraj .
**4.** Could you explain what you mean by forget? Here is my understanding: the model is first fine-tuned on task **A** with data set **a** using prefix **Aa**, so now the model has the set of parameters **aa** and we call it model **AA**; then I use the resulting model to further fine-tune on task **B** with data set **b** using prefix **Bb**, so the model's parameters change to **bb** and we call it model **BB**. Thus, if we use the final model **BB** to perform task **A**, the model may or may not 'recognize' prefix **Aa**, but the performance will be worse than model **AA**.
If what I say above is correct, then my original understanding of transfer learning as 'using the same model (same structure and set of parameters) for different tasks' is wrong. If so, transfer learning or learning from multiple tasks just gives a better initialization for the next task using the current task's result.
**5.** If the understanding in 4 is correct, I think I may need to reuse the data set when training a new prefix.
<|||||>**4.** Forget here is in the context of multitask learning: if you take a multitask model and then only fine-tune it for one task, there's a chance that it can forget about other tasks.<|||||>Thanks,@patil-suraj!
How about just one prefix/task? Will the model forget?
For example, I have paraphrase data set A and paraphrase data set B.
**Fine tune 1:** I first fine-tune t5-large on data set A using prefix 'para:' for 2 epochs. The resulting model is T5-A.
I then fine-tune T5-A on data set B using prefix 'para:' for 2 epochs. The resulting model is T5-B.
**Fine tune 2:** I first combine data set A and data set B into one file. Then I fine-tune t5-large on the combined data set using prefix 'para:' for 2 epochs. The resulting model is T5-2.
Will T5-B forget about data set A? I tried the two fine-tuning methods and T5-2 seems worse than T5-B (T5-2 with more epochs seems worse than T5-B too).
My thought: if it is gradient descent, both methods have converged, and there is only one optimal solution, they should show no difference. However, in real life there may be many local optima, so numerically there is no guarantee which method is better, and T5-2 should have a higher chance of being better as it has a larger data set, which helps prevent overfitting.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 6,006 | closed | Model cards: add roberta-base-squad-v1 and bert-base-uncased-squad-v1 | 07-23-2020 20:46:29 | 07-23-2020 20:46:29 | Thanks!<|||||>> Thanks!
Love this amazing library! Happy to contribute! 😊 |
|
transformers | 6,005 | closed | Model utils doc | This PR documents all the model utilities and fixes a few things in the `PreTrainedModel`/`TFPreTrainedModel`.
It introduces a new category I called "internal" that comes at the very end of the package documentation, which is aimed at documenting all our public internal methods (e.g., public in their modules, but not in the project init). This should be useful for people who tweak our code and reuse those pieces.
Preview of the new file in the docs is [here](https://64241-155220641-gh.circle-artifacts.com/0/docs/_build/html/internal/modeling_utils.html) (and you can navigate on the other pages if you want to check the smaller changes). | 07-23-2020 20:33:07 | 07-23-2020 20:33:07 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6005?src=pr&el=h1) Report
> Merging [#6005](https://codecov.io/gh/huggingface/transformers/pull/6005?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7e251ae0395bd3e558c633fe8664fbdf612cb4f4&el=desc) will **decrease** coverage by `0.00%`.
> The diff coverage is `95.83%`.
[](https://codecov.io/gh/huggingface/transformers/pull/6005?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6005 +/- ##
==========================================
- Coverage 78.66% 78.66% -0.01%
==========================================
Files 146 146
Lines 26230 26231 +1
==========================================
Hits 20634 20634
- Misses 5596 5597 +1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6005?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6005/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.52% <ø> (ø)` | |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6005/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.96% <90.90%> (ø)` | |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6005/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.20% <100.00%> (+0.02%)` | :arrow_up: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6005/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.26%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6005?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6005?src=pr&el=footer). Last update [7e251ae...ead6b62](https://codecov.io/gh/huggingface/transformers/pull/6005?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,004 | closed | CI/Examples: ModuleNotFoundError: No module named '_sqlite3' | This is a common issue, all solutions I have found to address involve recompiling python.
like [1](https://stackoverflow.com/questions/1210664/no-module-named-sqlite3) [2](https://www.virtualizationteam.com/cloud/running-vcd-cli-fail-with-the-following-error-modulenotfounderror-no-module-named-_sqlite3.html)
Full Traceback:
```
2020-07-23T00:30:23.1433578Z ==================================== ERRORS ====================================
2020-07-23T00:30:23.1435598Z ____________ ERROR collecting examples/seq2seq/test_bash_script.py _____________
2020-07-23T00:30:23.1437122Z ImportError while importing test module '/home/hf/actions-runner_transformers/_work/transformers/transformers/examples/seq2seq/test_bash_script.py'.
2020-07-23T00:30:23.1437481Z Hint: make sure your test modules/packages have valid Python names.
2020-07-23T00:30:23.1437737Z Traceback:
2020-07-23T00:30:23.1438102Z examples/seq2seq/finetune.py:21: in <module>
2020-07-23T00:30:23.1438342Z from .utils import (
2020-07-23T00:30:23.1438573Z examples/seq2seq/utils.py:14: in <module>
2020-07-23T00:30:23.1438821Z from rouge_score import rouge_scorer, scoring
2020-07-23T00:30:23.1439560Z .env/lib/python3.7/site-packages/rouge_score/rouge_scorer.py:41: in <module>
2020-07-23T00:30:23.1440092Z from nltk.stem import porter
2020-07-23T00:30:23.1440789Z .env/lib/python3.7/site-packages/nltk/__init__.py:149: in <module>
2020-07-23T00:30:23.1441053Z from nltk.translate import *
2020-07-23T00:30:23.1441694Z .env/lib/python3.7/site-packages/nltk/translate/__init__.py:23: in <module>
2020-07-23T00:30:23.1441956Z from nltk.translate.meteor_score import meteor_score as meteor
2020-07-23T00:30:23.1442637Z .env/lib/python3.7/site-packages/nltk/translate/meteor_score.py:10: in <module>
2020-07-23T00:30:23.1442916Z from nltk.stem.porter import PorterStemmer
2020-07-23T00:30:23.1443523Z .env/lib/python3.7/site-packages/nltk/stem/__init__.py:29: in <module>
2020-07-23T00:30:23.1443794Z from nltk.stem.snowball import SnowballStemmer
2020-07-23T00:30:23.1444468Z .env/lib/python3.7/site-packages/nltk/stem/snowball.py:29: in <module>
2020-07-23T00:30:23.1444741Z from nltk.corpus import stopwords
2020-07-23T00:30:23.1445379Z .env/lib/python3.7/site-packages/nltk/corpus/__init__.py:66: in <module>
2020-07-23T00:30:23.1445750Z from nltk.corpus.reader import *
2020-07-23T00:30:23.1446438Z .env/lib/python3.7/site-packages/nltk/corpus/reader/__init__.py:105: in <module>
2020-07-23T00:30:23.1446711Z from nltk.corpus.reader.panlex_lite import *
2020-07-23T00:30:23.1447355Z .env/lib/python3.7/site-packages/nltk/corpus/reader/panlex_lite.py:15: in <module>
2020-07-23T00:30:23.1447591Z import sqlite3
2020-07-23T00:30:23.1447839Z /home/hf/.pyenv/versions/3.7.6/lib/python3.7/sqlite3/__init__.py:23: in <module>
2020-07-23T00:30:23.1448311Z from sqlite3.dbapi2 import *
2020-07-23T00:30:23.1448584Z /home/hf/.pyenv/versions/3.7.6/lib/python3.7/sqlite3/dbapi2.py:27: in <module>
2020-07-23T00:30:23.1448845Z from _sqlite3 import *
2020-07-23T00:30:23.1449485Z E ModuleNotFoundError: No module named '_sqlite3'
2020-07-23T00:30:23.1449638Z
``` | 07-23-2020 20:25:00 | 07-23-2020 20:25:00 | Very weird. Only the seq2seq examples have a dependency on nltk, which in turn has a dependency on sqlite3?
Are we pinning a recent nltk version?<|||||>We are requiring rouge_scorer, which uses nltk for stemming. There is no pinning.<|||||>@mfuntowicz fixed, thanks! |
transformers | 6,003 | closed | MBART: support summarization tasks where max_src_len > max_tgt_len | Previously, MBartTokenizer.prepare_translation_batch always truncated `src_texts` and `tgt_texts` to the same length.
This PR exposes a `max_target_length` kwarg, which, if specified, will control truncation of target_texts. | 07-23-2020 20:13:44 | 07-23-2020 20:13:44 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6003?src=pr&el=h1) Report
> Merging [#6003](https://codecov.io/gh/huggingface/transformers/pull/6003?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d1d15d6f2de9e2cde48ff3ea2072add3311ce2ac&el=desc) will **increase** coverage by `0.00%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/6003?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6003 +/- ##
=======================================
Coverage 78.52% 78.53%
=======================================
Files 146 146
Lines 26314 26316 +2
=======================================
+ Hits 20664 20667 +3
+ Misses 5650 5649 -1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6003?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6003/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.71% <100.00%> (+0.12%)` | :arrow_up: |
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6003/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.11% <0.00%> (+0.28%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6003?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6003?src=pr&el=footer). Last update [d1d15d6...6c95793](https://codecov.io/gh/huggingface/transformers/pull/6003?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>we have a good test for `MbartDataset` I'll add a test for just the tokenizer to `tokenization_mbart.py` |
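For reference, a hedged usage sketch of the kwarg described in this PR's description (illustrative only; the exact argument list of `prepare_translation_batch` may differ slightly by version):

```python
from transformers import MBartTokenizer

tok = MBartTokenizer.from_pretrained("facebook/mbart-large-cc25")
batch = tok.prepare_translation_batch(
    src_texts=["A long source document to summarize ..."],
    tgt_texts=["A short summary."],
    max_length=512,         # truncation length applied to src_texts
    max_target_length=56,   # new kwarg: separate truncation length for tgt_texts
)
```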
transformers | 6,002 | closed | Deebert Examples test failure | `DeeBertTests.test_glue_deebert`
Excerpt from [CI](https://pipelines.actions.githubusercontent.com/SFFqAjp6ciVZiZmfZfjie9y9Q96dfpUE8sJvWAtTDWoFlixGkf/_apis/pipelines/1/runs/4783/signedlogcontent/3?urlExpires=2020-07-23T20%3A02%3A20.7871420Z&urlSigningMethod=HMACV1&urlSignature=XL%2Fj%2F2NSBD3Q174ifxC4jTOQUloQMx3UFZ%2ByaM%2Beomk%3D)
```python
2020-07-23T00:30:23.1537872Z with patch.object(sys, "argv", train_args):
2020-07-23T00:30:23.1538109Z result = run_glue_deebert.main()
2020-07-23T00:30:23.1538324Z for value in result.values():
2020-07-23T00:30:23.1538573Z > self.assertGreaterEqual(value, 0.75)
2020-07-23T00:30:23.1538856Z E AssertionError: 0.6666666666666666 not greater than or equal to 0.75
2020-07-23T00:30:23.1539030Z
```
| 07-23-2020 20:05:18 | 07-23-2020 20:05:18 | @Ji-Xin would you mind taking a stab at this?
Command:
```bash
RUN_SLOW=1 pytest examples/deebert/
``` |
transformers | 6,001 | closed | MbartDataset can support Summarization | need to allow max_source_length != max_seq_length | 07-23-2020 19:56:14 | 07-23-2020 19:56:14 |