repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 5,600 | closed | [MarianMT{ | 07-08-2020 12:41:25 | 07-08-2020 12:41:25 | ||
transformers | 5,599 | closed | Add newly trained `calbert-tiny-uncased` | Calbert is an open-source ALBERT trained on the Catalan OSCAR dataset. This is the `tiny` version, newly trained from a complete rewrite [in this repo](https://github.com/codegram/calbert), now using SentencePiece which makes everything work much better with pipelines and the default tooling, as demonstrated in the model card. | 07-08-2020 12:29:33 | 07-08-2020 12:29:33 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5599?src=pr&el=h1) Report
> Merging [#5599](https://codecov.io/gh/huggingface/transformers/pull/5599?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f82a2a5e8e6827343322a4a9831924c5bb9bd2b2&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5599?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5599 +/- ##
=======================================
Coverage 76.69% 76.69%
=======================================
Files 145 145
Lines 25351 25351
=======================================
Hits 19444 19444
Misses 5907 5907
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5599?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5599?src=pr&el=footer). Last update [f82a2a5...0a4786e](https://codecov.io/gh/huggingface/transformers/pull/5599?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>looks great, thanks for sharing! |
transformers | 5,598 | closed | returned value from parse method of MeCab >= 1.0.0 was changed | As shown below, the return value of MeCab's parse method has changed,
so we need to fix MecabTokenizer in BertJapaneseTokenizer.
# mecab-python3==1.0.0 (latest version)
```
root@713173e4bace:/# pip freeze | grep mecab
mecab-python3==1.0.0
root@713173e4bace:/# python
Python 3.8.3 (default, Jul 7 2020, 11:33:46)
[GCC 7.5.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import MeCab
>>> m = MeCab.Tagger()
>>> m.parse('こんにちは')
'こんにちは\tコンニチワ\tコンニチハ\t今日は\t感動詞-一般\t\t\t5\nEOS\n'
```
# mecab-python3==0.996.5 (previous version)
```
root@713173e4bace:/# pip install mecab-python3==0.996.5 --upgrade
Collecting mecab-python3==0.996.5
Downloading mecab_python3-0.996.5-cp38-cp38-manylinux2010_x86_64.whl (17.1 MB)
|████████████████████████████████| 17.1 MB 1.2 MB/s
Installing collected packages: mecab-python3
Attempting uninstall: mecab-python3
Found existing installation: mecab-python3 1.0.0
Uninstalling mecab-python3-1.0.0:
Successfully uninstalled mecab-python3-1.0.0
Successfully installed mecab-python3-0.996.5
root@713173e4bace:/# pip freeze | grep mecab
mecab-python3==0.996.5
root@713173e4bace:/# python
Python 3.8.3 (default, Jul 7 2020, 11:33:46)
[GCC 7.5.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import MeCab
>>> m = MeCab.Tagger()
>>> m.parse('こんにちは')
'こんにちは\t感動詞,*,*,*,*,*,こんにちは,コンニチハ,コンニチワ\nEOS\n'
```
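For illustration only (a hypothetical sketch, not the change proposed in this PR): the surface form can still be recovered from both output formats, because in each case it comes before the first tab — only the feature columns after it differ.
```python
# Hypothetical helper: extract surface forms from MeCab's parse() output.
# Works for both the 0.996.5 and the 1.0.0 formats shown above, since the
# surface form always precedes the first tab.
def surface_forms(parse_output: str):
    tokens = []
    for line in parse_output.splitlines():
        if not line or line == "EOS":
            continue
        tokens.append(line.split("\t")[0])
    return tokens

print(surface_forms('こんにちは\t感動詞,*,*,*,*,*,こんにちは,コンニチハ,コンニチワ\nEOS\n'))  # ['こんにちは']
```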
I found other issues related to MeCab >= 1.0.0 (like #5392) and understood that the latest version of MeCab will not be supported soon. So if this PR is too early to merge, do not hesitate to close it.
Thanks. | 07-08-2020 11:50:25 | 07-08-2020 11:50:25 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5598?src=pr&el=h1) Report
> Merging [#5598](https://codecov.io/gh/huggingface/transformers/pull/5598?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f82a2a5e8e6827343322a4a9831924c5bb9bd2b2&el=desc) will **increase** coverage by `0.18%`.
> The diff coverage is `0.00%`.
[Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5598?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5598 +/- ##
==========================================
+ Coverage 76.69% 76.88% +0.18%
==========================================
Files 145 145
Lines 25351 25351
==========================================
+ Hits 19444 19491 +47
+ Misses 5907 5860 -47
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5598?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_bert\_japanese.py](https://codecov.io/gh/huggingface/transformers/pull/5598/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydF9qYXBhbmVzZS5weQ==) | `30.48% <0.00%> (ø)` | |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5598/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5598/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.25%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5598/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.53% <0.00%> (+69.51%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5598?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5598?src=pr&el=footer). Last update [f82a2a5...b6b2498](https://codecov.io/gh/huggingface/transformers/pull/5598?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks for your PR. The problem is that, even after doing this, the tokenization returned is not the same, which is the main reason we pinned mecab to <1 (with no plan to support v1 for now): since it changes the tokenization, it will break models on the Hub that were pretrained with the old tokenization.<|||||>Thanks for your reply and summary of the problem.
I didn't realize the tokenizer changes were breaking...
For now, I'll keep using <1 versions. |
transformers | 5,597 | closed | DPR model examples / notebook / pipeline | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
A notebook or a dedicated script in the examples folder would be pretty nice for the new DPR model, implemented by @lhoestq
Maybe an implementation in the pipeline would be cool too
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
Such a notebook or script would make it easier to train a custom DPR model
## Your contribution
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
I have not read the paper or had a closer look at the code, but with some advice I could try to start a PR
| 07-08-2020 11:47:50 | 07-08-2020 11:47:50 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,596 | closed | Add data_collator with attention_mask feature | For models like BERT, we should mask the padding ids as invalid to avoid attention on them. Therefore, in addition to `input_ids`, the feature `attention_mask` should also be fed to the model.
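For illustration, a minimal sketch of a collator that returns both fields (hypothetical, not the implementation proposed in this PR; `pad_id` is an assumed parameter):
```python
import torch

def collate_with_attention_mask(batch_input_ids, pad_id=0):
    """Pad a batch of token-id lists and build the matching attention mask."""
    max_len = max(len(ids) for ids in batch_input_ids)
    input_ids = torch.full((len(batch_input_ids), max_len), pad_id, dtype=torch.long)
    attention_mask = torch.zeros((len(batch_input_ids), max_len), dtype=torch.long)
    for i, ids in enumerate(batch_input_ids):
        input_ids[i, : len(ids)] = torch.tensor(ids, dtype=torch.long)
        attention_mask[i, : len(ids)] = 1  # 1 = real token, 0 = padding
    return {"input_ids": input_ids, "attention_mask": attention_mask}
```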
Add a data collator `DataCollatorForMaskedLanguageModeling` which will return all of the above data, and add a dataset class `LineByLineTextMaskDataset` which is related to the former. | 07-08-2020 10:09:37 | 07-08-2020 10:09:37 | Add a related issue: #4702<|||||>I agree that we should handle `attention_mask`, but I don't think adding another class `DataCollatorForMaskedLanguageModeling` is necessary. The class `DataCollatorForLanguageModeling` already handles masked language modeling, and should imo be modified to handle attention masks as well.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5596?src=pr&el=h1) Report
> Merging [#5596](https://codecov.io/gh/huggingface/transformers/pull/5596?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cfbb98297449e09e5a2443b4ba76be52a71ec0f7&el=desc) will **increase** coverage by `0.82%`.
> The diff coverage is `100.00%`.
[Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5596?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5596 +/- ##
==========================================
+ Coverage 76.97% 77.80% +0.82%
==========================================
Files 145 145
Lines 25317 25344 +27
==========================================
+ Hits 19487 19718 +231
+ Misses 5830 5626 -204
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5596?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/5596/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `99.25% <100.00%> (+0.15%)` | :arrow_up: |
| [...rc/transformers/data/datasets/language\_modeling.py](https://codecov.io/gh/huggingface/transformers/pull/5596/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2xhbmd1YWdlX21vZGVsaW5nLnB5) | `92.45% <100.00%> (+0.61%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5596/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.27% <0.00%> (-74.92%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5596/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.72% <0.00%> (-1.38%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5596/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `92.44% <0.00%> (-1.17%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5596/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.26% <0.00%> (-0.34%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5596/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.96% <0.00%> (-0.26%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5596/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.88% <0.00%> (+0.98%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5596/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5596?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5596?src=pr&el=footer). Last update [cfbb982...cd3ae45](https://codecov.io/gh/huggingface/transformers/pull/5596?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>@LysandreJik
I agree!
I have modified the code, and passed related unit tests.
Could you please help review the code?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,595 | closed | transformer dataset and masked LM | # ❓ Questions & Help
I was wondering about the masked LM models and the datasets for the transformers library.
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
https://stackoverflow.com/questions/62757772/hugging-face-tokenizer-for-masked-lm-question | 07-08-2020 09:42:20 | 07-08-2020 09:42:20 | Should I also put the questions from the stackoverflow post in this issue? Or is there any problem with this issue? Did I overlook something?<|||||>Posted the solution I found on github. |
transformers | 5,594 | closed | [Benchmark] Add benchmarks for TF Training | This PR adds train functions to TF benchmarks including tests.
The notebook is updated accordingly as well. | 07-08-2020 08:50:33 | 07-08-2020 08:50:33 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5594?src=pr&el=h1) Report
> Merging [#5594](https://codecov.io/gh/huggingface/transformers/pull/5594?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cfbb98297449e09e5a2443b4ba76be52a71ec0f7&el=desc) will **increase** coverage by `1.13%`.
> The diff coverage is `15.00%`.
[Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5594?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5594 +/- ##
==========================================
+ Coverage 76.95% 78.08% +1.13%
==========================================
Files 145 145
Lines 25317 25351 +34
==========================================
+ Hits 19482 19795 +313
+ Misses 5835 5556 -279
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5594?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/benchmark/benchmark\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5594/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3RmLnB5) | `61.53% <10.81%> (-18.28%)` | :arrow_down: |
| [src/transformers/benchmark/benchmark.py](https://codecov.io/gh/huggingface/transformers/pull/5594/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrLnB5) | `74.01% <66.66%> (ø)` | |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5594/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5594/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.95% <0.00%> (-1.01%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5594/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.26% <0.00%> (-0.34%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5594/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.88% <0.00%> (+0.98%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5594/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5594?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5594?src=pr&el=footer). Last update [cfbb982...2e17a73](https://codecov.io/gh/huggingface/transformers/pull/5594?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Merging. Pinging @LysandreJik for notification. |
transformers | 5,593 | closed | AttributeError: 'Tensor' object has no attribute 'ndim' | # ❓ Questions & Help
Hello,
when I ran the run_generation.py file I got this error:
Traceback (most recent call last):
File "C:\Users\imen.benamor\AppData\Local\Programs\Python\Python36\lib\site-packages\transformers\tokenization_utils_base.py", line 507, in convert_to_tensors
if tensor.ndim > 2:
AttributeError: 'Tensor' object has no attribute 'ndim'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "run_generation.py", line 274, in <module>
main()
File "run_generation.py", line 227, in main
encoded_prompt = tokenizer.encode(prompt_text, add_special_tokens=False, return_tensors="pt")
File "C:\Users\imen.benamor\AppData\Local\Programs\Python\Python36\lib\site-packages\transformers\tokenization_utils_base.py", line 1425, in encode
**kwargs,
File "C:\Users\imen.benamor\AppData\Local\Programs\Python\Python36\lib\site-packages\transformers\tokenization_utils_base.py", line 1737, in encode_plus
**kwargs,
File "C:\Users\imen.benamor\AppData\Local\Programs\Python\Python36\lib\site-packages\transformers\tokenization_utils.py", line 473, in _encode_plus
verbose=verbose,
File "C:\Users\imen.benamor\AppData\Local\Programs\Python\Python36\lib\site-packages\transformers\tokenization_utils_base.py", line 2098, in prepare_for_model
encoded_inputs, tensor_type=return_tensors, prepend_batch_axis=prepend_batch_axis
File "C:\Users\imen.benamor\AppData\Local\Programs\Python\Python36\lib\site-packages\transformers\tokenization_utils_base.py", line 159, in __init__
self.convert_to_tensors(tensor_type=tensor_type, prepend_batch_axis=prepend_batch_axis)
File "C:\Users\imen.benamor\AppData\Local\Programs\Python\Python36\lib\site-packages\transformers\tokenization_utils_base.py", line 515, in convert_to_tensors
"Unable to create tensor, you should probably activate truncation and/or padding "
ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length.
Any suggestions?
Thank you | 07-08-2020 08:28:19 | 07-08-2020 08:28:19 | Hi! Do you mind pasting the command you used to run the script? Thank you!<|||||>Hi @LysandreJik ,
Thank you for your quick reply.
First, I want to commend you for your amazing work on Hugging Face Transformers.
For that error I used the command "python run_generation.py --model_type=gpt2 --model_name_or_path=gpt2".
I suspect that the problem originates from my PyTorch version and installation. What do you think?
<|||||>Thank you :)
This doesn't fail in my environment. Are you running on an older `transformers` version? Do you mind pasting your environment information here?
Just running `transformers-cli env` in your environment should return something similar to this:
```
- `transformers` version: 3.0.2
- Platform: Linux-5.6.16-1-MANJARO-x86_64-with-arch-Manjaro-Linux
- Python version: 3.6.10
- PyTorch version (GPU?): 1.5.0 (True)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```<|||||>I ran into the same issue when using pytorch19.01-py3 Nvidia container but don't get the error when using pytorch20.06-py3 Nvidia container. I can't use the latest version of Pytorch because it doesn't support Cuda driver 10.0 - only Cuda driver version 11.0. The Cuda driver I have is 10.0 version and there are limitations on Nvidia DGX to upgrade to Cuda driver 11.0.
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/opt/conda/lib/python3.6/site-packages/transformers/tokenization_utils_base.py in convert_to_tensors(self, tensor_type, prepend_batch_axis)
506 # at-least2d
--> 507 if tensor.ndim > 2:
508 tensor = tensor.squeeze(0)
AttributeError: 'Tensor' object has no attribute 'ndim'
During handling of the above exception, another exception occurred:<|||||>This is the command I am using that is triggering the error:
encoding = self.tokenizer.encode_plus(
note,
add_special_tokens=True,
max_length=self.max_len,
return_token_type_ids=True,
truncation=True,
pad_to_max_length=True,
return_attention_mask=True,
return_tensors='pt',
) <|||||>The error went away when I switched off the "return_tensors='pt' " argument<|||||>I'm also running into the same error @LysandreJik . Here's my `transformers-cli env`.
`- transformers version: 3.0.2`
`- Platform: Darwin-19.5.0-x86_64-i386-64bit`
`- Python version: 3.7.3`
`- PyTorch version (GPU?): 1.0.1 (False)`
`- Tensorflow version (GPU?): not installed (NA)`
`- Using GPU in script?: <fill in>`
`- Using distributed or parallel set-up in script?: <fill in>`
Running something trivial like `python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('I hate you'))"` reproduces the error which leads me to believe I'm missing something obvious here. Here's the last error:
`"Unable to create tensor, you should probably activate truncation and/or padding "
ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length.`
<|||||>I don't know if you are willing to upgrade, but your issue is that `Tensor.ndim` was introduced in a later version. If you uninstall pytorch and follow these instructions https://pytorch.org/get-started/locally/ you should be all set.<|||||>hi guys i have the same problem i'm trying to load a model and integrate it in django
any solution ?<|||||>same error ! but when i run it on kaggle it works<|||||>Easiest solution is upgrading torch. |
transformers | 5,592 | closed | Allow to set Adam beta1, beta2 in TrainingArgs | In some models, `beta1` and `beta2` in Adam optimizer are set to be different from the default values `(0.9, 0.999)`. For example, RoBERTa set `beta2 = 0.98`. It is thereby necessary to add `beta1` and `beta2` in `TrainingArgs` if the user wants to fine-tune RoBERTa and other similar models. Also, another hyperparameter of Adam, `adam_epsilon`, has already been added to `TrainingArgs`. For the purpose of consistency, it would be better of `adam_beta1` and `adam_beta2` are also added. | 07-08-2020 06:30:04 | 07-08-2020 06:30:04 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5592?src=pr&el=h1) Report
> Merging [#5592](https://codecov.io/gh/huggingface/transformers/pull/5592?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cfbb98297449e09e5a2443b4ba76be52a71ec0f7&el=desc) will **increase** coverage by `1.21%`.
> The diff coverage is `75.00%`.
[Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5592?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5592 +/- ##
==========================================
+ Coverage 76.95% 78.16% +1.21%
==========================================
Files 145 145
Lines 25317 25319 +2
==========================================
+ Hits 19482 19790 +308
+ Misses 5835 5529 -306
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5592?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5592/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.96% <0.00%> (ø)` | |
| [src/transformers/trainer\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5592/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `16.53% <ø> (ø)` | |
| [src/transformers/optimization\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5592/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `57.65% <100.00%> (ø)` | |
| [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5592/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `78.00% <100.00%> (+0.44%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5592/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5592/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.20% <0.00%> (-1.76%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5592/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.26% <0.00%> (-0.34%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5592/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5592?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5592?src=pr&el=footer). Last update [cfbb982...9bdb1a5](https://codecov.io/gh/huggingface/transformers/pull/5592?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>LGTM! Really nice!!!<|||||>I'm fine with this |
transformers | 5,591 | closed | KeyError Issue in Question answering | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Distillbert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* my own modified scripts: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Just run this code.
```
from transformers import pipeline
MODEL_LOC = "distilbert-base-uncased-distilled-squad"
TOKENIZER_LOC = "distilbert-base-uncased-distilled-squad"
qa = pipeline(
"question-answering",
model=MODEL_LOC,
tokenizer=TOKENIZER_LOC
)
context = " ".join(["The incubation period is around 5 days (range: 4-7 days) with a maximum of 12-13 day"]*10)
qa({"question": "incubation period?", "context": context})
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
It should just provide a json output. If I change *10 to *5, it works.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.2
- Platform: OS X
- Python version: 3.6
- PyTorch version (GPU?): 1.5
- Tensorflow version (GPU?):
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: no
| 07-08-2020 05:12:23 | 07-08-2020 05:12:23 | I believe this issue has been recurring in other versions due to a tokenization-related indexing issue.
I did the following to get the span from the answer.
```
MODEL_LOC = "distilbert-base-uncased-distilled-squad"
TOKENIZER_LOC = "distilbert-base-uncased-distilled-squad"
model = AutoModelForQuestionAnswering.from_pretrained(MODEL_LOC)
tokenizer = AutoTokenizer.from_pretrained(TOKENIZER_LOC)
def get_answer(question, contexts: List[str]):
q_c_pairs = [(question, c) for c in contexts]
encoding = tokenizer.batch_encode_plus(q_c_pairs,
max_length=256,
pad_to_max_length=True,
truncation=True,
)
q_encoding = tokenizer.encode_plus(question, max_length=256, truncation=True,)
q_len = len(q_encoding["input_ids"])
answer_encoding = torch.tensor(encoding["input_ids"])[:, q_len:]
model_input_names = tokenizer.model_input_names + ["input_ids"]
fw_args = {k: torch.tensor(encoding[k]) for k in model_input_names}
with torch.no_grad():
start_scores, end_scores = model(**fw_args)
start_scores_prob = F.softmax(start_scores, dim=1)
start_scores_prob = start_scores_prob[:, q_len:]
start_max, start_max_index = torch.max(start_scores_prob, dim=1)
end_scores_prob = F.softmax(end_scores, dim=1)
end_scores_prob = end_scores_prob[:, q_len:]
# Making sure only indices beyond start_index are considered. Forcibly make ones before start_idx as 0
end_scores_dummy = torch.ones_like(end_scores_prob)
end_scores_prob = (torch.arange(end_scores_dummy.size(1)) > start_max_index.unsqueeze(1)) * 1.0 * end_scores_prob
end_max, end_max_index = torch.max(end_scores_prob, dim=1)
probs = (start_max * end_max)
for i, (s, e, p) in enumerate(zip(start_max_index, end_max_index, probs)):
token_ids = answer_encoding[i][s:e + 1]
tokens = tokenizer.convert_ids_to_tokens(token_ids, skip_special_tokens=True)
ans_string = tokenizer.convert_tokens_to_string(tokens)
print(s, e, ans_string, p.cpu().numpy())
question = "What is the incubation period"
context = " ".join(["The incubation period is around 5 days with a maximum of 12-13 days"] * 20)
get_answer(question, [context])
```
I get the desired output, and it seems to work fine for the many cases I tried, with different lengths.
<|||||>Hello! This seems to have been patched on `master`, it fails for me on `v3.0.2` but not on `master`.
The fix will be in the next version, in the meantime you can install from source:
```py
pip install git+https://github.com/huggingface/transformers
```<|||||>Works fine now. Thanks. |
transformers | 5,590 | closed | HF Trainer Segmentation Fault | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): GPT2-medium & large
Language I am using the model on (English, Chinese ...): Korean (with custom trained tokenizer)
The problem arises when using:
* [ O ] the official example scripts: (give details below)
https://huggingface.co/blog/how-to-train
* [ O ] my own modified scripts: (give details below)
```
import faulthandler  # needed for faulthandler.enable() below
from transformers import Trainer, TrainingArguments, DataCollatorForLanguageModeling, LineByLineTextDataset
from transformers import GPT2Config, GPT2LMHeadModel, GPT2TokenizerFast
tokenizer = GPT2TokenizerFast.from_pretrained("./data/TOKEN")
config = GPT2Config.from_pretrained('gpt2-medium')
model = GPT2LMHeadModel(config=config)
tokenizer = GPT2TokenizerFast.from_pretrained("./data/TOKEN", model_max_length=1024)
print('loading dataset...')
dataset = LineByLineTextDataset(
tokenizer=tokenizer,
file_path="./data/kowiki.txt",
block_size=512,
)
training_args = TrainingArguments(
output_dir='./m', # output directory
num_train_epochs=1, # total # of training epochs
per_device_train_batch_size=1, # batch size per device during training - the higher the better, but may OOM
per_device_eval_batch_size=1, # batch size for evaluation
logging_dir='./logs', # directory for storing logs
save_steps=10000,
do_train=True
)
trainer = Trainer(
model=model, # the instantiated Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=dataset, # training dataset
)
faulthandler.enable()
trainer.train()
```
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ O ] my own task or dataset: (give details below)
Text generation with prompt (trained on wiki & novel)
## To reproduce
Steps to reproduce the behavior:
1. Modify path to data file
2. Use any file(tested with Korean - UTF8)
3. Use any tokenizer(tested with self & GPT2 tokenizers)
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
### Error message
```
loading dataset...
Epoch: 0%| | 0/1 [00:00<?, ?it/s]
Fatal Python error: Segmentation fault | 0/99996 [00:00<?, ?it/s]
Thread 0x00007f872dfff700 (most recent call first):
File "/opt/conda/lib/python3.6/threading.py", line 299 in wait
File "/opt/conda/lib/python3.6/threading.py", line 551 in wait
File "/opt/conda/lib/python3.6/site-packages/tqdm/_monitor.py", line 69 in run
File "/opt/conda/lib/python3.6/threading.py", line 916 in _bootstrap_inner
File "/opt/conda/lib/python3.6/threading.py", line 884 in _bootstrap
Thread 0x00007f8736bb5700 (most recent call first):
File "/opt/conda/lib/python3.6/threading.py", line 299 in wait
File "/opt/conda/lib/python3.6/queue.py", line 173 in get
File "/opt/conda/lib/python3.6/site-packages/tensorboard/summary/writer/event_file_writer.py", line 205 in run
File "/opt/conda/lib/python3.6/threading.py", line 916 in _bootstrap_inner
File "/opt/conda/lib/python3.6/threading.py", line 884 in _bootstrap
Current thread 0x00007f88273e7740 (most recent call first):
File "/opt/conda/lib/python3.6/site-packages/torch/cuda/comm.py", line 39 in broadcast_coalesced
File "/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/_functions.py", line 21 in forward
File "/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/replicate.py", line 71 in _broadcast_coalesced_reshape
File "/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/replicate.py", line 88 in replicate
File "/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 159 in replicate
File "/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 154 in forward
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 577 in __call__
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 622 in _training_step
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 499 in train
File "trainer.py", line 34 in <module>
Segmentation fault (core dumped)
```
## Expected behavior
Proceed through training (as normal)
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.2
- Platform: Linux-4.4.0-178-generic-x86_64-with-debian-buster-sid
- Python version: 3.6.10
- PyTorch version (GPU?): 1.6.0a0+9907a3e (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Planning to (don't see any flags!)
| 07-08-2020 03:58:31 | 07-08-2020 03:58:31 | Hi! Could you paste here the result of `pip list` in your environment ?<|||||>```
absl-py 0.9.0
apex 0.1
astor 0.8.1
astunparse 1.6.3
backcall 0.1.0
beautifulsoup4 4.9.1
blis 0.4.1
Bottleneck 1.3.2
cachetools 4.1.0
catalogue 1.0.0
certifi 2020.6.20
chardet 3.0.4
click 7.1.2
cycler 0.10.0
cymem 2.0.3
Cython 0.29.20
decorator 4.4.2
fastai 1.0.61
fastprogress 0.2.3
filelock 3.0.12
fire 0.3.1
future 0.18.2
gast 0.2.2
gluonnlp 0.9.1
google-auth 1.18.0
google-auth-oauthlib 0.4.1
google-pasta 0.2.0
graphviz 0.8.4
grpcio 1.29.0
h5py 2.10.0
idna 2.8
importlib-metadata 1.6.1
ipython 7.14.0
ipython-genutils 0.2.0
jedi 0.17.0
joblib 0.15.1
Keras-Applications 1.0.8
Keras-Preprocessing 1.1.2
kiwisolver 1.2.0
kobert-transformers 0.4.1
kogpt2 0.1.1
kss 1.3.1
Markdown 3.2.2
matplotlib 3.2.2
mecab-python3 1.0.0
murmurhash 1.0.2
mxnet 1.6.0
natto 0.1.7
numexpr 2.7.1
numpy 1.19.0
nvidia-ml-py3 7.352.0
oauthlib 3.1.0
opt-einsum 3.2.1
packaging 20.4
pandas 1.0.5
parso 0.7.0
pdf2image 1.9.0
pexpect 4.8.0
pickleshare 0.7.5
Pillow 6.2.0
pip 20.1.1
plac 1.1.3
preshed 3.0.2
prompt-toolkit 3.0.5
protobuf 3.12.2
psutil 5.7.0
ptyprocess 0.6.0
pyasn1 0.4.8
pyasn1-modules 0.2.8
Pygments 2.6.1
pyparsing 2.4.7
pytesseract 0.2.7
python-dateutil 2.8.1
pytz 2020.1
PyYAML 5.3.1
regex 2017.4.5
requests 2.21.0
requests-oauthlib 1.3.0
rsa 4.6
sacremoses 0.0.43
scikit-learn 0.23.1
scipy 1.4.1
sentencepiece 0.1.91
setuptools 41.2.0
six 1.14.0
soupsieve 2.0.1
soynlp 0.0.493
spacy 2.3.0
srsly 1.0.2
tensorboard 1.15.0
tensorboard-plugin-wit 1.6.0.post3
tensorflow 1.15.0
tensorflow-estimator 1.15.1
termcolor 1.1.0
thinc 7.4.1
threadpoolctl 2.1.0
tokenizers 0.7.0
torch 1.5.1+cu101
torchvision 0.6.1+cu101
tqdm 4.46.1
traitlets 4.3.3
transformers 2.11.0
urllib3 1.24.3
wasabi 0.7.0
wcwidth 0.1.9
Werkzeug 1.0.1
wheel 0.34.2
wrapt 1.12.1
zipp 3.1.0
```
* Result was the same with
```
tokenizers 0.8.1rc1
transformers 3.0.2
```<|||||>Is there anything else I should post?<|||||>Bumping @sgugger to analyze this issue.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,589 | closed | Datasets & collators for NER | # 🚀 Feature request
Is there a plan to add a dataset subclass and loader/collator for token classification tasks in the CoNLL format? Or has that been deliberately avoided for some reason?
## Motivation
We have TextDatasets and DataCollatorForLanguageModeling. Can we have something similar for token classification tasks with a CoNLL-format input text file? That way we can just point to a file and a tokenizer and generate a dataset like the language modeling tasks.
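For illustration, a rough sketch of the kind of CoNLL reader such a dataset class could wrap (hypothetical, not an existing transformers API):
```python
def read_conll(path):
    """Read a CoNLL-style file into (tokens, tags) pairs, one per sentence."""
    examples, tokens, tags = [], [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:  # blank line marks a sentence boundary
                if tokens:
                    examples.append((tokens, tags))
                    tokens, tags = [], []
            else:
                columns = line.split()  # token ... tag (tag assumed in the last column)
                tokens.append(columns[0])
                tags.append(columns[-1])
    if tokens:
        examples.append((tokens, tags))
    return examples
```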
## Your contribution
I have a working version of this that I use in my projects. I can package and share it if it would be useful to add directly to transformers library. | 07-08-2020 01:49:06 | 07-08-2020 01:49:06 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,588 | closed | [Some weights or buffers of the PyTorch model TFGPT2LMHeadModel were not initialized] convert GPT2 pytorch to tensorflow model | # ❓ Questions & Help
How can we convert a GPT2 model fine-tuned with run_language_model.py from PyTorch to a TensorFlow model? I ran into the following warning when importing the PyTorch model GPT2LMHeadModel into TFGPT2LMHeadModel.
```
WARNING:transformers.modeling_tf_pytorch_utils:Some weights or buffers of the PyTorch model TFGPT2LMHeadModel were not initialized from the TF 2.0 model and are newly initialized: ['transformer.h.2.attn.bias', 'transformer.h.4.attn.masked_bias', 'transformer.h.4.attn.bias', 'transformer.h.3.attn.bias', 'transformer.h.3.attn.masked_bias', 'transformer.h.0.attn.masked_bias', 'transformer.h.5.attn.masked_bias', 'transformer.h.1.attn.bias', 'transformer.h.0.attn.bias', 'transformer.h.5.attn.bias', 'transformer.h.1.attn.masked_bias', 'lm_head.weight', 'transformer.h.2.attn.masked_bias']
```
Or is there any tutorial on training a GPT2 language model with TFTrainer?
## Details
<!-- Description of your issue -->
Reproduce
```
from transformers import *
model = GPT2LMHeadModel.from_pretrained("distilgpt2")
model.save_pretrained("./gpt2_pt")
model = TFGPT2LMHeadModel.from_pretrained("./gpt2_pt", from_pt=True)
```
However we won't see the above warning if we use
```
model = TFGPT2LMHeadModel.from_pretrained("distilgpt2")
```
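As a hypothetical sanity check (not part of the original question), the converted weights can be verified by comparing the PyTorch and TensorFlow logits on the same input:
```python
# Assumes the fine-tuned model was saved to ./gpt2_pt as in the snippet above.
import numpy as np
from transformers import GPT2Tokenizer, GPT2LMHeadModel, TFGPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("distilgpt2")
pt_model = GPT2LMHeadModel.from_pretrained("./gpt2_pt")
tf_model = TFGPT2LMHeadModel.from_pretrained("./gpt2_pt", from_pt=True)

pt_logits = pt_model(tokenizer.encode("Hello world", return_tensors="pt"))[0].detach().numpy()
tf_logits = tf_model(tokenizer.encode("Hello world", return_tensors="tf"))[0].numpy()
print(np.abs(pt_logits - tf_logits).max())  # should be tiny (~1e-5) if the conversion worked
```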
If we already use `run_language_model.py` to train a GPT2LMHeadModel model and import it into TFGPT2LMHeadModel, can we safely use the converted model even if we see the warning? | 07-08-2020 01:27:49 | 07-08-2020 01:27:49 | Hi! The way you converted your model is the recommended way :ok_hand:
The warning is irrelevant to this conversion, we should try to make that clearer.<|||||>The warning is very confusing and can be fixed by pull request #6604.<|||||>The warning is very confusing and can be fixed by new pull request #6623 |
transformers | 5,587 | closed | Difference between AutoTokenizer.from_pretrained and BertTokenizer.from_pretrained | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
While loading a pretrained BERT model, what's the difference between AutoTokenizer.from_pretrained and BertTokenizer.from_pretrained? I'm very new to transformers and still confused about some basic things.
Thanks.
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
| 07-08-2020 01:12:09 | 07-08-2020 01:12:09 | Hi! The [documentation](https://huggingface.co/transformers/model_doc/auto.html) mentions:
> In many cases, the architecture you want to use can be guessed from the name or the path of the pretrained model you are supplying to the from_pretrained method.
>
> AutoClasses are here to do this job for you so that you automatically retrieve the relevant model given the name/path to the pretrained weights/config/vocabulary:
>
> Instantiating one of AutoModel, AutoConfig and AutoTokenizer will directly create a class of the relevant architecture (ex: model = AutoModel.from_pretrained('bert-base-cased') will create a instance of BertModel).
So if the string with which you're calling `from_pretrained` is a BERT checkpoint (like `bert-base-uncased`), then this:
```py
AutoTokenizer.from_pretrained("bert-base-uncased")
```
is the same as this:
```py
BertTokenizer.from_pretrained("bert-base-uncased")
```
However, `Auto*` are more flexible as you can specify any checkpoint and the correct model will be loaded, e.g.:
```py
AutoTokenizer.from_pretrained("gpt2") # works and returns the correct GPT2Tokenizer instance
BertTokenizer.from_pretrained("gpt2") # fails
```
Closing this for now, let me know if you have other questions.<|||||>@LysandreJik
I'm trying to recreate the `train_new_from_iterator` method from the `PreTrainedTokenizerFast(PreTrainedTokenizerBase)` class.
But the class that we have for training is different:
```
auto = transformers.AutoTokenizer.from_pretrained("bert-base-cased")
print(type(auto))
<class 'transformers.models.bert.tokenization_bert_fast.BertTokenizerFast'>
```
What is the correct way to do so?<|||||>very nice Information AutoTokenizer |
transformers | 5,586 | closed | GPT2 past usage | Hello everyone,
I tried to answer this [stackoverflow question](https://stackoverflow.com/questions/62703391/estimate-token-probability-logits-given-a-sentence-without-computing-the-entire) and stumbled upon a strange behaviour I can't explain.
The following code will calculate the loss for a sentence with different single words injected:
```
from transformers import GPT2Tokenizer, GPT2LMHeadModel
import torch
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
def score(sentence):
tokenize_input = tokenizer.tokenize(sentence)
tensor_input = torch.tensor([tokenizer.convert_tokens_to_ids(tokenize_input)])
loss = model(tensor_input, labels=tensor_input)
return -loss[0].item()
candidates = ["watch", "run", "think", "apple", "light"]
sent_template = "I like sitting in my new chair and {} about life"
print({candidate: score(sent_template.format(candidate)) for candidate in candidates})
```
Output:
```
{'watch': -5.406847953796387, 'run': -5.533411502838135, 'think': -4.525279521942139, 'apple': -6.158637046813965, 'light': -5.835141658782959}
```
Now I wanted to use the past parameter according to [documentation](https://huggingface.co/transformers/quickstart.html#using-the-past) and expected the same result:
```
from transformers import GPT2Tokenizer, GPT2LMHeadModel
import torch
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
past = "I like sitting in my new chair and"
past_tokenize_input = tokenizer.tokenize(past)
past_tensor_input = torch.tensor([tokenizer.convert_tokens_to_ids(past_tokenize_input)])
_, _, past = model(past_tensor_input, labels=past_tensor_input)
def score(sentence, past):
tokenize_input = tokenizer.tokenize(sentence, )
tensor_input = torch.tensor([tokenizer.convert_tokens_to_ids(tokenize_input)])
loss = model(tensor_input, labels=tensor_input, past=past)
return -loss[0].item()
candidates = ["watch", "run", "think", "apple", "light"]
sent_template = " {} about life"
print({candidate: score(sent_template.format(candidate), past) for candidate in candidates})
```
but the loss is different:
```
{'watch': -7.811002731323242, 'run': -6.370519638061523, 'think': -3.460831642150879, 'apple': -9.08120346069336, 'light': -8.28120231628418}
```
Is this the intended behaviour or am I doing something wrong? | 07-07-2020 21:51:27 | 07-07-2020 21:51:27 | Hey @cronoik,
Thanks for your issue. It does not really surprise me that the loss is different.
In the first case the following loss is calculated:
loss = CrossEntropy(`input_ids`: "I like sitting in my new chair and {} about" vs. `labels`: "like sitting in my new chair and {} about life").
whereas in the second case the following loss is calculated:
loss = CrossEntropy(`input_ids`: "{} about" vs. `labels`: "about life").
This is simplified - in reality the loss between the tokens of those words is calculated.
The important part to note here is that 1) `past` should not be used for training. It should be used to speed up inference.
2) When using `past` only the output embeddings of the `input_ids` (in your case for "{} about life") are calculated and not also for the "cached" past input_ids.
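A minimal sketch of the intended inference-time use of `past` (assuming the 3.x API, where the forward pass without labels returns `(logits, past)`):
```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

ids = tokenizer.encode("I like sitting in my new chair and think about life", return_tensors="pt")
with torch.no_grad():
    full_logits = model(ids)[0]                    # one pass over the whole sequence
    prefix_logits, past = model(ids[:, :-3])[:2]   # run and cache the prefix
    new_logits = model(ids[:, -3:], past=past)[0]  # feed only the last three tokens
# The cached pass reproduces the full-pass logits for the new positions.
print(torch.allclose(full_logits[:, -3:], new_logits, atol=1e-4))  # True (up to float noise)
```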
Hope this answers your question<|||||>@patrickvonplaten Thank you a lot for your answer.<|||||>Hello! I have a question for gpt-2 lmhead model's input 'past_key_values'
I want to use this option with the model.generate method, but there is an error (a dimension error) if I use it by specifying **model_specific_kwargs={'past': past} in model.generate's inputs...
what should I do for using this option for generation...?<|||||>Hey @chaeyoon-jang,
Could you open a new issue for this? |
transformers | 5,585 | closed | Update question template | Point people to the forum. | 07-07-2020 21:28:22 | 07-07-2020 21:28:22 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5585?src=pr&el=h1) Report
> Merging [#5585](https://codecov.io/gh/huggingface/transformers/pull/5585?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d6eab53058015483e9cbcbfee4bf900c3a8ab772&el=desc) will **increase** coverage by `0.80%`.
> The diff coverage is `n/a`.
[Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5585?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5585 +/- ##
==========================================
+ Coverage 76.96% 77.77% +0.80%
==========================================
Files 145 145
Lines 25317 25317
==========================================
+ Hits 19486 19690 +204
+ Misses 5831 5627 -204
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5585?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5585/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5585/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5585/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `83.08% <0.00%> (-14.71%)` | :arrow_down: |
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5585/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5585/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.62% <0.00%> (-2.54%)` | :arrow_down: |
| [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5585/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |
| [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5585/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.68% <0.00%> (-0.72%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5585/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.88% <0.00%> (+0.98%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5585/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5585/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5585?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5585?src=pr&el=footer). Last update [d6eab53...ad24260](https://codecov.io/gh/huggingface/transformers/pull/5585?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,584 | closed | On running finetune.py for seq2seq, the following error comes up: optimizer_step() got an unexpected keyword argument 'using_native_amp' | File "finetune.py", line 344, in <module>
main(args)
File "finetune.py", line 322, in main
logger=logger,
File "/content/drive/My Drive/Colab Notebooks/transformers/examples/lightning_base.py", line 336, in generic_train
trainer.fit(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 979, in fit
self.single_gpu_train(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/distrib_parts.py", line 185, in single_gpu_train
self.run_pretrain_routine(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 1156, in run_pretrain_routine
self.train()
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/training_loop.py", line 370, in train
self.run_training_epoch()
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/training_loop.py", line 452, in run_training_epoch
batch_output = self.run_training_batch(batch, batch_idx)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/training_loop.py", line 657, in run_training_batch
grad_norm_dic = self.run_batch_backward_pass(split_batch, batch_idx, opt_idx, optimizer)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/training_loop.py", line 709, in run_batch_backward_pass
self.call_optimizer_step(optimizer, opt_idx, batch_idx, split_batch)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/training_loop.py", line 747, in call_optimizer_step
using_native_amp=native_amp)
TypeError: optimizer_step() got an unexpected keyword argument 'using_native_amp'
I am using pytorch-lightning==0.8.1.
Can I get some help?
Thank you | 07-07-2020 21:04:42 | 07-07-2020 21:04:42 | Same issue here on 0.8.4. Downgrading to 0.8.1 works for me.
Related to recent pytorch-lightning commits for Apex: https://github.com/PyTorchLightning/pytorch-lightning/commit/0a092f66836912a714804c5103f03c7929ebf774<|||||>however wandb is not working with 0.8.1<|||||>Getting the same issue today with 0.8.4. Was working fine last week -- not sure what changed :-( -- will try to downgrade Lightning and see what happens.
Update: downgrading to Lightning 0.7.5 -- something is running (training now), but getting a bunch of warnings as well. Any idea of the expected compatibility requirements @sshleifer ? Can also try 0.8.1 -- or take a look at any related issues.
I'm running HF Transformers from source, sync'ed today.<|||||>0.8.1 is the current version that I am running. @marton-avrios, wandb works for me in that version. Want to post a traceback in a new issue with your error?<|||||>@sshleifer 0.8.1 works well for me as well. And no warnings, like the older 0.7.x. Seems that 0.8.4 is what you get by default via `pip`. Hopefully that gets fixed on one end or the other, but 0.8.1 good for me. Should have replied sooner...<|||||>The reason it changed is because the overloaded `optimizer_step` does not include the new parameters.
This can be fixed in the demo scripts by adding the parameters to the overriding function.<|||||>> 0.8.1 is the current version that I am running. @marton-avrios, wandb works for me in that version. Want to post a traceback in a new issue with your error?
See #5739 <|||||>> The reason it changed is because the overloaded `optimizer_step` does not include the new parameters.
>
> This can be fixed in the demo scripts by adding the parameters to the overriding function.
Thank you. This helped me out.<|||||>> > The reason it changed is because the overloaded `optimizer_step` does not include the new parameters.
> > This can be fixed in the demo scripts by adding the parameters to the overriding function.
>
> Thank you. This helped me out.
How did you solve it?<|||||>> > > The reason it changed is because the overloaded `optimizer_step` does not include the new parameters.
> > > This can be fixed in the demo scripts by adding the parameters to the overriding function.
> >
> >
> > Thank you. This helped me out.
>
> How did you solve it?
Are you facing this issue ?<|||||>I faced the same issue after upgrading to pytorch-lightning to 0.9.0. The solution above, adding parameters works. In my case using the huggingface lightning_base.py,
I added using_native_amp=None in `def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx, second_order_closure=None, using_native_amp=None):`
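For reference, a rough sketch of the adjusted hook (hypothetical and simplified; the real `lightning_base.py` override does more, e.g. TPU handling):
```python
def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx,
                   second_order_closure=None, using_native_amp=None):
    # Accept the new Lightning keyword so the call no longer raises TypeError.
    optimizer.step()
    optimizer.zero_grad()
    self.lr_scheduler.step()
```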
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,583 | closed | Test XLA examples | Add a script to test examples on XLA. | 07-07-2020 18:55:28 | 07-07-2020 18:55:28 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5583?src=pr&el=h1) Report
> Merging [#5583](https://codecov.io/gh/huggingface/transformers/pull/5583?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/33e43edddcab60217027dcf7f6570eead1195083&el=desc) will **increase** coverage by `1.52%`.
> The diff coverage is `40.00%`.
[Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5583?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5583 +/- ##
==========================================
+ Coverage 76.31% 77.83% +1.52%
==========================================
Files 145 145
Lines 25049 25053 +4
==========================================
+ Hits 19116 19500 +384
+ Misses 5933 5553 -380
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5583?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/testing\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5583/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90ZXN0aW5nX3V0aWxzLnB5) | `76.47% <40.00%> (-4.39%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5583/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5583/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.09% <0.00%> (-1.03%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5583/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.26% <0.00%> (-0.34%)` | :arrow_down: |
| [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/5583/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `27.63% <0.00%> (+1.31%)` | :arrow_up: |
| [src/transformers/tokenization\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5583/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `90.09% <0.00%> (+1.80%)` | :arrow_up: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5583/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+2.25%)` | :arrow_up: |
| [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5583/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `98.63% <0.00%> (+2.73%)` | :arrow_up: |
| [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5583/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `77.55% <0.00%> (+11.22%)` | :arrow_up: |
| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/5583/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `49.09% <0.00%> (+17.09%)` | :arrow_up: |
| ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/5583/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5583?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5583?src=pr&el=footer). Last update [33e43ed...8f84df6](https://codecov.io/gh/huggingface/transformers/pull/5583?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,582 | closed | Rename files | As discussed in #4829 | 07-07-2020 18:25:11 | 07-07-2020 18:25:11 | Mmm, guess this change is not possible as long as the file does not support evaluation as well. |
transformers | 5,581 | closed | [mbart] prepare_translation_batch passes **kwargs to allow DeprecationWarning | Since we deleted the old `pad_to_max_length` kwarg, we want to give the user a deprecation warning if it is passed. By passing all `**kwargs` to tokenizer.__call__, all improper parameter usage will cause appropriate warnings/errors. | 07-07-2020 17:35:44 | 07-07-2020 17:35:44 | |
transformers | 5,580 | closed | TypeError: 'BertTokenizer' object is not callable | ```
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
sequence_a = "HuggingFace is based in NYC"
sequence_b = "Where is HuggingFace based?"
encoded_dict = tokenizer(sequence_a, sequence_b)
```
This will produce error 'BertTokenizer' object is not callable. Maybe I should call `tokenizer.tokenize()`? But I think the [doc](https://huggingface.co/transformers/glossary.html#token-type-ids) should also be updated! | 07-07-2020 16:45:06 | 07-07-2020 16:45:06 | Hello! It seems you're running on an older `transformers` version. The `__call__` was implemented only in version v3.0.0+. Do you mind pasting your environment information here?
You should have a dropdown in the documentation with the different versions, so that you can select the documentation that works for your version.<|||||>yes, you are right. I got the old version because I had the [mmf ](https://github.com/facebookresearch/mmf) framework depending on transformers==2.0.3. So when I run pip install transformers, it just did nothing due to the existence of the old version. Thank you!<|||||>Thank you a lot!<|||||>So we have to update the transformer to the latest version, and it would work? Is that the case |
transformers | 5,579 | closed | OSError: Model name 'facebook/bart-large-cnn' was not found in tokenizers model name list | # 🐛 Bug
## Information
Model I am using (Bart): **facebook/bart-large-cnn**
Language I am using the model on: English
The problem arises when using:
* [x] the official example scripts: summarization
## To reproduce
Steps to reproduce the behavior:
run the code below
```python
from transformers import BartTokenizer, BartForConditionalGeneration, BartConfig
tokenizer = BartTokenizer.from_pretrained('facebook/bart-large-cnn')
```
## Expected behavior
load the model
## Error information
```
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-5-3a96f15c5285> in <module>
----> 1 tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
~/.conda/envs/hug/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, *inputs, **kwargs)
1138
1139 """
-> 1140 return cls._from_pretrained(*inputs, **kwargs)
1141
1142 @classmethod
~/.conda/envs/hug/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in _from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
1244 ", ".join(s3_models),
1245 pretrained_model_name_or_path,
-> 1246 list(cls.vocab_files_names.values()),
1247 )
1248 )
OSError: Model name 'facebook/bart-large-cnn' was not found in tokenizers model name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). We assumed 'facebook/bart-large-cnn' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.
```
## Environment info
- `transformers` version: 3.0.2
- Platform: centos
- Python version: python 3.7.0
- PyTorch version (GPU?): pytorch 1.5.0 cpu_py37hd91cbb3_0
- Tensorflow version (GPU?): tensorflow-gpu 2.0.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: no
| 07-07-2020 16:13:41 | 07-07-2020 16:13:41 | Did you get this solved? I cannot load in t5-large.<|||||>> Did you get this solved? I cannot load in t5-large.
Solved simply by reactivating the environment and restarting the program. I had installed an old version of transformers, and after upgrading transformers I forgot to deactivate the environment.
transformers | 5,578 | closed | [Reformer] - Cache hidden states and buckets to speed up inference | As discussed with the authors in Reformer we cache the hidden states and buckets to speed up inference for language generation. Caching only the hidden states can save at least twice the memory versus caching the key and value output vectors (often more since the `hidden_size` can be smaller than key and query projections).
The idea is to only recompute key and value projections within the same chunk so that the output will be equal.
- [x] Implement caching mechanism for Local Attention
- [x] Implement caching mechanism for LSH Attention
- [x] Add test
- [x] Add logic to generation
- [x] Refactoring and better naming
=> This results in a 10x speed up when generating up to 1000 tokens.
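A minimal way to try generation with the cached version (assuming caching stays enabled by default in `generate`; the prompt and sampling parameters are arbitrary):
```python
from transformers import ReformerModelWithLMHead, ReformerTokenizer

tokenizer = ReformerTokenizer.from_pretrained("google/reformer-crime-and-punishment")
model = ReformerModelWithLMHead.from_pretrained("google/reformer-crime-and-punishment")

input_ids = tokenizer("A few months later", return_tensors="pt")["input_ids"]
# hidden states and buckets are cached internally while decoding
output_ids = model.generate(input_ids, do_sample=True, temperature=0.7, max_length=100)
print(tokenizer.decode(output_ids[0]))
```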
**Some fun trials trying to make Reformer generate 16000 tokens for its own Wikipedia article [here](https://colab.research.google.com/drive/1Oao8vBtDkz6v1E1efUhTlug5uuxC_BnE?usp=sharing).**
*Review*:
Not urgent. Merging this PR can wait a bit. Added a couple of tests to make sure caching gives same output as non-caching. | 07-07-2020 15:38:20 | 07-07-2020 15:38:20 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5578?src=pr&el=h1) Report
> Merging [#5578](https://codecov.io/gh/huggingface/transformers/pull/5578?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8ab565a4be5a7fd96b19ef88d474037ef31f27e5&el=desc) will **decrease** coverage by `0.10%`.
> The diff coverage is `98.47%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5578?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5578 +/- ##
==========================================
- Coverage 77.32% 77.21% -0.11%
==========================================
Files 146 146
Lines 26047 26198 +151
==========================================
+ Hits 20141 20230 +89
- Misses 5906 5968 +62
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5578?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5578/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `81.49% <ø> (ø)` | |
| [src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/5578/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `90.56% <98.47%> (+2.70%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5578/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5578/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.46% <0.00%> (-0.76%)` | :arrow_down: |
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5578/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.11% <0.00%> (+0.28%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5578/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5578?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5578?src=pr&el=footer). Last update [8ab565a...7cd6a8f](https://codecov.io/gh/huggingface/transformers/pull/5578?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>@LysandreJik @sgugger - when you guys are back, feel free to merge to PR or we can wait until I'm back - not too urgent this PR. LGTM to merge.<|||||>Okey merging this.
@sgugger @LysandreJik @sshleifer -> checked caching extensively and added multiple tests for equality.
Would be nice if you can tag me for possible future issues related to this caching mechanism. |
transformers | 5,577 | closed | Fix tokenizers pretrained saving/loading | Fix #5571
I managed to fix the saving part, but I don't really know what to do for the loading part (in `_from_pretrained`). | 07-07-2020 14:18:44 | 07-07-2020 14:18:44 | Hey @n1t0, great job, saving works now! :)
Do you think `AddedToken` instances should be serialized with additional meta data?
I managed to tweak JSON (de)serialization with custom hooks to process objects properly but it is difficult to infer the deserializing type from a single set of attributes, passing `__class__` meta data helps to make guesses at what types to deserialize. <|||||>Superseded by #6026 |
transformers | 5,576 | closed | Fix tests imports dpr | There were changes in the locations of some functions used for tests in #5350.
When #5279 was merged some tests couldn't run.
I fixed the imports of those functions.
When I re-ran the tests I noticed that some didn't pass for the DPRReaderTokenizer so I fixed them.
| 07-07-2020 14:08:54 | 07-07-2020 14:08:54 | Some tests are failing, please only merge once everything pass.<|||||>Yep I'm on it<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5576?src=pr&el=h1) Report
> Merging [#5576](https://codecov.io/gh/huggingface/transformers/pull/5576?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d2a93991158f15993eba9ab421d82766b892f948&el=desc) will **increase** coverage by `0.14%`.
> The diff coverage is `74.31%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5576?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5576 +/- ##
==========================================
+ Coverage 76.84% 76.99% +0.14%
==========================================
Files 141 145 +4
Lines 24685 25049 +364
==========================================
+ Hits 18969 19286 +317
- Misses 5716 5763 +47
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5576?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `87.65% <ø> (ø)` | |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.60% <ø> (ø)` | |
| [src/transformers/data/datasets/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL3NxdWFkLnB5) | `47.56% <47.56%> (ø)` | |
| [src/transformers/tokenization\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/5576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `57.65% <57.65%> (ø)` | |
| [src/transformers/modeling\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/5576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kcHIucHk=) | `97.45% <97.45%> (ø)` | |
| [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/5576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.24% <100.00%> (+0.01%)` | :arrow_up: |
| [src/transformers/configuration\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/5576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2Rwci5weQ==) | `100.00% <100.00%> (ø)` | |
| [src/transformers/data/datasets/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/5576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL19faW5pdF9fLnB5) | `100.00% <100.00%> (ø)` | |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: |
| ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/5576/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5576?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5576?src=pr&el=footer). Last update [e49393c...878b09c](https://codecov.io/gh/huggingface/transformers/pull/5576?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>All green, merging |
transformers | 5,575 | closed | Seperating premise and hypothesis in MNLI | # ❓ Questions & Help
I'm adding it here since I didn't receive any reply on SO. I think this query might be relevant for people who are working in few-shot, contrastive/metric learning space.
I'm trying to implement Siamese like transformer architecture. Similar work has been done in SentenceBERT paper. I'm facing an issue. To seperate hypothesis and premise, I modify [this line](https://github.com/huggingface/transformers/blob/3dcb748e31be8c7c9e4f62926c5c144c62d07218/src/transformers/data/processors/glue.py#L131) from `_glue_convert_examples_to_features`. Instead I do
```
batch_encoding_a = tokenizer(
[example.text_a for example in examples],
max_length=max_length,
padding="max_length",
truncation=True,
)
```
Did the same thing for `examples.text_b` to obtain `batch_encoding_b`. Then I modify the GlueDataset by modifying [this line mainly](https://github.com/huggingface/transformers/blob/3dcb748e31be8c7c9e4f62926c5c144c62d07218/src/transformers/data/datasets/glue.py#L125), since this will return two items now (segregated "hypothesis" and premise").
Then `__getitem__` is modified accordingly to return `self.features_a[i], self.features_b[i]`.
That's the gist of how I'm obtaining segregated "hypothesis" and "premise". These are then passed to two BERTs (or one BERT if its weights are kept frozen).
This is how I've defined the `collate_fn`
```
def siamese_data_collator(batch):
features_a, features_b = [], []
for item in batch:
for k, v in item.items():
if k == "a":
features_a.append(v)
else:
features_b.append(v)
return {
"a": default_data_collator(features_a),
"b": default_data_collator(features_b),
}
```
Then the `dataloader` is created in the usual way. So when we iterate like this:
```
def _training_step(...):
model.train()
for k, v in inputs["a"].items():
if isinstance(v, torch.Tensor):
inputs["a"][k] = v.to(self.args.device)
# we get inputs['a'] and inputs['b'] which is passed to the model
```
I had to modify `_training_step` and `evaluate` accordingly in the `Trainer` class.
Now the problem is, the model doesn't learn at all (`bert-base-uncased`). I tried using my `model` and modified `Trainer` with the standard `GlueDataset`, and it works. This leads to the conclusion that something is off with the data. The model should learn something (even if it is not being fed concatenated "hypothesis" and "premise").
The model basically has one BERT and one linear layer. The logits come from the linear layer and are then used to compute the loss function (a typical Siamese-like architecture).
Can you suggest if there's an issue in how the `Dataset` is being created here, or propose something of your own to segregate "hypothesis" and "premise" so that they can be fed separately to BERT.
Link to [Stack Overflow question](https://stackoverflow.com/questions/62771502/seperating-premise-and-hypothesis-in-mnli) | 07-07-2020 14:05:29 | 07-07-2020 14:05:29 | Might be of interest to @joeddav :)<|||||>I've solved this problem. Thanks a lot @joeddav for even showing interest. You guys are very supportive. I'll post what I did so that if someone is stuck, they can refer.
In all `SequenceClassification` model, there's a linear layer. So we can straightaway add the loss from both heads. You can choose to decide how to process the logits , for ex. `[u+v, u-v, u*v]` where `u` and `v` are the respective output vector/logits. It was not a good idea to directly deal with raw hidden states from `BertModel` in my case. I'm closing it now.<|||||>@prajjwal1 Glad you figured it out! FYI we launched a [discussion forum](https://discuss.huggingface.co/) this week (after you opened this issue I think). Questions like this would be well-suited to that forum if you have more to ask or want to help out other people in the community! 😇<|||||>@joeddav Yeah I have answered a couple of questions there already. I was the one who commented about requesting a forum and posted a link at ACL chat. The forum came into being the next day. Maybe someone was working inside the team while other team members didn't know. But really good to have it.<|||||>@prajjwal1 Ahhh yess, sorry I didn't make the connection :) Yes, we had been having some discussions about having our own forum and I knew we had some people working on it, but none of us on rocket chat realized it would be released the next day haha |
transformers | 5,574 | closed | Create README.md for electra-base-squad2 | Readme file for deepset/electra-base-squad2 | 07-07-2020 14:00:53 | 07-07-2020 14:00:53 | Thank you! |
transformers | 5,573 | closed | MBARTTokenizer set_lang logic will only work for src_lang=en_XX | 07-07-2020 13:46:56 | 07-07-2020 13:46:56 | ||
transformers | 5,572 | closed | Create README.md | 07-07-2020 12:26:43 | 07-07-2020 12:26:43 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5572?src=pr&el=h1) Report
> Merging [#5572](https://codecov.io/gh/huggingface/transformers/pull/5572?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d2a93991158f15993eba9ab421d82766b892f948&el=desc) will **increase** coverage by `0.23%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5572?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5572 +/- ##
==========================================
+ Coverage 76.84% 77.07% +0.23%
==========================================
Files 141 141
Lines 24685 24685
==========================================
+ Hits 18969 19027 +58
+ Misses 5716 5658 -58
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5572?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5572/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `61.90% <0.00%> (-33.34%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5572/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.69% <0.00%> (-29.45%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5572/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.96% <0.00%> (-0.26%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5572/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.38% <0.00%> (+68.46%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5572?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5572?src=pr&el=footer). Last update [d2a9399...e5c8277](https://codecov.io/gh/huggingface/transformers/pull/5572?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 5,571 | closed | Tokenizers save_pretrained doesn't work with custom vocabs (v3.0.2) | # 🐛 Bug
## Information
I'm using an instance of `RobertaTokenizerFast` with my custom vocab (pretrained using `tokenizers` library). When I try to save the tokenizer using `tokenizer.save_pretrained(<path>)` method a type error occurs.
I did a bit of investigation why this happens and got a workaround https://github.com/huggingface/transformers/issues/5393#issuecomment-654342933
## To reproduce
Steps to reproduce the behavior:
1. Choose any tokenizer that adds `AddedToken` to kwargs on init (e.g. `RobertaTokenizerFast` which adds a `mask_token` in the following manner)
2. Train a custom vocab using `tokenizers` library or download a pretrained vocab
3. Create an instance of chosen tokenizer by passing vocab and merges files to its constructor
4. Call `save_pretrained(<path>)` method
```python
tokenizer_kwargs = RobertaTokenizerFast.from_pretrained("roberta-base").init_kwargs
vocab_file = tokenizer_kwargs.get("vocab_file")
merges_file = tokenizer_kwargs.get("merges_file")
tokenizer = RobertaTokenizerFast(vocab_file, merges_file)
tokenizer.save_pretrained(".")
```
This will result in the following error:
```python
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
in
----> 1 tokenizer.save_pretrained(".")
~/app/.venv/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in save_pretrained(self, save_directory)
1360
1361 with open(tokenizer_config_file, "w", encoding="utf-8") as f:
-> 1362 f.write(json.dumps(tokenizer_config, ensure_ascii=False))
1363
1364 with open(special_tokens_map_file, "w", encoding="utf-8") as f:
~/.pyenv/versions/3.7.6/lib/python3.7/json/__init__.py in dumps(obj, skipkeys, ensure_ascii, check_circular, allow_nan, cls, indent, separators, default, sort_keys, **kw)
236 check_circular=check_circular, allow_nan=allow_nan, indent=indent,
237 separators=separators, default=default, sort_keys=sort_keys,
--> 238 **kw).encode(obj)
239
240
~/.pyenv/versions/3.7.6/lib/python3.7/json/encoder.py in encode(self, o)
197 # exceptions aren't as detailed. The list call should be roughly
198 # equivalent to the PySequence_Fast that ''.join() would do.
--> 199 chunks = self.iterencode(o, _one_shot=True)
200 if not isinstance(chunks, (list, tuple)):
201 chunks = list(chunks)
~/.pyenv/versions/3.7.6/lib/python3.7/json/encoder.py in iterencode(self, o, _one_shot)
255 self.key_separator, self.item_separator, self.sort_keys,
256 self.skipkeys, _one_shot)
--> 257 return _iterencode(o, 0)
258
259 def _make_iterencode(markers, _default, _encoder, _indent, _floatstr,
~/.pyenv/versions/3.7.6/lib/python3.7/json/encoder.py in default(self, o)
177
178 """
--> 179 raise TypeError(f'Object of type {o.__class__.__name__} '
180 f'is not JSON serializable')
181
TypeError: Object of type AddedToken is not JSON serializable
```
## Expected behavior
I expect the same behavior as if you would save a pretrained tokenizer from the hub:
```python
tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
tokenizer.save_pretrained(".")
```
Which in its turn results in producing 4 files:
```
('/app/vocab.json',
'/app/merges.txt',
'/app/special_tokens_map.json',
'/app/added_tokens.json')
```
## Possible solution
A possible approach would be to pass hooks to json methods for (de)serializing objects of complex type like `AddedToken`. Generally, a solution must preserve the type information but since types are a subject of change it's unclear whether such generalization needs to be followed at all.
This is a sketch for the solution described above.
```python
def deserialize_json_object(json_obj: Dict[Text, Any]) -> Any:
classname = json_obj.get("__class__", "dict")
try:
obj = eval(classname)(**json_obj)
except NameError:
obj = json_obj
return obj
def serialize_object_to_json(obj: Any) -> Dict[Text, Any]:
get_state = getattr(obj, "__getstate__", None)
if callable(get_state):
json_obj = get_state()
json_obj["__class__"] = type(obj).__name__
else:
json_obj = obj.__dict__
return json_obj
json.load(<obj>, object_hook=deserialize_json_object)
json.dumps(<obj>, default=serialize_object_to_json)
```
Another solution would be to map `AddedToken` instances to dicts in `tokenizer_config` similar to constructing a `write_dict` object here:
https://github.com/huggingface/transformers/blob/d6b0b9d451e7ffe0da72a5f532cc9bec1563e801/src/transformers/tokenization_utils_base.py#L1368
This will require deserializing the dicts back into `AddedToken` instances when loading vocabs and symbols.
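A rough sketch of this second variant (the `__type` key is just a marker I made up for round-tripping; the field names mirror `AddedToken`'s constructor arguments):
```python
from tokenizers import AddedToken


def added_token_to_dict(token):
    # keep enough information to rebuild the AddedToken on load
    return {
        "__type": "AddedToken",
        "content": token.content,
        "single_word": token.single_word,
        "lstrip": token.lstrip,
        "rstrip": token.rstrip,
    }


def dict_to_added_token(value):
    # inverse mapping, to be applied to the loaded kwargs in _from_pretrained
    if isinstance(value, dict) and value.get("__type") == "AddedToken":
        return AddedToken(
            value["content"],
            single_word=value["single_word"],
            lstrip=value["lstrip"],
            rstrip=value["rstrip"],
        )
    return value


# in save_pretrained, before json.dumps:
# tokenizer_config = {k: added_token_to_dict(v) if isinstance(v, AddedToken) else v
#                     for k, v in tokenizer_config.items()}
```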
## Environment info
- `transformers` version: 3.0.2
- Platform: macOS Catalina 10.15.5
- Python version: 3.7.6
- PyTorch version (GPU?): 1.5.1 (No)
- Tensorflow version (GPU?): N/A
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 07-07-2020 09:40:21 | 07-07-2020 09:40:21 | Fixed by #6026<|||||>> Fixed by #6026
I am still experiencing this issue when running transformers 3.1
If you load from pretrained as follows:
```py
tokenizer = RobertaTokenizer.from_pretrained("path/to/tokenizer/folder")
```
then the tokenizer init_kwargs will load appropriately. The init_kwargs takes the form of a dictionary as follows:
```py
{
merges_file: "file/path/...",
model_max_length: 512,
vocab_file: "file/path/...",
}
```
In such a scenario the tokenizer can be saved using the save_pretrained functionality as intended.
However, when defining the tokenizer using the vocab_file and merges_file arguments, as follows:
```py
tokenizer = RobertaTokenizer(vocab_file='file/path/vocab.json', merges_file='file_path/merges.txt')
```
the resulting init_kwargs appears to default to:
```py
{
    "bos_token": AddedToken(bos_token, lstrip=False, rstrip=False),
    "eos_token": AddedToken(eos_token, lstrip=False, rstrip=False),
    "sep_token": AddedToken(sep_token, lstrip=False, rstrip=False),
    "cls_token": AddedToken(cls_token, lstrip=False, rstrip=False),
    "unk_token": AddedToken(unk_token, lstrip=False, rstrip=False),
    "pad_token": AddedToken(pad_token, lstrip=False, rstrip=False),
}
```
which I see is defined within the RobertaTokenizer class of tokenization_roberta, but these should be assigned along with the vocab_file and merges_file, which does not appear to be the case.
This means that within the save_pretrained() function, the lines that are causing the issue are:
```py
tokenizer_config = copy.deepcopy(self.init_kwargs)
if len(self.init_inputs) > 0:
tokenizer_config["init_inputs"] = copy.deepcopy(self.init_inputs)
for file_id in self.vocab_files_names.keys():
tokenizer_config.pop(file_id, None)
with open(tokenizer_config_file, "w", encoding="utf-8") as f:
f.write(json.dumps(tokenizer_config, ensure_ascii=False))
```
The solution #6026 looks like it addresses a separate part of the save_pretrained function, and hasn't stopped this error being raised when I run the above scenario? I am running transformers = 3.1, with tokenizers = 0.8.1rc2<|||||>Hello! If you check the init method of the `RobertaTokenizer`, you'll see it does not only expect the `vocab_file` and `merges_file`, but it also accepts the special tokens you mention:
```py
def __init__(
self,
vocab_file,
merges_file,
errors="replace",
bos_token="<s>",
eos_token="</s>",
sep_token="</s>",
cls_token="<s>",
unk_token="<unk>",
pad_token="<pad>",
mask_token="<mask>",
add_prefix_space=False,
**kwargs
):
```
This should allow you to specify which special tokens should be used.
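For instance, something along these lines (the file paths are placeholders for your trained files):
```python
from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer(
    vocab_file="path/to/vocab.json",
    merges_file="path/to/merges.txt",
    unk_token="<unk>",
    pad_token="<pad>",
    mask_token="<mask>",
)
tokenizer.save_pretrained("path/to/output_dir")
```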
If this does not solve your issue, do you mind opening a new issue with your specific problem, including the code that raises the error and the full stack-trace? This will help us help you. Thank you!<|||||>Hi @LysandreJik, thanks for the quick reply. Unfortunately defining the other arguments didn't appear to solve the issue. I have opened a new issue: #8306, which includes code to recreate the problem.
transformers | 5,570 | closed | Freeze the token embeddings for finetuning | Hi ,
I have finetuned the T5 model using the community notebooks provided. But when I looked into the finetuning code under examples/seq2seq, the token embedding weights are frozen.
Can someone shed some light on how this affects the finetuning process?
Thanks | 07-07-2020 09:06:06 | 07-07-2020 09:06:06 | freezing embedding weights will result in faster training and will also consume less GPU memory. It might result in slightly less performance but I guess it ultimately depends on the task, the dataset etc.<|||||>A tangent question: is it possible to freeze only a subset of the token embedding weights? Background: I am adding some new tokens and I want the model to only fine tune on those.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>It would be a very interesting feature proposed by @AzharSultan like for ***prompt learning*** purpose.
PyTorch doesn't allow updating only a part of the embedding lookup table out of the box. A solution would be either masking gradients or creating a new lookup table just for the added tokens. Any plan to add that?<|||||>@montellasebastien did you experiment with these workarounds?<|||||>I am also interested in @AzharSultan's question. What is the recommended way of doing this right now?
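In the meantime I've been experimenting with the gradient-masking idea mentioned above; a rough sketch of it (the token ids to keep trainable are made-up examples standing in for the newly added tokens):
```python
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("t5-small")
embeddings = model.get_input_embeddings()

# rows that should stay trainable -- placeholder ids for the newly added tokens
new_token_ids = torch.tensor([32100, 32101, 32102])
grad_mask = torch.zeros(embeddings.weight.size(0), 1)
grad_mask[new_token_ids] = 1.0

# zero the gradient of every original row so only the new rows get updated
embeddings.weight.register_hook(lambda grad: grad * grad_mask.to(grad.device))
```
Note that an optimizer with weight decay (e.g. AdamW) can still shrink the masked rows, so the embedding weight should go in a parameter group with `weight_decay=0`.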
transformers | 5,569 | closed | BertEmbeddings code for position_embeddings and word_embeddings | I am trying to figure out how exactly positions and words are embedded in BERT. I checked the code, and it seems the following code is used for the purpose. This implies that both embedding functions are calling the same `__init__` function from the class `nn.Embedding`. Where exactly is the code doing the actual position embedding (sine and cosine based)? Where can I see that code?
self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=config.pad_token_id)
self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size)
inputs_embeds = self.word_embeddings(input_ids)
position_embeddings = self.position_embeddings(position_ids)` | 07-07-2020 08:21:30 | 07-07-2020 08:21:30 | Hi, BERT uses absolute positional embeddings, not based on sinusoidal functions. The positional embeddings are learned.
You can check the [XLNet code](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_xlnet.py#L704-L712) or the [TransfoXL code](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_transfo_xl.py#L170-L186) for examples showcasing sinusoidal (relative) embeddings. |
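You can also verify this directly on a loaded checkpoint; the position embeddings are an ordinary learned lookup table rather than a fixed sinusoidal buffer:
```python
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-uncased")
print(model.embeddings.position_embeddings)                       # Embedding(512, 768)
print(model.embeddings.position_embeddings.weight.requires_grad)  # True, i.e. learned during pre-training
```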
transformers | 5,568 | closed | Use pretrained bert without embedding layers. | # ❓ Questions & Help
## Details
https://arxiv.org/pdf/2005.07421.pdf
I was inspired by the above paper. The authors feed BERT their soft-masked embeddings as input, which is not the original input format of BERT.
So is there any way for me to get BERT without its embedding layers?
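Concretely, what I have in mind is feeding my own embeddings straight into the encoder stack and skipping `BertEmbeddings` entirely. A rough sketch of the kind of call I'm hoping for (`soft_masked_embeds` is just a random placeholder for the output of the soft-masking module in the paper):
```python
import torch
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-uncased")

# placeholder for externally computed embeddings, shape (batch_size, seq_len, hidden_size)
soft_masked_embeds = torch.randn(1, 8, model.config.hidden_size)
attention_mask = torch.ones(1, 8, dtype=torch.long)

extended_mask = model.get_extended_attention_mask(
    attention_mask, soft_masked_embeds.shape[:2], soft_masked_embeds.device
)
sequence_output = model.encoder(soft_masked_embeds, attention_mask=extended_mask)[0]
```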
| 07-07-2020 08:20:22 | 07-07-2020 08:20:22 | This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 5,567 | closed | How to get gradient wrt to a word embedding layer pytorch? | # ❓ Questions & Help
## Details
The idea behind this is that I want to try some old-school gradient-ascent-style visualization with a BERT model.
I want to know the effect of changes in the embedding layer on a specific dimension of a specific layer's output. Thus, I take the gradient of that output dimension wrt the first word embedding layer's output.
The best thing I can do here is the following:
```
from transformers import BertTokenizer, BertModel
model = BertModel.from_pretrained('bert-base-uncased', output_attentions=True,output_hidden_states=True)
tokenizer = BertTokenizer.from_pretrained(model_version, do_lower_case=True)
s = 'I want to sleep'
inputs = tokenizer.encode_plus(s,return_tensors='pt', add_special_tokens=False,is_pretokenized=True)
input_ids = inputs['input_ids']
output = model(input_ids)
hidden_states = output[-2]
X = hidden_states[0] #embedding space, shape: [1,4,768] (batch_size,sentence_length,embedding dimension)
y = hidden_states[3][0][0][0] ##the 0th position and 0th dimension of output of 3rd hidden layer. Dimension should just be [1], a scalar.
torch.autograd.grad(y, X, retain_graph=True, create_graph=True)  # I take the gradient of y wrt X. Since y is a scalar, the dimension of the gradient is just the dimension of X.
```
This is, however, not good enough. I want the gradient wrt the actual word embedding layer. However, Transformer's embedding contains "position_embedding" and "token_type_embedding". Here's the code for the first layer embedding:
```
class BertEmbeddings(nn.Module):
"""Construct the embeddings from word, position and token_type embeddings.
"""
def __init__(self, config):
super(BertEmbeddings, self).__init__()
self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=0)
self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size)
self.token_type_embeddings = nn.Embedding(config.type_vocab_size, config.hidden_size)
# self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load
# any TensorFlow checkpoint file
self.LayerNorm = BertLayerNorm(config.hidden_size, eps=config.layer_norm_eps)
self.dropout = nn.Dropout(config.hidden_dropout_prob)
def forward(self, input_ids, token_type_ids=None, position_ids=None):
seq_length = input_ids.size(1)
if position_ids is None:
position_ids = torch.arange(seq_length, dtype=torch.long, device=input_ids.device)
position_ids = position_ids.unsqueeze(0).expand_as(input_ids)
if token_type_ids is None:
token_type_ids = torch.zeros_like(input_ids)
words_embeddings = self.word_embeddings(input_ids)
position_embeddings = self.position_embeddings(position_ids)
token_type_embeddings = self.token_type_embeddings(token_type_ids)
embeddings = words_embeddings + position_embeddings + token_type_embeddings
embeddings = self.LayerNorm(embeddings)
embeddings = self.dropout(embeddings)
return embeddings
```
Ideally, I want the gradient wrt JUST "words_embeddings", rather than wrt "words_embeddings + position_embeddings + token_type_embeddings" followed by LayerNorm and dropout.
I think I can do this by modifying the model. Is there a way to do it without changing the model?
**A link to original question on Stack Overflow**: https://stackoverflow.com/questions/62750798/how-to-get-gradient-wrt-to-a-specific-layers-output-pytorch
| 07-07-2020 05:40:17 | 07-07-2020 05:40:17 | Hi, the `word_embeddings` variable is not returned so you won't obtain these except if you modify the file.<|||||>Thanks! |
transformers | 5,566 | closed | [docs] fix model_doc links in model summary | Possible fix for #5561
@sgugger | 07-07-2020 05:09:16 | 07-07-2020 05:09:16 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5566?src=pr&el=h1) Report
> Merging [#5566](https://codecov.io/gh/huggingface/transformers/pull/5566?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1d2332861f5225aef17bd7e75abc670a72239081&el=desc) will **increase** coverage by `0.35%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5566?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5566 +/- ##
==========================================
+ Coverage 77.20% 77.55% +0.35%
==========================================
Files 141 141
Lines 24638 24638
==========================================
+ Hits 19021 19108 +87
+ Misses 5617 5530 -87
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5566?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.78% <0.00%> (-74.20%)` | :arrow_down: |
| [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `61.90% <0.00%> (-33.34%)` | :arrow_down: |
| [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.72% <0.00%> (-1.38%)` | :arrow_down: |
| [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.68% <0.00%> (-0.72%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.26%)` | :arrow_down: |
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.11% <0.00%> (+0.28%)` | :arrow_up: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.60% <0.00%> (+1.16%)` | :arrow_up: |
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+8.92%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.04% <0.00%> (+49.56%)` | :arrow_up: |
| ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/5566/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5566?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5566?src=pr&el=footer). Last update [1d23328...028e84a](https://codecov.io/gh/huggingface/transformers/pull/5566?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>No, this won't work across versions (if you switch to master docs for instance). The problem is in the first slash, I think removing it should be enough to make all link works.<|||||>Double-checked locally, this is the right fix: `/model_doc/gpt` -> `model_doc/gpt`
<|||||>should I make these changes ? also model_doc/gpt or model_doc/gpt.html ?<|||||>No need for the .html.
I can force-push on your branch or let you do the changes, your call!<|||||>How much time does it take to these changes be reflected in the website?
Also, can I close my issue after this fix?
@sgugger
Thanks! :)<|||||>The thing is that the bug will probably stay in the stable version of the doc forever (unless we manage to cherry-pick it somehow @LysandreJik ?). It is reflected in the [master version](https://huggingface.co/transformers/master/model_summary.html) already.
You can close your issue whenever you like :-) |
transformers | 5,565 | closed | ❓ Why multiplying the output of T5 by some scalar before LM head ? | # ❓ Questions & Help
I'm wondering why the outputs of T5 are multiplied by some scalar before being fed into the LM head:
https://github.com/huggingface/transformers/blob/1d2332861f5225aef17bd7e75abc670a72239081/src/transformers/modeling_tf_t5.py#L1147
| 07-07-2020 04:53:44 | 07-07-2020 04:53:44 | I think this would be a great question for our new forum, with a title like "T5 architecture": https://discuss.huggingface.co/ - would you mind posting it there? This seems like an interesting question people would probably like to see there.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,564 | closed | Where is the documentation on migrating to the 3.0 tokenizer API? | I see that you folks have completely changed the API to do tokenizing, e.g. for BertTokenizer. I have a lot of code using the two methods `encode_plus()` and `batch_encode_plus()`, and when I went to the [documentation](https://huggingface.co/transformers/main_classes/tokenizer.html) to look up an argument, I found that these methods are completely gone. All that remains is a little blurb saying:
> `BatchEncoding` holds the output of the tokenizer’s encoding methods (`__call__`, `encode_plus` and `batch_encode_plus`) and is derived from a Python dictionary.
Are these two methods deprecated now? Did you post a migration guide for users?
On the main [Huggingface Transformers page](https://github.com/huggingface/transformers), you have sections for `Migrating from pytorch-transformers to transformers` and `Migrating from pytorch-pretrained-bert to transformers`, so it's not like there's no precedent for you to provide some information to users on major API changes. | 07-07-2020 03:17:26 | 07-07-2020 03:17:26 | Hi @githubrandomuser2017 , AFAIK these methods will still work in v.3.0 as backward compatibility is maintained. This can help
https://huggingface.co/transformers/preprocessing.html<|||||>@patil-suraj The page you mentioned (https://huggingface.co/transformers/preprocessing.html) doesn't mention anything about the missing functions (`encode_plus()` and `batch_encode_plus()`). Also, the functions are not in the tokenizer documentation page anymore (https://huggingface.co/transformers/main_classes/tokenizer.html). I need to look up some arguments.<|||||>for me `encode_plus()` and `batch_encode_plus()` are working as expected in v3.<|||||>Also as far as I understand
The `tokenizer` object is now callable and by default it behaves like `encode_plus`, i.e. it returns `input_ids` along with `attention_mask`, `token_type_ids`, etc. This can be controlled using the `return_attention_mask` and `return_token_type_ids` arguments.
So by default if you provide a single example to `tokenizer` it will behave as encode_plus and if you provide a batch of examples it'll behave like `batch_encode_plus`.
padding and truncation is now controlled using `padding` and `truncation` arguments.
So
```python3
tokenizer.encode_plus("some text", max_length=512, pad_to_max_length=True)
```
is now equivalent to
```python3
tokenizer("some text", max_length=512, padding="max_length", truncation=True)
````
and
```python3
tokenizer.batch_encode_plus(["some text", "some other text"], max_length=512, pad_to_max_length=True)
```
is equivalent to
```python3
tokenizer(["some text", "some other text"], max_length=512, padding="max_length", truncation=True)
````
https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.__call__<|||||>Indeed, @patil-suraj is right, `__call__` is a wrapper on top of both `encode_plus` and `batch_encode_plus` which will dispatch depending on whether you are providing a single example or a batch.
We don't promote the use of `encode_plus` and `batch_encode_plus` but we kept backward compatibility for them so they are still available and behave identically as in transformers version `2.X`.
If you want to access the doc for them, use the doc for version `2.X`: https://huggingface.co/transformers/v2.11.0/main_classes/tokenizer.html?highlight=batch_encode_plus#transformers.PreTrainedTokenizer.batch_encode_plus<|||||>But we are right we should add a migration guide, in particular for the padding and truncation commands. We can use the detailed migration guide in the description of this PR as a basis https://github.com/huggingface/transformers/pull/4510.
@sgugger do you want to give it a look when you have some bandwidth?
<|||||>Will try do something before the end of the week.<|||||>I took a stab at it and added it to our brand new forum, look [here](https://discuss.huggingface.co/t/migration-guide-from-v2-x-to-v3-x-for-the-tokenizer-api/55).<|||||>Wonderful! Thank you.
For the forum, do I use my Github credentials?<|||||>No, you need to create an account on https://huggingface.co/ if you don't have one already (same as for uploading models to the hub).<|||||>Thank you. Can you please put that fact (creating a huggingface.co account) on the main forum webpage? https://discuss.huggingface.co/ |
transformers | 5,563 | closed | Bug in Question Answering pipeline when question is weird (unanswerable) | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): TFDistilBertForQuestionAnswering
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
- Code
```python
from transformers import pipeline
qanlp = pipeline("question-answering", framework="tf") # even PyTorch gives the error
qanlp(context="I am a company", question="When is the bill due?", handle_impossible_answer=True) # happens even without handle_impossible_answer
```
- Error
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-4-4da7a3b5ca0e> in <module>()
----> 1 qanlp(context="I am a company", question="When is the bill due?", handle_impossible_answer=True)
/usr/local/lib/python3.6/dist-packages/transformers/pipelines.py in __call__(self, *args, **kwargs)
1314 ),
1315 }
-> 1316 for s, e, score in zip(starts, ends, scores)
1317 ]
1318
/usr/local/lib/python3.6/dist-packages/transformers/pipelines.py in <listcomp>(.0)
1314 ),
1315 }
-> 1316 for s, e, score in zip(starts, ends, scores)
1317 ]
1318
KeyError: 0
```
- Happens even on this hosted inference API https://huggingface.co/distilbert-base-cased-distilled-squad?text=When+is+the+bill+due%3F&context=I+am+a+company%0A
## Expected behavior
Either give a wrong answer or a blank answer without any errors
## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.1+cu101 (False)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: Tried with and without GPU in Google colab
- Using distributed or parallel set-up in script?: No
| 07-07-2020 01:17:06 | 07-07-2020 01:17:06 | I believe https://github.com/huggingface/transformers/pull/5542 is trying to fix exactly this.<|||||>Yes, I see. The pipeline failing on that pull request shows the same error.
But, I think the fix would lie somewhere in squad feature creation because the `feature.token_to_orig_map` is what gives the `KeyError`. Took a diff between `v2.11.0` and `v3.0.2` for `pipelines.py` and here's what changed for the `QuestionAnsweringPipeline`
```diff
class QuestionAnsweringArgumentHandler(ArgumentHandler):
@@ -1165,12 +1253,12 @@ class QuestionAnsweringPipeline(Pipeline):
examples = self._args_parser(*args, **kwargs)
features_list = [
squad_convert_examples_to_features(
- [example],
- self.tokenizer,
- kwargs["max_seq_len"],
- kwargs["doc_stride"],
- kwargs["max_question_len"],
- False,
+ examples=[example],
+ tokenizer=self.tokenizer,
+ max_seq_length=kwargs["max_seq_len"],
+ doc_stride=kwargs["doc_stride"],
+ max_query_length=kwargs["max_question_len"],
+ is_training=False,
tqdm_enabled=False,
)
for example in examples
@@ -1184,33 +1272,34 @@ class QuestionAnsweringPipeline(Pipeline):
with self.device_placement():
if self.framework == "tf":
fw_args = {k: tf.constant(v) for (k, v) in fw_args.items()}
- start, end = self.model(fw_args)
+ start, end = self.model(fw_args)[:2]
start, end = start.numpy(), end.numpy()
else:
with torch.no_grad():
# Retrieve the score for the context tokens only (removing question tokens)
fw_args = {k: torch.tensor(v, device=self.device) for (k, v) in fw_args.items()}
- start, end = self.model(**fw_args)
+ start, end = self.model(**fw_args)[:2]
start, end = start.cpu().numpy(), end.cpu().numpy()
min_null_score = 1000000 # large and positive
answers = []
for (feature, start_, end_) in zip(features, start, end):
- # Normalize logits and spans to retrieve the answer
- start_ = np.exp(start_) / np.sum(np.exp(start_))
- end_ = np.exp(end_) / np.sum(np.exp(end_))
-
# Mask padding and question
start_, end_ = (
start_ * np.abs(np.array(feature.p_mask) - 1),
end_ * np.abs(np.array(feature.p_mask) - 1),
)
+ # Mask CLS
+ start_[0] = end_[0] = 0
+
+ # Normalize logits and spans to retrieve the answer
+ start_ = np.exp(start_ - np.log(np.sum(np.exp(start_), axis=-1, keepdims=True)))
+ end_ = np.exp(end_ - np.log(np.sum(np.exp(end_), axis=-1, keepdims=True)))
+
if kwargs["handle_impossible_answer"]:
min_null_score = min(min_null_score, (start_[0] * end_[0]).item())
- start_[0] = end_[0] = 0
-
starts, ends, scores = self.decode(start_, end_, kwargs["topk"], kwargs["max_answer_len"])
char_to_word = np.array(example.char_to_word_offset)
```
It doesn't seem like a lot of logic changed in this file except the position of `# Mask CLS` block and the pull request which fixes its position is not able to fix this bug yet.<|||||>@mfuntowicz, maybe you have some insights on this as you're working on the PR? :)<|||||>@pavanchhatpar Can you try on the master branch? I just pushed a change that should fix the case when questions are not answerable see b716a864f869ddc78c3c8eb00729fc9546c74ee4.
Let us know if it resolve the issue 🙏 <|||||>@mfuntowicz I tried with the master branch. Seems to work well now. Thanks for the fix! |
transformers | 5,562 | closed | Fix fast tokenizers too | 07-06-2020 22:44:23 | 07-06-2020 22:44:23 | ||
transformers | 5,561 | closed | [Docs] Incorrect links to models in the Summary of the Models page | ### Issues with the Summary of the Models page
- Description:
- Clicking on any model documentation located in https://huggingface.co/transformers/model_summary.html leads us to an empty page.
- Possible Fix:
- I've noticed that the GPT link is https://huggingface.co/model_doc/gpt while it should be https://huggingface.co/transformers/model_doc/gpt.html
- This fix seems to resolve all other issues in the page.
- Observations:
- I've tried to clone this repo and fix this issue, but for some reason, when I build my own version of the documentation, it doesn't use the `.../transformers/...` path inside the url leading to a different issue from the current online documentation.
- Examples:
<img width="726" alt="Screen Shot 2020-07-06 at 17 28 50" src="https://user-images.githubusercontent.com/9092284/86648750-20481b80-bfaf-11ea-9d3e-b8656d220fdb.png">
EXPECTED:
<img width="1280" alt="Screen Shot 2020-07-06 at 17 29 17" src="https://user-images.githubusercontent.com/9092284/86648771-250ccf80-bfaf-11ea-8552-dd5ab52e36b9.png">
RESULT:
<img width="742" alt="Screen Shot 2020-07-06 at 17 29 01" src="https://user-images.githubusercontent.com/9092284/86648785-28a05680-bfaf-11ea-9578-e16407f58fb4.png">
| 07-06-2020 21:32:57 | 07-06-2020 21:32:57 | Fixed in #5566 |
transformers | 5,560 | closed | [Reformer] Adapt Reformer MaskedLM Attn mask | The official trax code added an `attention_mask` to the LSH Attention layer a couple of weeks ago: https://github.com/google/trax/commit/94d7d8643d9a6ea38539486dea9ad16da30ec897#diff-a022408d1029c5cbeac49f4589f1b713R1180 . This PR adapts the Hugging Face code to have equal outputs to the official trax code for masked self attention.
Integration tests are adapted and run on branch: `reformer_trax_tests`. | 07-06-2020 21:23:50 | 07-06-2020 21:23:50 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5560?src=pr&el=h1) Report
> Merging [#5560](https://codecov.io/gh/huggingface/transformers/pull/5560?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/58cca47c16149e43d1b516623d59e3c5d97f695e&el=desc) will **decrease** coverage by `0.41%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5560?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5560 +/- ##
==========================================
- Coverage 77.83% 77.42% -0.42%
==========================================
Files 141 141
Lines 24634 24630 -4
==========================================
- Hits 19175 19069 -106
- Misses 5459 5561 +102
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5560?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/5560/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `88.31% <100.00%> (-0.07%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5560/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.92% <0.00%> (-68.47%)` | :arrow_down: |
| [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5560/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5560/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.59% <0.00%> (+0.33%)` | :arrow_up: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5560/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+1.00%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5560/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.76% <0.00%> (+32.51%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5560?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5560?src=pr&el=footer). Last update [58cca47...382c309](https://codecov.io/gh/huggingface/transformers/pull/5560?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Pinging @LysandreJik @thomwolf @flozi00 for notification. |
transformers | 5,559 | closed | Fix #5507 | 07-06-2020 21:13:11 | 07-06-2020 21:13:11 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5559?src=pr&el=h1) Report
> Merging [#5559](https://codecov.io/gh/huggingface/transformers/pull/5559?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1bbc28bee719120f3b6f73dbe7dfe6f67e2e9fa7&el=desc) will **increase** coverage by `0.36%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5559?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5559 +/- ##
==========================================
+ Coverage 76.79% 77.16% +0.36%
==========================================
Files 141 141
Lines 24634 24634
==========================================
+ Hits 18917 19008 +91
+ Misses 5717 5626 -91
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5559?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5559/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `97.18% <ø> (ø)` | |
| [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5559/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `76.71% <ø> (-19.18%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5559/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.92% <0.00%> (-68.47%)` | :arrow_down: |
| [src/transformers/optimization\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5559/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5559/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `85.75% <0.00%> (-7.85%)` | :arrow_down: |
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5559/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `38.73% <0.00%> (-3.76%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/5559/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.02% <0.00%> (-2.18%)` | :arrow_down: |
| [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5559/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.30% <0.00%> (-1.54%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5559/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.96% <0.00%> (-1.01%)` | :arrow_down: |
| ... and [12 more](https://codecov.io/gh/huggingface/transformers/pull/5559/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5559?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5559?src=pr&el=footer). Last update [1bbc28b...3ec1a31](https://codecov.io/gh/huggingface/transformers/pull/5559?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 5,558 | closed | Various tokenizers fixes | Fix https://github.com/huggingface/transformers/issues/5486
Fix https://github.com/huggingface/transformers/issues/5482
Fix https://github.com/huggingface/transformers/issues/5393
Fix https://github.com/huggingface/transformers/issues/5490 | 07-06-2020 20:54:28 | 07-06-2020 20:54:28 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5558?src=pr&el=h1) Report
> Merging [#5558](https://codecov.io/gh/huggingface/transformers/pull/5558?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1bbc28bee719120f3b6f73dbe7dfe6f67e2e9fa7&el=desc) will **increase** coverage by `1.06%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5558?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5558 +/- ##
==========================================
+ Coverage 76.79% 77.85% +1.06%
==========================================
Files 141 141
Lines 24634 24634
==========================================
+ Hits 18917 19180 +263
+ Misses 5717 5454 -263
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5558?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5558/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.32% <ø> (ø)` | |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5558/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.26% <0.00%> (-0.34%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5558/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.50%)` | :arrow_up: |
| [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/5558/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `27.63% <0.00%> (+1.31%)` | :arrow_up: |
| [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5558/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `98.63% <0.00%> (+2.73%)` | :arrow_up: |
| [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5558/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `77.55% <0.00%> (+11.22%)` | :arrow_up: |
| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/5558/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `49.09% <0.00%> (+17.09%)` | :arrow_up: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5558/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.96% <0.00%> (+21.29%)` | :arrow_up: |
| [src/transformers/trainer\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5558/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `85.71% <0.00%> (+25.71%)` | :arrow_up: |
| [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/5558/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `86.56% <0.00%> (+35.82%)` | :arrow_up: |
| ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/5558/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5558?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5558?src=pr&el=footer). Last update [1bbc28b...531dd46](https://codecov.io/gh/huggingface/transformers/pull/5558?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,557 | closed | Roberta Large doesn't train for sentiment classification | The following is my classifier class. I trained it with appropriate code and the same tokenizer (roberta-base). It trained well on the SST-2 two-class sentiment problem. But when I replaced the base model with roberta-large (also updating the tokenizer and the dimension of the linear layer), my classifier is stuck at 50% accuracy, and no matter how many epochs I run it doesn't improve. Is the weight file faulty, or am I doing something wrong?
```
import torch
import torch.nn as nn
import transformers

class Model(nn.Module):
def __init__(self):
super(Model,self).__init__()
self.Bert = transformers.RobertaModel.from_pretrained('roberta-base')
self.fc0 = nn.Linear(768,512)
self.d0 = nn.Dropout(0.5)
self.d1 = nn.Dropout(0.5)
self.fc1 = nn.Linear(512,1)
nn.init.normal_(self.fc0.weight,std= 0.1)
nn.init.normal_(self.fc0.bias ,0.)
nn.init.normal_(self.fc1.weight,std =0.1)
nn.init.normal_(self.fc1.bias, 0.)
def forward(self,input_ids,attention_mask):
hid= self.Bert(input_ids,attention_mask = attention_mask)
hid= hid[0][:,0,:]
x = self.d0(hid)
x = self.fc0(x)
x = torch.tanh(x)
x = self.d1(x)
x = self.fc1(x)
return x
```
This is the optimizer I used; I froze the RoBERTa layers and only trained the final layers. I also tried a learning rate of 3e-5, which didn't help either.
```
model = Model().to('cuda')
criterion = nn.CrossEntropyLoss(reduction='mean').to('cuda')
for params in model.Bert.parameters():
params.requires_grad = False
optimizer = torch.optim.Adam(model.parameters(),lr=0.001)
``` | 07-06-2020 19:00:21 | 07-06-2020 19:00:21 | Your learning rate is pretty high and you might be overfitting. Try it again with 3E-5.<|||||>My question is: I have frozen the whole RoBERTa encoder, so why does it still need such a small learning rate rather than the usual 0.001 when I am only training a classifier head?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
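(The suggested change, as a one-line sketch:)
```python
optimizer = torch.optim.Adam(model.parameters(), lr=3e-5)  # lower learning rate, as suggested above
```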
|
transformers | 5,556 | closed | [pl examples] add using_native_amp flag to support pl 0.8.4 | running the example `finetune_bart_tiny.sh` in `seq2seq` fails with:
```
Traceback (most recent call last):
File "finetune.py", line 344, in <module>
main(args)
File "finetune.py", line 322, in main
logger=logger,
File "/root/notebooks/analysis/transformers/examples/lightning_base.py", line 330, in generic_train
trainer.fit(model)
File "/apps/python3/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1020, in fit
self.run_pretrain_routine(model)
File "/apps/python3/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1156, in run_pretrain_routine
self.train()
File "/apps/python3/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 370, in train
self.run_training_epoch()
File "/apps/python3/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 452, in run_training_epoch
batch_output = self.run_training_batch(batch, batch_idx)
File "/apps/python3/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 659, in run_training_batch
grad_norm_dic = self.run_batch_backward_pass(split_batch, batch_idx, opt_idx, optimizer)
File "/apps/python3/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 711, in run_batch_backward_pass
self.call_optimizer_step(optimizer, opt_idx, batch_idx, split_batch)
File "/apps/python3/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 750, in call_optimizer_step
using_native_amp=native_amp)
TypeError: optimizer_step() got an unexpected keyword argument 'using_native_amp'
```
It seems like the issue is the missing `using_native_amp` argument. | 07-06-2020 18:47:47 | 07-06-2020 18:47:47 | What PL version are you on? The unittests should catch this.<|||||>@sshleifer `pytorch-lightning==0.8.4`, which I believe is the latest<|||||>Got it.
I think you need to run `make style` to get CI to pass.
We also may need to wait to merge this until https://github.com/huggingface/transformers/pull/5361, which updates us from pl 0.8.1 to 0.8.5. (we are currently on 0.8.1, you can see in `examples/requirements.txt`).
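For reference, a rough sketch of the kind of signature change involved (an assumption, not the exact PR diff; check the pytorch-lightning release notes for the full hook signature):
```python
import pytorch_lightning as pl

class BaseTransformer(pl.LightningModule):  # sketch only; the real class lives in examples/lightning_base.py
    def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx,
                       second_order_closure=None, using_native_amp=False, **kwargs):
        # Accept (and ignore) the extra flags that pytorch-lightning 0.8.4 now passes.
        optimizer.step()
        optimizer.zero_grad()
```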
<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5556?src=pr&el=h1) Report
> Merging [#5556](https://codecov.io/gh/huggingface/transformers/pull/5556?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9d9b872b66f9ab9b7b7c73f2c00985dd92c4121b&el=desc) will **increase** coverage by `0.36%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5556?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5556 +/- ##
==========================================
+ Coverage 77.00% 77.36% +0.36%
==========================================
Files 141 141
Lines 24638 24638
==========================================
+ Hits 18973 19062 +89
+ Misses 5665 5576 -89
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5556?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5556/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.78% <0.00%> (-74.20%)` | :arrow_down: |
| [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5556/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5556/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `82.70% <0.00%> (-3.76%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5556/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.72% <0.00%> (-1.38%)` | :arrow_down: |
| [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/5556/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `27.63% <0.00%> (+1.31%)` | :arrow_up: |
| [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5556/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `98.63% <0.00%> (+2.73%)` | :arrow_up: |
| [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5556/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `77.55% <0.00%> (+11.22%)` | :arrow_up: |
| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/5556/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `49.09% <0.00%> (+17.09%)` | :arrow_up: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5556/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.96% <0.00%> (+21.29%)` | :arrow_up: |
| [src/transformers/trainer\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5556/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `85.71% <0.00%> (+25.71%)` | :arrow_up: |
| ... and [4 more](https://codecov.io/gh/huggingface/transformers/pull/5556/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5556?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5556?src=pr&el=footer). Last update [9d9b872...c89d9ca](https://codecov.io/gh/huggingface/transformers/pull/5556?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>I think this was fixed by another PR. |
transformers | 5,555 | closed | Training TFBertForSequenceClassification with DataFrame instead of tensorflow_datasets | I'm trying to fine tune `transformers` with my own dataset in the `csv` file. So, I found [an item in the docs which shows a basic usage example](https://huggingface.co/transformers/training.html#fine-tuning-in-native-tensorflow-2). But the main problem is that this example shows how to use transformers with the `tensorflow_datasets`, but not with the something more real, like with the `pandas` `Dataframe`. So, I have a problem with the `transformers` usage:
```
df = pd.DataFrame({'text': ['SOME ANGRY TEXT!!!', 'Some friendly text :)'], 'label': [1, 0]})
bert_model = transformers.TFBertForSequenceClassification.from_pretrained("bert-base-cased")
bert_tokenizer = transformers.BertTokenizer.from_pretrained("bert-base-cased")
x = bert_tokenizer.batch_encode_plus(
df.text.values,
max_length=10,
add_special_tokens=True,
pad_to_max_length=True,
)
bert_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['Accuracy'])
bert_history = bert_model.fit(
x=x,
y=df.label.values.reshape(-1, 1),
)
```
Output:
```
ValueError: Failed to find data adapter that can handle input: (<class 'list'> containing values of types {'(<class \'list\'> containing values of types {"<class \'int\'>"})'}), <class 'numpy.ndarray'>
```
**So, how can I use my custom `pandas` `DataFrame` as x and y or as dataset?**
Also, note that I found a few questions on Stack Overflow which ask almost the same thing and have upvotes; [here is one of the most recent questions I found](https://stackoverflow.com/questions/60463829/training-tfbertforsequenceclassification-with-custom-x-and-y-data). **So, maybe people would find an MWE with `np` or `pd` useful?** | 07-06-2020 18:18:43 | 07-06-2020 18:18:43 | I also found out that every example which uses transformers with the `binary_crossentropy` loss changes the pretrained model (i.e. adds some layers). Is that necessary? If so, what would a minimal working example for this look like? <|||||>Now I'm just trying to copy [this Kaggle notebook](https://www.kaggle.com/definedennis/pretrained-bert-with-huggingface-tensorflow-2-1), here is my code:
```
df = pd.DataFrame({'text': ['SOME ANGRY TEXT!!!', 'Some friendly text :)'], 'label': [1, 0]})
def create_model():
bert_model = transformers.TFBertModel.from_pretrained("bert-base-cased")
input_ids = tf.keras.layers.Input(shape=(10,), dtype=tf.int32, name='input_ids')
token_type_ids = tf.keras.layers.Input((10,), dtype=tf.int32, name='token_type_ids')
attention_mask = tf.keras.layers.Input((10,), dtype=tf.int32, name='attention_mask')
# Use pooled_output(hidden states of [CLS]) as sentence level embedding
pooled_output = bert_model({'input_ids': input_ids, 'attention_mask': attention_mask, 'token_type_ids': token_type_ids})[1]
x = tf.keras.layers.Dropout(rate=0.1)(pooled_output)
x = tf.keras.layers.Dense(1, activation='sigmoid')(x)
model = tf.keras.models.Model(inputs={'input_ids': input_ids, 'attention_mask': attention_mask, 'token_type_ids': token_type_ids}, outputs=x)
return model
bert_model = create_model()
bert_tokenizer = transformers.BertTokenizer.from_pretrained("bert-base-cased")
x = bert_tokenizer.batch_encode_plus(
df.text.values,
max_length=10,
pad_to_max_length=True,
return_tensors='tf'
)
bert_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['Accuracy'])
bert_history = bert_model.fit(
x=x,
y=df.label.values
)
```
Output:
```
~/.local/lib/python3.8/site-packages/tensorflow/python/framework/ops.py in __hash__(self)
724 if (Tensor._USE_EQUALITY and executing_eagerly_outside_functions() and
725 (g is None or g.building_function)):
--> 726 raise TypeError("Tensor is unhashable. "
727 "Instead, use tensor.ref() as the key.")
728 else:
TypeError: Tensor is unhashable. Instead, use tensor.ref() as the key.
```
How can I fix it?<|||||>Now I just use the previous Kaggle notebook, even without code modifications. My code:
```
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from tqdm.notebook import tqdm
import tensorflow as tf
from tensorflow import keras
import tensorflow.keras.backend as K
from tensorflow.keras import layers
from tensorflow.keras.utils import plot_model
from transformers import (
BertTokenizer,
TFBertForSequenceClassification,
TFBertModel,
BertConfig,
)
tf.__version__
MAX_SEQUENCE_LENGTH = 255
PRETRAINED_MODEL_NAME = 'bert-base-uncased'
BATCH_SIZE = 32
df = pd.read_csv('train.csv')
df.head()
df['target'].value_counts()
df.isnull().sum()
data = df['text'].values
targets = df['target'].values
def create_model():
bert_model = TFBertModel.from_pretrained(PRETRAINED_MODEL_NAME)
input_ids = layers.Input(shape=(MAX_SEQUENCE_LENGTH,), dtype=tf.int32, name='input_ids')
token_type_ids = layers.Input((MAX_SEQUENCE_LENGTH,), dtype=tf.int32, name='token_type_ids')
attention_mask = layers.Input((MAX_SEQUENCE_LENGTH,), dtype=tf.int32, name='attention_mask')
# Use pooled_output(hidden states of [CLS]) as sentence level embedding
pooled_output = bert_model({'input_ids': input_ids, 'attention_mask': attention_mask, 'token_type_ids': token_type_ids})[1]
x = layers.Dropout(rate=0.1)(pooled_output)
x = layers.Dense(1, activation='sigmoid')(x)
model = keras.models.Model(inputs={'input_ids': input_ids, 'attention_mask': attention_mask, 'token_type_ids': token_type_ids}, outputs=x)
return model
tokenizer = BertTokenizer.from_pretrained(PRETRAINED_MODEL_NAME)
model = create_model()
model.summary()
plot_model(model, to_file='model.png', expand_nested=True, show_shapes=True)
opt = tf.keras.optimizers.Adam(learning_rate=3e-5)
model.compile(optimizer=opt, loss='binary_crossentropy', metrics=['accuracy'])
X_train, X_val, y_train, y_val = train_test_split(data, targets, test_size=0.33, random_state=42, stratify=targets)
X_train = tokenizer.batch_encode_plus(X_train, max_length=MAX_SEQUENCE_LENGTH, pad_to_max_length=True, return_tensors='tf')
X_val = tokenizer.batch_encode_plus(X_val, max_length=MAX_SEQUENCE_LENGTH, pad_to_max_length=True, return_tensors='tf')
history = model.fit(
x=X_train,
y=y_train,
validation_data=(X_val, y_val),
epochs=3,
batch_size=BATCH_SIZE
)
```
Output:
```
/usr/lib/python3.8/_collections_abc.py in update(self, other, **kwds)
835 self[key] = other[key]
836 else:
--> 837 for key, value in other:
838 self[key] = value
839 for key, value in kwds.items():
ValueError: too many values to unpack (expected 2)
``` <|||||>You can check [this](https://colab.research.google.com/drive/1DT6lSWRZ3CIIm9noaJxPYjtOavWQB23S?usp=sharing) and [this](https://colab.research.google.com/drive/125jJ0qrXGIe6goNrH_Ja7XPZtYp7nMXU?usp=sharing) gists for reproduction of the errors.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
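For readers hitting the same errors, a minimal sketch of one way to feed a pandas DataFrame to `TFBertForSequenceClassification` (untested here; the column names, sequence length, and hyperparameters are assumptions):
```python
import pandas as pd
import tensorflow as tf
import transformers

df = pd.DataFrame({"text": ["SOME ANGRY TEXT!!!", "Some friendly text :)"], "label": [1, 0]})
tokenizer = transformers.BertTokenizer.from_pretrained("bert-base-cased")
model = transformers.TFBertForSequenceClassification.from_pretrained("bert-base-cased")

# Encode the whole column, then turn the BatchEncoding into a plain dict of tensors.
enc = tokenizer(df.text.tolist(), padding=True, truncation=True, max_length=32, return_tensors="tf")
dataset = tf.data.Dataset.from_tensor_slices((dict(enc), df.label.values)).shuffle(16).batch(2)

# The model outputs logits over two labels, so train with sparse categorical cross-entropy from logits.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(dataset, epochs=1)
```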
|
transformers | 5,554 | closed | huggingface optimizer cannot de-serialize | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
>>> import transformers
>>> opt, _ = transformers.optimization_tf.create_optimizer(init_lr=2e-5, num_train_steps=1000, num_warmup_steps=10)
>>> opt.__class__.from_config(opt.get_config())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ec2-user/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/tensorflow_core/python/keras/optimizer_v2/optimizer_v2.py", line 741, in from_config
config["learning_rate"], custom_objects=custom_objects)
File "/home/ec2-user/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/tensorflow_core/python/keras/optimizer_v2/learning_rate_schedule.py", line 992, in deserialize
printable_module_name="decay")
File "/home/ec2-user/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/tensorflow_core/python/keras/utils/generic_utils.py", line 292, in deserialize_keras_object
config, module_objects, custom_objects, printable_module_name)
File "/home/ec2-user/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/tensorflow_core/python/keras/utils/generic_utils.py", line 250, in class_and_config_for_serialized_keras_object
raise ValueError('Unknown ' + printable_module_name + ': ' + class_name)
ValueError: Unknown decay: WarmUp
## Expected behavior
One would expect that the `tf.keras.optimizers.Adam` object `opt` is successfully reconstructed from its config dictionary.
After inspecting `transformers.optimization_tf`, what I believe is happening here is that `create_optimizer` returns a regular `tf.keras.optimizers.Adam` object in the event that the `weight_decay_rate` parameter is zero:
91 else:
92 optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule, epsilon=adam_epsilon)
However, if `num_warmup_steps` is nonzero, this object is instantiated with a _custom_ `tf.keras.optimizers.schedules.LearningRateSchedule` object of type `WarmUp`:
77 if num_warmup_steps:
78 lr_schedule = WarmUp(
79 initial_learning_rate=init_lr, decay_schedule_fn=lr_schedule, warmup_steps=num_warmup_steps,
80 )
In this case the `tf.keras.optimizers.Adam` implementation of `from_config` does not know to pass this class under the `custom_objects` keyword argument, causing deserialization to fail.
In my specific setting, this causes the method `horovod.create_distributed_optimizer(opt)` to fail, since it relies on serialization / deserialization to programmatically extend the passed optimizer class.
A solution that should work in my specific setting (and more generally) is to create an intermediate class such as
class AdamWarmUp(tf.keras.optimizers.Adam):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
@classmethod
def from_config(cls, config):
custom_objects = {"WarmUp":WarmUp}
return super().from_config(config, custom_objects=custom_objects)
which is returned by `create_optimizer` and correctly implements `tf.keras.optimizers.Optimizer`, and to have the `AdamWeightDecay` optimizer extend this class.
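As a stop-gap until such a class exists, the round trip also appears to work if the custom schedule is registered explicitly; an untested sketch:
```python
import transformers
from transformers.optimization_tf import WarmUp

opt, _ = transformers.optimization_tf.create_optimizer(init_lr=2e-5, num_train_steps=1000, num_warmup_steps=10)
# Tell Keras how to rebuild the custom WarmUp schedule during deserialization.
restored = opt.__class__.from_config(opt.get_config(), custom_objects={"WarmUp": WarmUp})
```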
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.0
- Platform: Linux-4.14.181-108.257.amzn1.x86_64-x86_64-with-glibc2.9
- Python version: 3.6.10
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.1.0 (True)
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: True
| 07-06-2020 18:01:10 | 07-06-2020 18:01:10 | Great, thanks for your analysis! Do you think you can open a PR with your proposed fix?<|||||>Sure, will do!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Hi @suhasvk , Did you open a PR for the fix. I encountered the same issue. <|||||>Hi @LysandreJik , is there any PR for the fix. I encountered the same issue. |
transformers | 5,553 | closed | Customize widget text-generation inference with prepended input | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
Some models for text generation make use of special tokens or additional initialization text.
It would be useful to add the possibility to pass additional text to the input string (before or after) and also let people strip input text.
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
It is not clear how the widget API generates text.
For example with my little demo huggingtweets, I see 2 limitations:
1. I always strip input with `text.strip` and add a white space at the beginning. Otherwise we get wrong tokenization of `"Paris"` vs `" Paris"`
2. With the way my model is trained, I always want to add `<|endoftext|>` at the beginning (training adds it between every tweet)
Input such as `" this is my input "` would become, after stripping, adding white space and special token `"<|endoftext|> this is my input"` and be then passed to tokenizer.
I can see other cases where people add their own special tokens for certain tasks.
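(A tiny sketch of the transformation described above, just for illustration:)
```python
def preprocess(text: str) -> str:
    # strip the raw input, then prepend the special token plus a single leading space
    return "<|endoftext|> " + text.strip()

assert preprocess(" this is my input ") == "<|endoftext|> this is my input"
```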
## Your contribution
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
I don't think there is any public access to the API so cannot contribute there. | 07-06-2020 17:55:11 | 07-06-2020 17:55:11 | There is some documentation here https://huggingface.co/docs but we will add more later this week.<|||||>I don't think we want to mask (pun intended) too much in the inference widget what the model actually consumes, but did you see that:
- you can specify your model's widget's example input, by adding metadata to your model card, cf. https://huggingface.co/docs#how-can-i-control-my-models-widgets-example-inputs
- you can define a prefix in your config.json's `task_specific_params`, see T5 for instance: https://s3.amazonaws.com/models.huggingface.co/bert/t5-base-config.json (cc @patrickvonplaten)<|||||>> * you can define a prefix in your config.json's `task_specific_params`, see T5 for instance: https://s3.amazonaws.com/models.huggingface.co/bert/t5-base-config.json (cc @patrickvonplaten)
I had not noticed it! This looks comprehensive and exactly what is needed!<|||||>> * you can define a prefix in your config.json's `task_specific_params`, see T5 for instance: https://s3.amazonaws.com/models.huggingface.co/bert/t5-base-config.json (cc @patrickvonplaten)
Just a quick note that the prefix here is not for text generation pipeline but I used the same idea in PR #5885 with existing "padding_text" variable that I turned into a task specific configuration parameter.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>This was implemented |
transformers | 5,552 | closed | [Don't merge] Reformer Trax Integration Tests | 07-06-2020 17:19:04 | 07-06-2020 17:19:04 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
|
transformers | 5,551 | closed | Fix #5544 | 07-06-2020 15:14:34 | 07-06-2020 15:14:34 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5551?src=pr&el=h1) Report
> Merging [#5551](https://codecov.io/gh/huggingface/transformers/pull/5551?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1bbc28bee719120f3b6f73dbe7dfe6f67e2e9fa7&el=desc) will **increase** coverage by `1.10%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5551?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5551 +/- ##
==========================================
+ Coverage 76.79% 77.90% +1.10%
==========================================
Files 141 141
Lines 24634 24634
==========================================
+ Hits 18917 19190 +273
+ Misses 5717 5444 -273
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5551?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5551/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.95% <ø> (ø)` | |
| [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5551/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `73.37% <0.00%> (-25.00%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5551/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.26% <0.00%> (-0.34%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5551/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+0.75%)` | :arrow_up: |
| [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/5551/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `27.63% <0.00%> (+1.31%)` | :arrow_up: |
| [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5551/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `98.63% <0.00%> (+2.73%)` | :arrow_up: |
| [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5551/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `77.55% <0.00%> (+11.22%)` | :arrow_up: |
| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/5551/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `49.09% <0.00%> (+17.09%)` | :arrow_up: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5551/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.96% <0.00%> (+21.29%)` | :arrow_up: |
| [src/transformers/trainer\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5551/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `85.71% <0.00%> (+25.71%)` | :arrow_up: |
| ... and [4 more](https://codecov.io/gh/huggingface/transformers/pull/5551/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5551?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5551?src=pr&el=footer). Last update [1bbc28b...f8f28f5](https://codecov.io/gh/huggingface/transformers/pull/5551?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 5,550 | closed | Fix the tokenization warning noted in #5505 | Fix https://github.com/huggingface/transformers/issues/5505 (unwanted warning in batch_encode_plus)
| 07-06-2020 14:58:21 | 07-06-2020 14:58:21 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5550?src=pr&el=h1) Report
> Merging [#5550](https://codecov.io/gh/huggingface/transformers/pull/5550?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1bc13697b1d2197a18f4ed6009e19aed315ab0f0&el=desc) will **increase** coverage by `0.56%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5550?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5550 +/- ##
==========================================
+ Coverage 77.30% 77.86% +0.56%
==========================================
Files 141 141
Lines 24634 24634
==========================================
+ Hits 19043 19181 +138
+ Misses 5591 5453 -138
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5550?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5550/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.95% <ø> (ø)` | |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5550/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5550/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (ø)` | |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5550/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.60% <0.00%> (+1.16%)` | :arrow_up: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5550/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `99.15% <0.00%> (+2.53%)` | :arrow_up: |
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5550/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+8.92%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5550/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `97.69% <0.00%> (+12.69%)` | :arrow_up: |
| [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5550/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `83.84% <0.00%> (+13.07%)` | :arrow_up: |
| [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5550/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `82.93% <0.00%> (+66.66%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5550?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5550?src=pr&el=footer). Last update [1bc1369...a9929ce](https://codecov.io/gh/huggingface/transformers/pull/5550?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,549 | closed | The `add_space_before_punct_symbol` is only for TransfoXL | Same fix as in the [pipelines](https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines.py#L626). Not the cleanest way to do a per-model change, but is bug-free for v3.0.2.
closes https://github.com/huggingface/transformers/issues/5525 | 07-06-2020 14:47:16 | 07-06-2020 14:47:16 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5549?src=pr&el=h1) Report
> Merging [#5549](https://codecov.io/gh/huggingface/transformers/pull/5549?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1bbc28bee719120f3b6f73dbe7dfe6f67e2e9fa7&el=desc) will **decrease** coverage by `0.19%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5549?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5549 +/- ##
==========================================
- Coverage 76.79% 76.60% -0.20%
==========================================
Files 141 141
Lines 24634 24634
==========================================
- Hits 18917 18870 -47
- Misses 5717 5764 +47
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5549?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5549/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.92% <0.00%> (-68.47%)` | :arrow_down: |
| [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5549/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |
| [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5549/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.68% <0.00%> (-0.72%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5549/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.26% <0.00%> (-0.34%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5549/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.50%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5549/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.76% <0.00%> (+32.51%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5549?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5549?src=pr&el=footer). Last update [1bbc28b...3d1ac81](https://codecov.io/gh/huggingface/transformers/pull/5549?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,548 | closed | Possibility to use WhitespaceSplit as pre_tokenizer instead of BPE/Sentencepiece? | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
I was customizing a tokenizer with a vocab and merges file using the `tokenizers` module. However, when running the intended language modelling task with the customized tokenizer, the class in the transformers repository (PreTrainedTokenizer) is not the same. I was trying to run a RoBERTa masked-LM objective with a whole-word tokenization scheme, but it seems that if I use RobertaTokenizer, it does not support changing the 'pre_tokenizer' scheme. May I know what I should do in such a case?
| 07-06-2020 14:41:19 | 07-06-2020 14:41:19 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,547 | closed | [Feature Request] Extract Predictions from Trainer | # 🚀 Feature request
Add the possibility to return predictions, along with their example IDs, in the new Trainer class.
## Motivation
When working with extractive QA (i.e., SQuAD), you get back the best predictions, but the current example for running SQuAD uses the old, plain training/eval script, without the new Trainer class.
Additionally, there are other tasks where predictions can be extremely useful (i.e.: Multiple Choice).
Adding such functionality in the Trainer class could solve this and unify both question answering and multiple choice examples.
## Your contribution
I am familiarized with the code (both the Trainer class and the old train/eval script), so I could submit a PR with the new functionality. | 07-06-2020 14:36:21 | 07-06-2020 14:36:21 | Unless I'm mistaken, Question-answering does support Trainer. Is a specific feature missing for your use case?<|||||>Hi @julien-c!
At the time of writing the PR was not merged yet (and I was not aware of it, my bad). It is great!
In any case, the previous example (the one without trainer) outputted both `predictions` and `nbest_predictions`, which are crucial for error analysis.
The new example with the Trainer loses this ability (because the new dataset/trainer API does not support it). Adding this feature could benefit not only SQuAD but other datasets too (e.g., RACE).
What do you think on this regard?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Any updates on this matter? I recently started using `transformers` and running the multiple-choice example I realized there's an output of the overall accuracy but I can't find the predictions. Currently going through the code to obtain them...<|||||>Hi @dianags,
Might be a little late, but tired of waiting, I programmed my own solution that output predictions and some more data, the code can be found [here](https://github.com/geblanco/mc_transformers)
Best,<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
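For later readers, part of this can be approximated with `Trainer.predict`; a rough, untested sketch (the model, training args, and dataset here are placeholders):
```python
from transformers import Trainer

trainer = Trainer(model=model, args=training_args)   # model / training_args defined elsewhere
output = trainer.predict(test_dataset)               # returns a PredictionOutput
logits, label_ids = output.predictions, output.label_ids
predicted_classes = logits.argmax(axis=-1)           # e.g. for classification or multiple choice
```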
|
transformers | 5,546 | closed | GPT2 tokenizer should not output token type IDs | The GPT-2 and OpenAI GPT tokenizers should not output `token_type_ids` by default.
closes https://github.com/huggingface/transformers/issues/5517
code quality will pass when merged in master.
Edit (@thomwolf):
This will also fix #4922 | 07-06-2020 14:23:40 | 07-06-2020 14:23:40 | |
transformers | 5,545 | closed | Batching (TF)BertForQuestionAnswering deployment | # 🚀 Feature request
It may already be possible, but I couldn't figure out how, to call TFBertForQuestionAnswering on a batch of context-question pairs rather than just one pair at a time.
## Motivation
I tried out the same code on a machine with 16 cores and a GPU. I could process 22 pairs/sec with the CPUs, but only 11 pairs/sec with the GPU. I assume it's not efficient due to I/O. However, I couldn't find in the example such a way to call the BERT model.
For example, when I do this:
```
input_ids = tokenizer.encode(question, context, add_special_tokens=True, max_length=512)
```
I would like to promote `question` and `context` to an array of pairs so that after calling
```
start_scores, end_scores = model({'input_ids': np.array([input_ids]), # The tokens representing our input text.
'token_type_ids': np.array([segment_ids])}) # The segment IDs to differentiate question from context
```
the returned objects `start_scores` and `end_scores` should be a 2D array (or list). | 07-06-2020 14:16:24 | 07-06-2020 14:16:24 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
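A rough sketch of the batched call being requested (hedged: exact argument names depend on the tokenizer version, and the checkpoint and example texts below are placeholders):
```python
import tensorflow as tf
from transformers import BertTokenizer, TFBertForQuestionAnswering

tokenizer = BertTokenizer.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad")
model = TFBertForQuestionAnswering.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad")

questions = ["Who wrote it?", "When was it written?"]
contexts = ["It was written by Jane.", "It was written in 1999."]

# Encode one batch of question/context pairs instead of looping in Python.
inputs = tokenizer(questions, contexts, padding=True, truncation=True, max_length=512, return_tensors="tf")
start_scores, end_scores = model(dict(inputs))[:2]   # each has shape (batch_size, seq_len)
```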
|
transformers | 5,544 | closed | incorrect typehint for PreTrainedTokenizer.convert_ids_to_tokens() return value | # 🐛 Bug
## Information
The return value of `tokenization_utils.PreTrainedTokenizer.convert_ids_to_tokens()` is declared as `Union[int, List[int]]` while the method clearly returns a `Union[str, List[str]]` object type.
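(For clarity, the expected annotation would be along these lines, shown as a sketch:)
```python
from typing import List, Union

def convert_ids_to_tokens(self, ids: Union[int, List[int]], skip_special_tokens: bool = False) -> Union[str, List[str]]:
    ...
```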
Would you like me to open a PR? ;)
Thanks for the great work!
Andreas
PS: Excuse me for skipping my system specs since this is a rather small issue and I'm just a bit annoyed by PyCharm's inspection warning on this ;) | 07-06-2020 13:57:52 | 07-06-2020 13:57:52 | We want to make a clean patched release soon, so I opened #5551 to make sure it's fixed in it.
In the future, don't hesitate to directly open a PR for an issue with a clear fix like this :-)<|||||>Closed via #5551 |
transformers | 5,543 | closed | t5-base translation_en_to_de BLEU lower than the paper | I downloaded the "newstest2014.en" and "newstest2014.de" datasets. Then I used examples/translation/t5/evaluate_wmt.py to evaluate the BLEU value of en_to_de, and the BLEU finally obtained is equal to 22.15, which is much lower than the paper. I used the t5-base model and my transformers version is 2.11.0.
Is there something wrong with my operation? Is it necessary to fine-tune the t5 model to reproduce the results of the paper? | 07-06-2020 13:35:18 | 07-06-2020 13:35:18 | I think the script `evaluate_wmt.py` was never really tested. Note that `t5-base` is not a fine-tuned model, but just a pretrained model. So you would definitely get better results by fine-tuning the model on translation. I'm not 100% sure, but I think the T5 paper shows the "non-finetuned" results for translation somewhere as well. Pinging @sshleifer - This might be interesting for as well.<|||||>What patrick said is exactly correct.
I actually don't understand exactly which checkpoint points map to which table entries.
@craffel, is 22.15 a reasonable zero shot BLEU for t5-base on en-de/?
I am looking at appendix E, 3rd to rightmost column last page in [arxiv](https://arxiv.org/pdf/1910.10683.pdf), but am not sure which row corresponds to `t5-base` without finetuning. A machine readable version of that table would also be super helpful if it is easy to find.
For future reference/readers, `evaluate_wmt.py` has moved to `examples/seq2seq/run_eval.py`:
and the new command (for en-romanian) is:
```bash
export DATA_DIR=wmt_en_ro
python run_eval.py t5-base \
$DATA_DIR/val.source t5_val_generations.txt \
--reference_path $DATA_DIR/val.target \
--score_path enro_bleu.json \
--task translation_en_to_ro \
# --n_obs 100 \
--device cuda \
--fp16 \
--bs 32
```
You would need to update the first few args for your paths.
I had some reasonable results finetuning mbart on WMT en-ro. Got BLEU 24 after finetuning vs 27 for mbart-large-en-ro. (both #s before preprocessing)
I would be very interesting in seeing results/bug fixes for finetuning t5 on any language pair!<|||||>Hey all,
@anonymous1100
> Is it necessary to fine-tune the t5 model to reproduce the results of the paper?
Yes. The pre-trained checkpoints are trained on a multi-task mixture and need further fine-tuning to achieve maximal performance. See paragraph "Multi-Task Pre-training" in Section 3.7:
> ... In Section 3.5.3, we showed that pre-training on a multi-task mixture of unsupervised and supervised tasks before fine-tuning worked as well as pre-training on the unsupervised task alone. This is the approach advocated by the “MT-DNN” [Liu et al., 2015, 2019b]. It also has the practical benefit of being able to monitor “downstream” performance for the entire duration of training, rather than just during fine-tuning. We therefore used multi-task pre-training in our final set of experiments.
@patrickvonplaten
> I'm not 100% sure, but I think the T5 paper shows the "non-finetuned" results for translation somewhere as well.
No, we never reported those numbers, but they are trivial to get by running eval on the pre-trained checkpoints, e.g.
```bash
gsutil -m cp -r gs://t5-data/pretrained_models/base/* "${MODEL_DIR}"
t5_mesh_transformer \
--tpu="${TPU_NAME}" \
--gcp_project="${PROJECT}" \
--tpu_zone="${ZONE}" \
--model_dir="${MODEL_DIR}" \
--gin_file="gs://t5-data/pretrained_models/base/operative_config.gin" \
--gin_file="eval.gin" \
--gin_file="beam_search.gin" \
--gin_param="MIXTURE_NAME = 'wmt_t2t_ende_v003'" \
--gin_param="run.dataset_split = 'test'" \
--gin_param="eval_checkpoint_step = 'all'" \
--gin_param="utils.tpu_mesh_shape.tpu_topology = '2x2'" # or whatever
```
@sshleifer
> I actually don't understand exactly which checkpoint points map to which table entries.
The released checkpoints are (multi-task) pre-trained models which, after fine-tuning, produce the numbers in Table 14. We don't report the results before fine-tuning, and we didn't (and won't) release the fine-tuned checkpoints.
> is 22.15 a reasonable zero shot BLEU for t5-base on en-de/?
I ran the above command and got 28.664, so that seems very low. Not familiar with the HF eval script but I can take a look if you need ideas for figuring out what went wrong.
> I am looking at appendix E, 3rd to rightmost column last page in arxiv, but am not sure which row corresponds to t5-base without finetuning.
None of the rows in that table correspond to any of the T5 models. Those numbers are the results of our giant systematic (ablation) study that we did before training any of the T5 models.
> A machine readable version of that table would also be super helpful if it is easy to find.
The LaTeX source on arxiv have the tables in a format that would be easily parseable to whatever machine-readable format. https://arxiv.org/e-print/1910.10683<|||||>I think we figured out what went wrong. The tokenizer is not adding `eos_token="</s>"` to the source document.
It should be, right?<|||||>The inputs should definitely have an EOS before they are fed into the model. If it's the convention in Transformers that the tokenizer takes care of that, then yes! In the T5 codebase, the tokenizer ittself does not add an EOS; that's handled by the packing and padding code.<|||||>Awesome! is there a `bos` token that goes before sequence (After the prefix?)
like `<s>` in Roberta/GPT2/Bart?
(Is this the packing/padding code? https://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/data/preprocessors.py) <|||||>> is there a bos token that goes before sequence (After the prefix?)
Nope.
> Is this the packing/padding code? https://github.com/google-research/text-to-text-transfer-transformer/blob/master/t5/data/preprocessors.py
No, the packing/padding code is not part of the T5 codebase (T5 just provides (tokenized/preprocessed) sequences). It's assumed that it's handled by whatever the model implementation is. Here it is in the Mesh TF codebase: https://github.com/tensorflow/mesh/blob/master/mesh_tensorflow/transformer/dataset.py<|||||>Adding EOS does not appear to help zero shot performance in my first experiment, but open to hearing others' results. From a fork of this repo, you can run
```bash
git fetch upstream
git checkout t5tok
```
to get a version of the tokenizer that adds EOS.
When I ran eval on wmt_en_ro, I got
```
t5tok (with `</s>`):27.65
master (no EOS): 27.87
```
The commands to reproduce are in the PR [description](https://github.com/huggingface/transformers/pull/5866#issue-451879707)
Would love to know results on other datasets!<|||||>For what it's worth I'm using T5 for other purposes (style transfer) and have found SotA results. It looks like the master branch has diverged, but among other changes I modified seq2seq.utils.encode_file to this:
`lns = [prefix + text + " </s>" for text in lns]`<|||||>Hey @sshleifer , thanks for getting #5866 in. Does this resolve the discrepancy that originally started this issue, i.e. that non-fine-tuned T5-Base gets 28.664 BLEU on WMT EnDe using MTF whereas the HF version got 22.15?<|||||>IDK how OP got 22.1., I somehow just got BLEU 34.513 for en-de on what I thought was wmt_en_de 2019 (I can try to rerun on identical data if given pointer to such.)
For en-ro, I was getting 27.85 before the change, 27.65 after.
I am using `corpus_bleu` across the whole test set.
test data:
- verified that it is the same as sacrebleu's, besides a newline at the end of the file.
### To Reproduce
(21 mins on NVIDIA-RTX)
Get Data:
```bash
cd examples/seq2seq/
mkdir -p gens
wget https://s3.amazonaws.com/datasets.huggingface.co/translation/wmt_en_de.tgz
tar -xzvf wmt_en_de.tgz
export D=${PWD}/wmt_en_de
```
Eval Command:
```
python run_eval.py t5-base $D/test.source --reference_path $D/test.target gens/t5_base_ende_test_gens.txt --score_path gens/t5_base_ende_test_bleu.json --bs 16 --task translation_en_to_de
```
Going to leave this open until I am satisfied that I am getting a reasonably close BLEU on reasonably close data.<|||||>same 34.15 from
```bash
sacrebleu -t wmt16 -l en-de < gens/t5_base_ende_test_gens.txt
```
Translations: [here](https://raw.githubusercontent.com/sshleifer/transformers_fork/t5-gens/examples/seq2seq/t5_base_ende_test_gens.txt)<|||||>Hey Sam, the validation set we used was `newstest2013`. You can get the data here
https://nlp.stanford.edu/projects/nmt/data/wmt14.en-de/newstest2013.en
https://nlp.stanford.edu/projects/nmt/data/wmt14.en-de/newstest2013.de
or if you want to get exactly the same preprocessed inputs and outputs, you can use
```
python -c 'import t5; ds = t5.data.TaskRegistry.get("wmt_t2t_ende_v003").get_dataset({"inputs": 512, "targets": 512}, "validation")'
python -m t5.scripts.dump_task --task=wmt_t2t_ende_v003 --split=validation
```<|||||>I got `{"bleu": 23.8653, "n_obs": 3000, "runtime": 319, "seconds_per_sample": 0.1063}` on that data (`newstest2013.en`), with some sacrebleu warnings about my data not being properly detokenized.
```
WARNING:root:That's 100 lines that end in a tokenized period ('.')
WARNING:root:It looks like you forgot to detokenize your test data, which may hurt your score.
WARNING:root:If you insist your data is detokenized, or don't care, you can suppress this message with '--force'.
```
+ `{'bleu': 24.2978}` by passing tokenize='intl' to sacrebleu.
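For reference, a minimal sketch of how that `tokenize='intl'` corpus BLEU can be computed (file names reuse the ones from the commands above, and the pre-2.0 `sacrebleu` API is assumed):
```python
import sacrebleu

# hypothesis and reference files from the eval commands above; adjust paths as needed
sys_lines = [line.rstrip("\n") for line in open("gens/t5_base_ende_test_gens.txt", encoding="utf-8")]
ref_lines = [line.rstrip("\n") for line in open("wmt_en_de/test.target", encoding="utf-8")]

# tokenize="intl" switches from the default "13a" tokenizer to the international (mteval-v14) one
bleu = sacrebleu.corpus_bleu(sys_lines, [ref_lines], tokenize="intl")
print(round(bleu.score, 4))
```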
Ran the t5 command now to try to check pre/post processing, but got:
```
AssertionError: Sizes do not match: 284246 vs 284253 for /home/shleifer/tensorflow_datasets/downloads/extracted/TAR_GZ.data.stat.org_wmt1_tran-task_trai-para-nc-6LWgxBgzCHdv_LtotNmnXjpCH6OhzkF8D3v10aRrznA.tgz/training-parallel-nc-v13/news-commentary-v13.de-en.de vs /home/shleifer/tensorflow_datasets/downloads/extracted/TAR_GZ.data.stat.org_wmt1_tran-task_trai-para-nc-6LWgxBgzCHdv_LtotNmnXjpCH6OhzkF8D3v10aRrznA.tgz/training-parallel-nc-v13/news-commentary-v13.de-en.en.
```
I think it is building more than just the val set for 1 language.
<|||||>> I got {"bleu": 23.8653, "n_obs": 3000, "runtime": 319, "seconds_per_sample": 0.1063} on that data (newstest2013.en), with some sacrebleu warnings about my data not being properly detokenized.
We have always run the BLEU on the TFDS versions and I don't ever recall seeing that error. Maybe there is something wrong with the text files I linked? I think sacrebleu can also download the appropriate test sets.
> Ran the t5 command now to try to check pre/post processing, but got:
That looks like a TFDS error, not sure how that would be happening. Do you want to open an issue in the TFDS repo and tag @adarob?<|||||>Yeah I can file an issue. Do you have an easy way to share your repo's generations?
Mine are here
[t5_base_newstest2013.de](https://github.com/huggingface/transformers/files/5163691/t5_base_gens.txt)
<|||||>Do you mean the predictions from T5 when run via the Mesh TF Transformer? Here are the inputs/targets/predictions that got spit out when I ran https://github.com/huggingface/transformers/issues/5543#issuecomment-656901662
[wmttmp_test_eval_wmt_t2t_ende_v003_targets.txt](https://github.com/huggingface/transformers/files/5163804/wmttmp_test_eval_wmt_t2t_ende_v003_targets.txt)
[wmttmp_test_eval_wmt_t2t_ende_v003_inputs.txt](https://github.com/huggingface/transformers/files/5163805/wmttmp_test_eval_wmt_t2t_ende_v003_inputs.txt)
[wmttmp_test_eval_wmt_t2t_ende_v003_999900_predictions.txt](https://github.com/huggingface/transformers/files/5163806/wmttmp_test_eval_wmt_t2t_ende_v003_999900_predictions.txt)
Also, I apologize, I got mixed up in the span of time between when this issue started and now. This thread is about a mismatch of performance on the test set, but since this issue was re-opened last week I was thinking we were discussing the validation set. You should use `newstest2014`; that is the test set used in the paper, mentioned in https://github.com/huggingface/transformers/issues/5543#issue-651544580, and is what I ran to get the predictions above and the score in https://github.com/huggingface/transformers/issues/5543#issuecomment-656901662 Here are the corresponding text files from Stanford NLP
https://nlp.stanford.edu/projects/nmt/data/wmt14.en-de/newstest2014.en
https://nlp.stanford.edu/projects/nmt/data/wmt14.en-de/newstest2014.de<|||||>On that data:
+ huggingface: 27.9273
+ mesh: 28.6642 (from your .predictions) file.
evaluation: used (pass tokenize='intl' to calculate_bleu). This improves both scores by 0.7 BLEU.<|||||>Cool, that is not too far off. Are you using beam search with the same hparams that we used? If so, we could maybe chalk this up to numerical differences. If not, I bet that beam search would explain a 0.7 BLEU difference.<|||||>That was the issue -- now equivalent! I got huggingface: 28.51 by adding `--max_length=128 --length_penalty=0.6`. (Num beams was already correct in the config.)
semi-interestingly, you can get 28.7 by adding "translate English to German: translate English to German:" (twice) to every source example (I was doing this by accident).<|||||>Awesome, great sleuthing! Good to hear that there is no disparity here. |
transformers | 5,542 | closed | QA pipeline should mask CLS tokens after handling impossible answer | Signed-off-by: Morgan Funtowicz <[email protected]> | 07-06-2020 11:48:16 | 07-06-2020 11:48:16 | Is the pytorch test failure unrelated here @mfuntowicz ?<|||||>Will check this :)<|||||>Closed as fix was included in #5496 |
transformers | 5,541 | closed | High F1 score. But poor accuracy during Inference due to tokenisation | # 🐛 Bug
## Information
I am using the **Bert-Base-cased** model to train my custom named entity recognition (NER) model with a sequence length of **512**.
Language I am using the model on: **English**
The problem arises when using:
* [ ] the official example scripts: **token-classification/run_ner.py**
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: **Named entity recognition**
* [ ] my own task or dataset: **Custom Dataset**
## To reproduce
Steps to reproduce the behavior:
1.Use the default NER Pipeline to load the custom trained model
` self.model_prediction_pipeline = pipeline(
"ner",
model=model_path,
tokenizer= model_path,
grouped_entities=True
)`
2. I've attached the Evaluation results of the model.
`eval_loss = 0.021479165139844086`
`eval_precision = 0.8725970149253731`
`eval_recall = 0.8868932038834951`
`eval_f1 = 0.8796870297923562`
`epoch = 5.0`
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
1. Model should produce a good accuracy corresponding to the F1 score.
2. However, during inference, I am not getting an accuracy over **30%**.
3. Not sure if the inconsistent tokenisation leads to poor results.
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.0
- Platform: Linux-4.15.0-109-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.7
- PyTorch version (GPU?): 1.4.0
- Tensorflow version (GPU?):NA
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| 07-06-2020 11:32:34 | 07-06-2020 11:32:34 | Hello! Why do you believe the tokenization to be the issue here?<|||||>@LysandreJik Thanks for reaching out.
Please find my observations on the inconsistency in the tokenizer (possible issue), since I was using the HuggingFace-provided script for training the custom NER model.
**1. Expected name:**
AUDLEY THOMPSON
Predicted name:
{'entity_group': 'B-PER', 'score': 0.9993636608123779, 'word': 'AUDLE'},
{'entity_group': 'I-PER', 'score': 0.8126876294612885, 'word': '##Y THOMPS'}
**Issue:**
The last two letters were skipped.
**2. Expected name:**
DANIEL, BROWN
Predicted name:
{'entity_group': 'B-PER', 'score': 0.9559168517589569, 'word': 'DAN'},
{'entity_group': 'I-PER', 'score': 0.9092316627502441, 'word': '##IE'},
{'entity_group': 'B-PER', 'score': 0.5071505904197693, 'word': '##L'},
{'entity_group': 'I-PER', 'score': 0.849787175655365, 'word': ', BROWN'}
**Issue:**
The wordpiece tokenizer splits the begin entity into smaller pieces. However, the model predicts those pieces as "I-PER" entities, which makes it really difficult to merge continuous entities.
**3. Expected name:**
VINEY, PAJTSHIA
Predicted name:
{'entity_group': 'B-PER', 'score': 0.9991838335990906, 'word': 'VI'},
{'entity_group': 'I-PER', 'score': 0.9591831763585409, 'word': '##Y , PA'}
{'entity_group': 'I-PER', 'score': 0.7927274107933044, 'word': '##IA'}
**Issue:**
'NE' is missing from the name 'VINEY'.
'JTSH' is missing from the name 'PAJTSHIA'.
**4. Expected name:**
Pierson, Garcia
Predicted name:
{'entity_group': 'B-PER', 'score': 0.9972472190856934, 'word': 'Pierson'},
{'entity_group': 'I-PER', 'score': 0.8200799822807312, 'word': 'GA'},
{'entity_group': 'I-PER', 'score': 0.8131067156791687, 'word': '##IA'}
**Issue:**
'RC' is missing from the name 'Garcia'.
Please let me know if I am missing something.
**Missing characters** and **split tokens** are major reasons for the **accuracy drop** while merging the Begin(**B-PER**) and Info(**I-PER**) entities.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,540 | closed | MobileBert embedding vectors values | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): MobileBert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
```py
def BERT_embeddings(data, max_seq_len):
tokenizer = MobileBertTokenizer.from_pretrained('google/mobilebert-uncased')
model = MobileBertModel.from_pretrained('google/mobilebert-uncased')
batches = np.array_split(data, 5)
encoded = None
for batch in batches:
encoded_text = tokenizer.batch_encode_plus(batch.values, max_length=max_seq_len, pad_to_max_length=True, truncation=True)
input_ids = tf.constant(encoded_text['input_ids'])#[None, :]
#attention_masks = encoded_data_train['attention_mask']
outputs = model(input_ids)
if encoded == None:
encoded = outputs[1]
else:
encoded = np.concatenate((encoded, outputs[1]), axis=0)
encoded_test_x = pd.DataFrame(data=encoded, index=test_x.index)
return encoded_test_x
print(BERT_embeddings(data, max_seq_len=18)
```
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
I would like to create sentence embeddings with MobileBert
## To reproduce
Steps to reproduce the behavior:
1. 'text' is one column pandas object
2. run above code
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
Embeddings have a form of vectors with large values like [-24888514.0 222398.593750...]
## Expected behavior
Should have [-1, 1] range
<!-- A clear and concise description of what you would expect to happen. -->
I would expect that elements of a vector will not be in range hundred of thousands but [-1, 1]
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.1
- Platform:
- Python version: 3.7.3
- PyTorch version (GPU?): -
- Tensorflow version (GPU?): 2.0.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 07-06-2020 10:53:28 | 07-06-2020 10:53:28 | Hi! Firstly, in your code you're using a `MobileBertModel` with tensorflow inputs; does that work? You should either use torch inputs or use `TFMobileBertModel` instead.
The model outputs have no reason to be bound between [-1, 1], as they're logits. You can pass them through a softmax layer to get the total sum to be 1 and have each result bounded between 0 and 1!<|||||>Hello, thank you for your answer. I modified the function according to your suggestion, applying softmax on the outputs to get hidden-state values:
```
def softmax(x):
e_x = np.exp((x - np.max(x))/2000000)
return e_x / e_x.sum(axis=0)
def BERT_embeddings2(test_x, max_seq_len):
tokenizer = MobileBertTokenizer.from_pretrained('google/mobilebert-uncased')
model = TFMobileBertModel.from_pretrained('google/mobilebert-uncased')
batches = np.array_split(test_x, 100)
encoded = None
for batch in batches:
encoded_text = tokenizer.batch_encode_plus(batch.values, max_length=max_seq_len, pad_to_max_length=True, truncation=True)
input_ids = tf.constant(encoded_text['input_ids'])#[None, :]
outputs = model(input_ids)
hidden_states = softmax(outputs[1])
if encoded is None:
encoded = hidden_states
else:
encoded = np.concatenate((encoded, hidden_states), axis=0)
```
Then I used the encoded data to run experiments on multiple neural network architectures. What I observed is that the loss function does not converge and is similar for various network shapes. This does not happen while training on the (non-mobile) Bert model. Do you know what the reason could be?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,539 | closed | Fine Tuning Using /question-answering/run_squad.py | How do I use `/question-answering/run_squad.py` to fine-tune my own custom dataset?
[StackOverflow][1]
[1]: https://stackoverflow.com/questions/62752709/how-to-fine-tune-bert-for-question-answering | 07-06-2020 10:42:59 | 07-06-2020 10:42:59 | Hi, there's a [readme](https://github.com/huggingface/transformers/tree/master/examples/question-answering) in `question-answering`!<|||||>Yes, I've gone through this resource. What I want to achieve is to fine-tune on a custom resource and not on SQuAD dataset. I want to train it on Question-Answer pairs.
You can find more about the issue [here](https://ai.stackexchange.com/questions/22358/how-to-fine-tune-bert-for-question-answering/22362?noredirect=1#comment34182_22362). |
transformers | 5,538 | closed | Easier way to download pretrained model files to local | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
Is there any why we can save vocab and models files to local without having to run the following code with cache_dir parameter. What i am asking is some utility or some endpoint from where we could get a tar of all model files.
```
from transformers import BartTokenizer, BartForSequenceClassification
tokenizer = BartTokenizer.from_pretrained('facebook/bart-large',cache_dir='path/to/local_dir')
model = BartForSequenceClassification.from_pretrained('facebook/bart-large',cache_dir='path/to/local_dir')
```
The reason for such a requirement is, we run our code in VMs where there is no internet access, so we have to run the code in our local for every model and save files. It would be helpful if there is a easier way to download all the files for pretrained models as a tar or zip file.
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
| 07-06-2020 09:35:31 | 07-06-2020 09:35:31 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Any news on this? We have the same problem when running a Docker-container (it takes a while to load the model all the time)<|||||>Hey! The recommended way of handling that is to use `from_pretrained`/`save_pretrained` to a directory, and to load from that directory from then on.
If for some reason that's not sufficient, you can use the `huggingface_hub` library as shown in the [following guide](https://huggingface.co/docs/huggingface_hub/how-to-downstream). I believe repositories cannot be downloaded as zips currently, cc @osanseviero @julien-c |
transformers | 5,537 | closed | Longformer - Compression | Hello, I have trained a longformer model on my custom question answering dataset, and now I wanted to know if there is a way to compress this trained model?
Thank you! | 07-06-2020 09:32:39 | 07-06-2020 09:32:39 | You could check out the example script under distillation: `/home/patrick/hugging_face/transformers/examples/distillation`<|||||>Did you upload the model to the model hub? I would be great so that we can take a closer look :-) <|||||>Hello, thanks for the reply. I checked the train.py script under examples/distillation, but it seems that it does not cater to longformer models (the mentioned models within the script are BERT, RoBERTa, and GPT2). Yes, I have uploaded the model: https://s3.amazonaws.com/models.huggingface.co/bert/Nomi97/Chatbot_QA.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,536 | closed | Create README | 07-06-2020 08:23:19 | 07-06-2020 08:23:19 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5536?src=pr&el=h1) Report
> Merging [#5536](https://codecov.io/gh/huggingface/transformers/pull/5536?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/58cca47c16149e43d1b516623d59e3c5d97f695e&el=desc) will **decrease** coverage by `1.14%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5536?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5536 +/- ##
==========================================
- Coverage 77.83% 76.69% -1.15%
==========================================
Files 141 141
Lines 24634 24634
==========================================
- Hits 19175 18892 -283
- Misses 5459 5742 +283
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5536?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5536/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.92% <0.00%> (-68.47%)` | :arrow_down: |
| [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5536/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |
| [src/transformers/optimization\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5536/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: |
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5536/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5536/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `92.44% <0.00%> (-1.17%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5536/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+1.00%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5536/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.76% <0.00%> (+32.51%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5536?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5536?src=pr&el=footer). Last update [58cca47...30f3554](https://codecov.io/gh/huggingface/transformers/pull/5536?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 5,535 | closed | How i can set the special token <|endoftext|> to an other id ? | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
| 07-06-2020 07:43:29 | 07-06-2020 07:43:29 | Right now it is set to 0, and let's say I want to change it to 50256 for GPT-2.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,534 | closed | How-to-fine-tune-bert-for-question-answering? | I wish to train two domain-specific models:
Domain 1: Constitution and related Legal Documents
Domain 2: Technical and related documents.
For Domain 1, I've access to a text-corpus with texts from the constitution and no question-context-answer tuples. For Domain 2, I've access to Question-Answer pairs.
Is it possible to fine-tune a light-weight BERT model for Question-Answering using just the data mentioned above?
If yes, what are the resources to achieve this task?
Some examples, from the huggingface/models library would be mrm8488/bert-tiny-5-finetuned-squadv2, sshleifer/tiny-distilbert-base-cased-distilled-squad, /twmkn9/albert-base-v2-squad2.
[You can find the question here.](https://datascience.stackexchange.com/questions/77213/how-to-fine-tune-bert-for-question-answering) | 07-06-2020 07:02:33 | 07-06-2020 07:02:33 | |
transformers | 5,533 | closed | [wip] Label smooth | 07-06-2020 03:06:20 | 07-06-2020 03:06:20 | ||
transformers | 5,532 | closed | Error in Loading bert-large-uncased-whole-word-masking-finetuned-squad | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
In order to use bert-large-uncased-whole-word-masking-finetuned-squad, I first downloaded it from https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-finetuned-squad-config.json and https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-finetuned-squad-pytorch_model.bin. Then, as I have used other pretrained models before, I renamed the files to config.json and pytorch_model.bin. Finally, I ran bert = BertForQuestionAnswering.from_pretrained(root_path + "/bert-large-uncased-whole-word-masked-finetuned-squad/") and met some errors.
The error info is as follows:
File "/home/dl/anaconda3/lib/python3.6/site-packages/transformers/modeling_utils.py", line 662, in from_pretrained
"Unable to load weights from pytorch checkpoint file. "
OSError: Unable to load weights from pytorch checkpoint file. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
*** Error in `/home/dl/anaconda3/bin/python3': corrupted double-linked list: 0x000055e3278c88b0 ***
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
| 07-06-2020 00:37:56 | 07-06-2020 00:37:56 | Sounds like your downloads were corrupted.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,531 | closed | fixed ImportError: cannot import name 'hf_bucket_url' on convert_pytorch_checkpoint_to_tf2.py | So I installed the latest version of transformers on Google Colab
!pip install transformers
When trying to invoke the conversion file using
!python /usr/local/lib/python3.6/dist-packages/transformers/convert_pytorch_checkpoint_to_tf2.py .py --help
Or trying to use
from transformers.file_utils import hf_bucket_url  # works
from transformers.convert_pytorch_checkpoint_to_tf2 import *  # fails
convert_pytorch_checkpoint_to_tf("gpt2", pytorch_file, config_file, tf_file)
I get this error
ImportError Traceback (most recent call last)
<ipython-input-3-dadaf83ecea0> in <module>()
1 from transformers.file_utils import hf_bucket_url
----> 2 from transformers.convert_pytorch_checkpoint_to_tf2 import *
3
4 convert_pytorch_checkpoint_to_tf("gpt2", pytorch_file, config_file, tf_file)
/usr/local/lib/python3.6/dist-packages/transformers/convert_pytorch_checkpoint_to_tf2.py in <module>()
20 import os
21
---> 22 from transformers import (
23 ALBERT_PRETRAINED_CONFIG_ARCHIVE_MAP,
24 BERT_PRETRAINED_CONFIG_ARCHIVE_MAP,
ImportError: cannot import name 'hf_bucket_url'
Turns out, there's a problem in the import of `hf_bucket_url`. Once fixed, the code ran properly. | 07-05-2020 22:18:20 | 07-05-2020 22:18:20 | `from transformers.file_utils import default_cache_path, hf_bucket_url`
I want to import hf_bucket_url on Colab, but I still got the error
"ImportError: cannot import name 'hf_bucket_url' from 'transformers.file_utils' (/usr/local/lib/python3.9/dist-packages/transformers/file_utils.py)"
Am I do something wrong? |
transformers | 5,530 | open | Tabert | # 🌟 New model addition
## Model description
a pre-trained language model for learning joint representations of natural language utterances and (semi-)structured tables for semantic parsing. TaBERT is pre-trained on a massive corpus of 26M Web tables and their associated natural language context, and could be used as a drop-in replacement of a semantic parsers original encoder to compute representations for utterances and table schemas (columns).
<!-- Important information -->
## Open source status
* [X] the model implementation is available: (give details)
https://github.com/facebookresearch/TaBERT
* [X] the model weights are available: (give details)
https://github.com/facebookresearch/TaBERT
* [ ] who are the authors: (mention them, if possible by @gh-username)
| 07-05-2020 20:29:42 | 07-05-2020 20:29:42 | In their readme they say that the implementation to this project is still WIP, but they said this on release.
It should be really easy to add it here while they are already using this library in their implementation.
Does the huggingface team has more information about this, if the community can open an PR for it or waiting for the original authors ?<|||||>Looking forward to this new model.<|||||>Hello all!
I looked at the list of models in the transformers site and TaBERT is still not listed. Does anyone know when it is going to be ready?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Not a stale, still cant see TaBert on HuggingFace List

|
transformers | 5,529 | closed | [fix] pin sacrebleu to fix CI ImportError | 07-05-2020 19:11:44 | 07-05-2020 19:11:44 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5529?src=pr&el=h1) Report
> Merging [#5529](https://codecov.io/gh/huggingface/transformers/pull/5529?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/58cca47c16149e43d1b516623d59e3c5d97f695e&el=desc) will **decrease** coverage by `0.34%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5529?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5529 +/- ##
==========================================
- Coverage 77.83% 77.49% -0.35%
==========================================
Files 141 141
Lines 24634 24634
==========================================
- Hits 19175 19090 -85
- Misses 5459 5544 +85
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5529?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5529/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.92% <0.00%> (-68.47%)` | :arrow_down: |
| [src/transformers/optimization\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5529/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: |
| [src/transformers/tokenization\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5529/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `71.83% <0.00%> (-23.95%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5529/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.59% <0.00%> (+0.33%)` | :arrow_up: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5529/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+1.00%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5529/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.76% <0.00%> (+32.51%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5529?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5529?src=pr&el=footer). Last update [58cca47...e21636a](https://codecov.io/gh/huggingface/transformers/pull/5529?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 5,528 | closed | 3.0.1: "unexpected keyword argument 'is_pretokenized'" when using batch_encode_plus() w/ Fast Tokenizers | # 🐛 Bug
## Information
See title. Does not occur with "slow" tokenizers.
## To reproduce
```python
from transformers import GPT2TokenizerFast
tokenizer = GPT2TokenizerFast.from_pretrained('gpt2')
texts = ['I am a teapot', 'Short and stout']
tokenizer.batch_encode_plus(texts)
```
## Trace
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-3-a99759a2bc25> in <module>
1 texts = ['I am a teapot', 'Short and stout']
2
----> 3 tokenizer.batch_encode_plus(texts)
/usr/local/lib/python3.8/site-packages/transformers-3.0.1-py3.8.egg/transformers/tokenization_utils_base.py in batch_encode_plus(self, batch_text_or_text_pairs, add_special_tokens, padding, truncation, max_length, stride, is_pretokenized, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
1813 )
1814
-> 1815 return self._batch_encode_plus(
1816 batch_text_or_text_pairs=batch_text_or_text_pairs,
1817 add_special_tokens=add_special_tokens,
/usr/local/lib/python3.8/site-packages/transformers-3.0.1-py3.8.egg/transformers/tokenization_gpt2.py in _batch_encode_plus(self, *args, **kwargs)
365 )
366
--> 367 return super()._batch_encode_plus(*args, **kwargs)
368
369 def _encode_plus(self, *args, **kwargs) -> BatchEncoding:
/usr/local/lib/python3.8/site-packages/transformers-3.0.1-py3.8.egg/transformers/tokenization_utils_fast.py in _batch_encode_plus(self, batch_text_or_text_pairs, add_special_tokens, padding_strategy, truncation_strategy, max_length, stride, is_pretokenized, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
340 encodings = [encodings]
341 else:
--> 342 encodings = self._tokenizer.encode_batch(
343 batch_text_or_text_pairs, add_special_tokens=add_special_tokens, is_pretokenized=is_pretokenized
344 )
TypeError: encode_batch() got an unexpected keyword argument 'is_pretokenized'
```
## Environment info
- `transformers` version: 3.0.1
- `tokenizers` version: 0.8.0rc4
- Platform: macOS
- Python version: 3.8.2
| 07-05-2020 18:40:52 | 07-05-2020 18:40:52 | I must admit that I am confused here. I don't see how this is possible with one of the `0.8.0` version of `tokenizers`, they should have `is_pretokenized` as expected argument of both `encode` and `encode_batch`. Unfortunately, I was not able to reproduce it using the snippet you provided.
Would it be possible that you share a Google Colab showing this bug?<|||||>Hmm, it doesn't appear to be happening on Colab: https://colab.research.google.com/drive/1TCGbkP63BAwKHFob9YvP_PE7-8cyAZ3b?usp=sharing
I have seen other weird issues with tokenizers so it might be an issue on my local config; however the versions are definitely correct, which is baffling.<|||||>If you can reproduce this on your local config, can you check `tokenizers.__version__`? I suspect that for some reason you had a previous version that didn't include this argument. Sometimes I have weird bugs with `pip` that I don't really understand either, and so it might be reporting a version different from the one actually loaded.<|||||>Ugh, yes, `tokenizers.__version__` returned 0.7.0. (seems like it didn't get overwritten when upgrading to `transformers` 3.0.1)
After uninstalling tokenizers (twice for some reason) via `pip`, then reinstalling 0.8.0rc4, it works now.
Thanks for the help! Closing. |
transformers | 5,527 | closed | Tf to pytorch | Hi, I have XLNet checkpoint file and want to convert it to pytorch. I have converted the file to pytorch and when I am using the same file with simpletramsformers for text classification. It says the number of labels is not matching. Where should I specify the number of classes while converting from tf to pytorch? | 07-05-2020 16:55:13 | 07-05-2020 16:55:13 | #Code used to convert from tf to Pytorch
import torch
from transformers import (
CONFIG_NAME,
WEIGHTS_NAME,
XLNetConfig,
XLNetForQuestionAnswering,
XLNetForSequenceClassification,
XLNetLMHeadModel,
load_tf_weights_in_xlnet,
XLNetModel
)
tf_checkpoint_path="./model/"
xlnet_config_file = "./model/config.json"
pytorch_dump_path="./xlnetpytorch/"
config = XLNetConfig.from_json_file(xlnet_config_file)
print("Building PyTorch model from configuration: {}".format(str(config)))
model = XLNetForSequenceClassification(config)
#Load weights from tf checkpoint
model = load_tf_weights_in_xlnet(model, config, tf_checkpoint_path)
#Save pytorch-model
print("Save PyTorch model to {}".format(pytorch_dump_path))
model.save_pretrained(pytorch_dump_path)<|||||>Use config.num_labels = 536 (number of labels) in my case to define the number of labels in custom data set before initialising model i.e., before line model = XLNetForSequenceClassification(config) |
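For anyone hitting the same label mismatch, a short sketch of that fix in context (the paths and the 536 label count are the ones from this thread; substitute your own):
```python
from transformers import XLNetConfig, XLNetForSequenceClassification, load_tf_weights_in_xlnet

config = XLNetConfig.from_json_file("./model/config.json")
config.num_labels = 536  # number of classes in the custom dataset
model = XLNetForSequenceClassification(config)  # classification head now has 536 outputs
model = load_tf_weights_in_xlnet(model, config, "./model/")
model.save_pretrained("./xlnetpytorch/")
```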
transformers | 5,526 | closed | Fix `RobertaClassificationHead` style consistency. | There's a slight inconsistency in `RobertaClassificationHead` in that it takes in the whole sequence output from the `RobertaModel`, and extracts the pooled output inside its own forward method, seen [here](https://github.com/huggingface/transformers/blob/58cca47c16149e43d1b516623d59e3c5d97f695e/src/transformers/modeling_roberta.py#L561).
This is different from other models, where the pooled output is computed beforehand and directly expected as input in the classifier. E.g. in [`BertForSequenceClassification`](https://github.com/huggingface/transformers/blob/58cca47c16149e43d1b516623d59e3c5d97f695e/src/transformers/modeling_bert.py#L1270), [`DistilBertForSequenceClassification`](https://github.com/huggingface/transformers/blob/58cca47c16149e43d1b516623d59e3c5d97f695e/src/transformers/modeling_distilbert.py#L631), [`BartForSequenceClassification`](https://github.com/huggingface/transformers/blob/58cca47c16149e43d1b516623d59e3c5d97f695e/src/transformers/modeling_bart.py#L1138), etc.
This mainly addresses this issue in `modeling_roberta.py` and `modeling_tf_roberta.py`. Additionally, some minor aesthetic changes are made to these files in order to pass the black / sort code quality checks.
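For readers skimming the diff, a minimal illustrative sketch of the two conventions (simplified, not the library code):
```python
import torch
from torch import nn


class HeadOnSequenceOutput(nn.Module):
    """RoBERTa-style: the head receives the full sequence output and pools internally."""

    def __init__(self, hidden_size, num_labels):
        super().__init__()
        self.dense = nn.Linear(hidden_size, hidden_size)
        self.out_proj = nn.Linear(hidden_size, num_labels)

    def forward(self, sequence_output):  # (batch, seq_len, hidden)
        x = sequence_output[:, 0, :]  # pooling happens inside the head
        return self.out_proj(torch.tanh(self.dense(x)))


class HeadOnPooledOutput(nn.Module):
    """BERT/DistilBERT/BART-style: pooling is done beforehand, the head only classifies."""

    def __init__(self, hidden_size, num_labels):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, num_labels)

    def forward(self, pooled_output):  # (batch, hidden)
        return self.classifier(pooled_output)
```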
Note: This PR is a duplicate of #4107 with minor changes made to pass code quality checks. Closed that one since it was outdated. | 07-05-2020 16:01:04 | 07-05-2020 16:01:04 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5526?src=pr&el=h1) Report
> Merging [#5526](https://codecov.io/gh/huggingface/transformers/pull/5526?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/58cca47c16149e43d1b516623d59e3c5d97f695e&el=desc) will **increase** coverage by `0.56%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5526?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5526 +/- ##
==========================================
+ Coverage 77.83% 78.39% +0.56%
==========================================
Files 141 141
Lines 24634 24634
==========================================
+ Hits 19175 19313 +138
+ Misses 5459 5321 -138
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5526?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5526/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.00% <100.00%> (ø)` | |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5526/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.04% <100.00%> (ø)` | |
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5526/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.11% <0.00%> (+0.28%)` | :arrow_up: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5526/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+1.25%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5526/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.76% <0.00%> (+32.51%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5526?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5526?src=pr&el=footer). Last update [58cca47...21fae42](https://codecov.io/gh/huggingface/transformers/pull/5526?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>I understand why it's inconsistent, but it would introduce a breaking change to the `RobertaClassificationHead` and `TFRobertaClassificationHead` objects!<|||||>@LysandreJik Thanks, yes, that's correct. Do you think this is something that may be incorporated in a future major release, or should I close this MR?<|||||>As an alternative, I've been using a simple [SequencePooler](https://github.com/ranamihir/pytorch_common/blob/ec2d9baa57d3cb5ac80232bea75ea0a49a6ba100/pytorch_common/utils.py#L912-L993) class to provide a generic way of extracting the pooled representation from different models. I'd be happy to create a PR for this if you think it could be useful.
```
>>> from pytorch_common.utils import SequencePooler
>>> pooler = SequencePooler("bert")
>>> pooler
SequencePooler(model_type=bert)
>>> pooler = SequencePooler("roberta")
>>> pooler
SequencePooler(model_type=roberta)
>>> pooler = SequencePooler("dummy")
WARNING:root:No supported sequence pooler was found for model of type 'dummy'. Using the default one.
>>> pooler
SequencePooler(model_type=default)
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,525 | closed | WARNING:transformers.tokenization_utils:Keyword arguments {'add_space_before_punct_symbol': True} not recognized. | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
ctrl
Language I am using the model on (English, Chinese ...):
English
The problem arises when using:
* [ ] the official example scripts: (give details below)
```
python run_generation.py \
--model_type ctrl \
--model_name ctrl --length=100 --temperature 0.2 --num_return_sequences=5 --p=0.8 --seed=17 --repetition_penalty=1.2
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
```
- `transformers` version: 3.0.1
- Platform: Darwin-19.5.0-x86_64-i386-64bit
- Python version: 3.7.4
- PyTorch version (GPU?): 1.2.0 (False)
- Tensorflow version (GPU?): 2.1.0 (False)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
``` | 07-05-2020 11:44:22 | 07-05-2020 11:44:22 | |
transformers | 5,524 | closed | How to fine-tune tinyBERT for question-asnwering | # ❓ Questions & Help
How do I fine-tune tiny-BERT for question answering on my custom dataset? I have a list of questions and corresponding answers in CSV format, and I want to fine-tune tiny-BERT on this dataset.
[You can find it at StackOverFlow here](https://stackoverflow.com/questions/62710931/huggingface-transformers-model-for-legal-question-answering)
| 07-05-2020 10:18:44 | 07-05-2020 10:18:44 | |
transformers | 5,523 | closed | Create model card | 07-05-2020 10:17:40 | 07-05-2020 10:17:40 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5523?src=pr&el=h1) Report
> Merging [#5523](https://codecov.io/gh/huggingface/transformers/pull/5523?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/58cca47c16149e43d1b516623d59e3c5d97f695e&el=desc) will **increase** coverage by `0.02%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5523?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5523 +/- ##
==========================================
+ Coverage 77.83% 77.85% +0.02%
==========================================
Files 141 141
Lines 24634 24634
==========================================
+ Hits 19175 19180 +5
+ Misses 5459 5454 -5
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5523?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5523/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+1.25%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5523?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5523?src=pr&el=footer). Last update [58cca47...0780d53](https://codecov.io/gh/huggingface/transformers/pull/5523?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 5,522 | closed | Added data collator for permutation (XLNet) language modeling and related calls | Added `DataCollatorForPermutationLanguageModeling` in `data/data_collator.py` to return necessary inputs (applies masking and generates revelant tensors input_ids, perm_mask, target_mask and labels as per https://github.com/zihangdai/xlnet/blob/master/data_utils.py) for language modeling training with XLNetLMHeadModel. Also added related arguments, logic and calls in `examples/language-modeling/run_language_modeling.py`. Defined a separate `--plm_probability` flag for its use.
Also looked into CTRL - it uses a CLM loss just like GPT and GPT-2, so should work out of the box with this script (provided `past` is taken care of similar to `mems` for XLNet). Added a few words in the comments to reflect this.
Changed calls and imports appropriately.
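A hedged usage sketch of the collator this PR adds (only the `plm_probability` argument comes from the description above; `1/6` is just an example value and details may differ from the merged API):
```python
from transformers import DataCollatorForPermutationLanguageModeling, XLNetTokenizer

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")

# Fraction of tokens in each span context selected for prediction (the --plm_probability flag)
data_collator = DataCollatorForPermutationLanguageModeling(tokenizer=tokenizer, plm_probability=1 / 6)

# For every batch the collator builds the input ids, the permutation mask,
# the target mask/mapping and the labels expected by XLNetLMHeadModel.
```
In `run_language_modeling.py` this collator is simply passed as `data_collator=...` to `Trainer`, alongside an `XLNetLMHeadModel` and a text dataset.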
Resolves: #4739, #2008 (partially) | 07-05-2020 10:01:14 | 07-05-2020 10:01:14 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5522?src=pr&el=h1) Report
> Merging [#5522](https://codecov.io/gh/huggingface/transformers/pull/5522?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/58cca47c16149e43d1b516623d59e3c5d97f695e&el=desc) will **decrease** coverage by `2.04%`.
> The diff coverage is `16.32%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5522?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5522 +/- ##
==========================================
- Coverage 77.83% 75.79% -2.05%
==========================================
Files 141 141
Lines 24634 24682 +48
==========================================
- Hits 19175 18708 -467
- Misses 5459 5974 +515
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5522?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/5522/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `19.81% <14.58%> (-78.60%)` | :arrow_down: |
| [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/5522/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.22% <100.00%> (ø)` | |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5522/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.92% <0.00%> (-68.47%)` | :arrow_down: |
| [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5522/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |
| [...rc/transformers/data/datasets/language\_modeling.py](https://codecov.io/gh/huggingface/transformers/pull/5522/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2xhbmd1YWdlX21vZGVsaW5nLnB5) | `34.69% <0.00%> (-57.15%)` | :arrow_down: |
| [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/5522/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `50.74% <0.00%> (-35.83%)` | :arrow_down: |
| [src/transformers/trainer\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5522/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `60.00% <0.00%> (-25.72%)` | :arrow_down: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5522/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `16.66% <0.00%> (-21.30%)` | :arrow_down: |
| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/5522/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `32.00% <0.00%> (-17.10%)` | :arrow_down: |
| ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/5522/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5522?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5522?src=pr&el=footer). Last update [58cca47...a99871b](https://codecov.io/gh/huggingface/transformers/pull/5522?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>@patrickvonplaten Please review<|||||>Hey @shngt - Thanks a mille for your PR. Pretraining for XLNet is not easy at all, so this is a super valuable PR :-)
Just re-read the paper and looked into the official xlnet code -> I think I finally understand now how it works exactly :D
Because it's not at all straight-forward to understand what happens there, it would be great if we can try to make the code as understandable as possible -> that's why I left so many comments mainly on variable names.
Also, we should add a test to `test_trainer.py` to next to the other data collators to test this one :-)
Looking forward to merge this soon :-) <|||||>@patrickvonplaten I've added more detailed comments and fixed the naming issues. I also added a few tests based on what I could see in `tests/test_trainer.py`. Let me know what you think :)<|||||>Awesome job @shngt ! This looks very clean now :-) Good to merge IMO.
Pinging @julien-c @LysandreJik @sgugger. <|||||>Pinging @thomwolf and @julien-c for notification and merging - Great job @shngt!<|||||>Impressive work, I was following this from the beginning.
Can we continue training XLNet now on domain-specific corpus ? and when will this merge be available to use?
Thanks<|||||>PR should be ready for use as of now, if you install from master :-) <|||||>Great work @shngt, this is very nice.<|||||>Hi, I raised an issue here #22435 because I have a question about this `DataCollatorForPermutationLanguageModeling`. Looking forward to your guys' response. Thank you! |
transformers | 5,521 | closed | What should i do if I want a model class similar to BertForSequenceClassification? | # ❓ Questions & Help
Hi!
For my task I want to implement a class similar to BertForSequenceClassification, but a little different, for example, i may want multiple linear head on top of the bert encoding. Call it BertForMySequenceClassification,
But I still want to use the pretrained bert parameters, and hopefully do easy load/save.
For example, i hope i can BertForSequenceClassification.from_pretrianed(some_path)
Any suggestion or help are much appreciated!
Thanks! | 07-05-2020 02:41:39 | 07-05-2020 02:41:39 | Hi @cloudygoose ,
Take a look at the code of `BertForSequenceClassification` https://huggingface.co/transformers/_modules/transformers/modeling_bert.html#BertForSequenceClassification,
All it does is, pass the inputs through `BertModel`, which returns the pooled output of bert and the applies the classifier on top it.
You can follow the same process, create `YourSequenceClassification` class by sub-classing `BertPreTrainedModel`,
init your `BertModel` and the additional linear heads and final classfication layer and do forward as
`BertModel -> additional linear heads -> classifier`
You can just take the `BertForSequenceClassification` as is and additional layers before the `classifier`
Hope this helps<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,520 | closed | AdamW step device error | I have a model:
```
class M(nn.Module):
def __init__(self, ...):
self.A = nn.Parameter(...)
self.B = nn.Parameter(...)
self.C = torch.einsum(..., self.A, self.B)
def forward(self, D):
return func(self.C.to(D.device), D)
```
However, I'm getting the following error when training this model on GPU:
```
File "file.py", line 71, in optimizer_step
optimizer.step()
File ".../lib/python3.7/site-packages/torch/optim/lr_scheduler.py", line 67, in wrapper
return wrapped(*args, **kwargs)
File ".../lib/python3.7/site-packages/transformers/optimization.py", line 155, in step
exp_avg.mul_(beta1).add_(grad, alpha=1.0 - beta1)
RuntimeError: expected device cuda:0 but got device cpu
```
This is happening in 2.11.0. Am I doing something incorrectly here? | 07-04-2020 20:24:08 | 07-04-2020 20:24:08 | |
transformers | 5,519 | closed | Get prediction_scores from BART forward method | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
I'm trying to implement a model by fine-tuning BART for a dialogue task. This is my sample code:
```
tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')
model = BartForConditionalGeneration.from_pretrained('facebook/bart-large')
inputs = tokenizer.encode(" Hello, my dog is cute", return_tensors="pt")
decoder_input = tokenizer.encode(" Oh! I don't know that you have dog! How is it?", return_tensors="pt")
output = model(input_ids=inputs,decoder_input_ids=decoder_input)[0]
```
I want to get prediction scores for tokens, so I expected the output to have shape [1, 17, 50265] (the length of the decoder_input string), but it has shape [1, 1, 50265].
How can I get the prediction_scores?
| 07-04-2020 20:22:47 | 07-04-2020 20:22:47 | I just noticed the same.
It seems that you have to include the `labels` parameter as well to get the predictions for your `decoder_input_ids`. You didn't have to do this in versions prior to 3.0, so it appears as if something has changed so that it's required now.
Maybe @sshleifer knows since I think he's been doing quite a bit of work on the summarization bits as of late.<|||||>Great catch. I think you are spot on that the API changed a bit in 3.0. We should have documented it better.
If you pass `use_cache=False` to `model()` this problem goes away. (use_cache is set to true by default to speed up seq2seq tasks).
You can also pass `use_cache=False` to `from_pretrained`, as shown below:
```python
tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')
model = BartForConditionalGeneration.from_pretrained('facebook/bart-large', use_cache=False)
inputs = tokenizer.encode(" Hello, my dog is cute", return_tensors="pt")
decoder_input = tokenizer.encode(" Oh! I don't know that you have dog! How is it?", return_tensors="pt")
output = model(input_ids=inputs,decoder_input_ids=decoder_input)[0]
assert output.shape[1] ==17 # passes
```<|||||>> Great catch. I think you are spot on that the API changed a bit in 3.0. We should have documented it better.
>
> If you pass `use_cache=False` to `model() this problem goes away. (use_cache is set to true by default to speed up seq2seq tasks). You can also pass `use_cache=False`to`from_pretrained`, as shown below:
>
> ```python
> tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')
> model = BartForConditionalGeneration.from_pretrained('facebook/bart-large', use_cache=False)
> inputs = tokenizer.encode(" Hello, my dog is cute", return_tensors="pt")
> decoder_input = tokenizer.encode(" Oh! I don't know that you have dog! How is it?", return_tensors="pt")
> output = model(input_ids=inputs,decoder_input_ids=decoder_input)[0]
> assert output.shape[1] ==17 # passes
> ```
Thanks a lot! I think it's better to publish a post which explains differences between version 2 and 3.0 |
transformers | 5,518 | closed | Make T5 compatible with ONNX | This is a small PR to make T5 exportable to ONNX with any op>9. It addresses an issue outlined in #5075 where T5 would not export to ONNX. In order to make it exportable, 2 changes are made:
- A torch.einsum is replaced with a tensor multiplication in 96d0ec7 since onnx does not currently support this notation
- Decoder inputs / embeddings are defaulted to the encoder's inputs / embeddings if they are not declared. I believe this is clearer as most of the examples right now include something along the lines of `model(input_ids=input_ids, decoder_input_ids=input_ids)`. It also allows t5 to be executed with the more common paradigm of calls like model(inputs) | 07-04-2020 15:41:38 | 07-04-2020 15:41:38 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5518?src=pr&el=h1) Report
> Merging [#5518](https://codecov.io/gh/huggingface/transformers/pull/5518?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/58cca47c16149e43d1b516623d59e3c5d97f695e&el=desc) will **decrease** coverage by `1.02%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5518?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5518 +/- ##
==========================================
- Coverage 77.83% 76.81% -1.03%
==========================================
Files 141 141
Lines 24634 24637 +3
==========================================
- Hits 19175 18925 -250
- Misses 5459 5712 +253
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5518?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5518/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `84.44% <100.00%> (+0.09%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5518/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.62% <0.00%> (-73.11%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5518/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.92% <0.00%> (-68.47%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5518/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.09% <0.00%> (+1.37%)` | :arrow_up: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5518/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+1.50%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5518/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.76% <0.00%> (+32.51%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5518/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `94.98% <0.00%> (+74.19%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5518?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5518?src=pr&el=footer). Last update [58cca47...904fa94](https://codecov.io/gh/huggingface/transformers/pull/5518?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks a lot for the review @mfuntowicz! I adjusted it to the coding style you outlined. Feel free to merge if you're happy with it.<|||||>LGTM! Thanks @abelriboulot, great addition 👍 <|||||>Hey, did you happen to make a colab which shows this off? I was trying to figure out exporting T5 as ONNX a week ago, but got stuck. It seems you've fixed it though?<|||||>@ConProgramming sure thing, I’ll share something this weekend!<|||||>@abelriboulot Did you ever get around to making that colab? It'd help a lot. 😅<|||||>Hey @ConProgramming, I had a very ad-hoc solution for this, therefore I worked on a PR to make the huggingface conversion compatible with all models with a compatible graph. You can take a look at it there: #5687
If you pull this version you should be able to export T5 with the following line:
`python convert_graph_to_onnx.py --framework pt --model t5-base ~/test-t5/t5.onnx --check-loading --opset 12`
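Once the export succeeds, the resulting file can be sanity-checked with onnxruntime along these lines (a rough sketch only — the input names follow what the converter reports for this model, and the path matches the export command above; adjust as needed):
```python
import os

import numpy as np
import onnxruntime
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
session = onnxruntime.InferenceSession(os.path.expanduser("~/test-t5/t5.onnx"))

token_ids = tokenizer.encode("translate English to German: Hello, how are you?")
inputs = {
    "input_ids": np.array([token_ids], dtype=np.int64),
    "attention_mask": np.ones((1, len(token_ids)), dtype=np.int64),
}
# run all graph outputs; shapes are enough for a quick sanity check
outputs = session.run(None, inputs)
print([output.shape for output in outputs])
```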
I checked and it seems to work well! Let me know if it works for you.<|||||>Thanks @abelriboulot, but I'm still having some issues with it... it works with t5-base, but depending on how I provide the path to my own model I get two different errors:
- `!python transformers/src/transformers/convert_graph_to_onnx.py --framework pt --model "drive/My Drive/paraphraser/t5_paraphrase/pytorch_model.bin" onnx/paraphraser.onnx --check-loading --opset 12` : Error while converting the model: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte
- `drive/My Drive/paraphraser/t5_paraphrase` : Error while converting the model: Model name 'drive/My Drive/paraphraser/t5_paraphrase' was not found in tokenizers model name list (t5-small, t5-base, t5-large, t5-3b, t5-11b). We assumed 'drive/My Drive/paraphraser/t5_paraphrase' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.
Is it designed to work with finetuned models?<|||||>Hey @ConProgramming, it should work on fine tuned models, you can have a look at the test_onnx file as an example of this. The model path should be to the directory that contains the model (and the tokenizer in case you do not specify it). It looks like the second error relates to not being able to find a tokenizer, is it present in your directory? If you are using another directory / pretrained model you can specify it with --tokenizer
If you still have issues and it's something you can share, I'm happy to have a look and help you with this.<|||||>@abelriboulot Adding `--tokenizer t5-base` fixed the issue and exported a model without any errors... looks like it worked, thanks again!!<|||||>Oh awesome! Great to hear it! I might add a message to make it more obvious to the user.<|||||>@abelriboulot
I tried this (for cpu):
convert_graph_to_onnx.py --framework=pt --tokenizer=t5-base --model=t5-base onnx\t5.onnx --check-loading --opset=12
but getting error:
ONNX opset version set to: 12
Loading pipeline (model: t5-base, tokenizer: t5-base)
Some weights of T5Model were not initialized from the model checkpoint at t5-base and are newly initialized: ['encoder.embed_tokens.weight', 'decoder.embed_tokens.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Using framework PyTorch: 1.5.1+cpu
Found input input_ids with shape: {0: 'batch', 1: 'sequence'}
Found input attention_mask with shape: {0: 'batch', 1: 'sequence'}
Found output output_0 with shape: {0: 'batch', 1: 'sequence'}
**Error while converting the model: 'BaseModelOutputWithPast' object has no attribute 'shape'**
Am I doing something wrong here?<|||||>Hey @oliversms, are you using the specific fork or master? I can confirm the command you submitted works on my side.<|||||>Apologies for the delayed reply; Im actually using the fork. I beleive it may have been an env related issue. However after getting past that issue Im now running into a new issue:
Specifically on this line:
[tokens = nlp.tokenizer("This is a sample output", return_tensors=framework)](https://github.com/abelriboulot/transformers/blob/58cca47c16149e43d1b516623d59e3c5d97f695e/src/transformers/convert_graph_to_onnx.py#L89)
getting this error:
ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length.
Attempting set padding & truncation to True doesnt fix the issue.<|||||>Hey @oliversms! It looks like you are not using the right branch. You need [this specific branch](https://github.com/abelriboulot/transformers/tree/update_convert_to_onnx) for it to work. Hope it works for you!
Abel<|||||>If anyone needs, I created a small package ([onnxt5](https://github.com/abelriboulot/onnxt5)) which lets you easily and efficiently export T5 and serve it! Feel free to raise issues, it's an alpha at the moment.<|||||>@abelriboulot Hi, I pulled your branch and tried to convert a t5-base with
python ../transformers/src/convert_graph_to_onnx.py --framework pt --model t5-base t5-base.onnx --check-loading --opset 12
and still got the "Error while converting the model: You have to specify either decoder_input_ids or decoder_inputs_embeds" Any ideas? |
transformers | 5,517 | closed | getting different model result from tokenizer vs tokenizer.encode function | # 🐛 Bug
I am using the GPT-2 model to encode sentences. I am confused between `tokenizer.encode` and calling `tokenizer` directly:
If I am using tokenizer.encode:
```
sentence = 'checking single sentences'
from transformers import GPT2Tokenizer, GPT2Model
import torch
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium')
model = GPT2Model.from_pretrained('gpt2-medium')
input_ids = torch.tensor(tokenizer.encode(sentence)).unsqueeze(0) # Batch size 1
outputs = model(input_ids)
outputs[0]
```
output is :
```
tensor([[[ 0.6804, 0.4182, 0.3483, ..., -0.3102, 0.0341, 0.4901],
[-0.4060, 0.7790, 0.2695, ..., -0.4763, 0.1817, 0.0600],
[ 0.7916, 0.6078, 0.4642, ..., -0.5557, -0.1571, -0.1220]]],
grad_fn=<ViewBackward>)
```
While using tokenizer alone :
```
from transformers import GPT2Tokenizer, GPT2Model
sentence = 'checking single sentences'
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium')
model = GPT2Model.from_pretrained('gpt2-medium')
inputs = tokenizer(sentence, return_tensors="pt")
outputs = model(**inputs)
outputs[0]
```
output is :
```
# tensor([[[ 0.5511, 0.4063, 0.2453, ..., -0.4217, 0.1519, 0.0898],
# [-0.1138, -0.0232, 0.1736, ..., -0.5408, 0.0145, 0.2985],
# [ 0.1856, 0.3127, 0.2430, ..., -0.7532, -0.2332, 0.1506]]],
# grad_fn=<ViewBackward>)
```
I tried to examine the output of both tokenizer calls:
```
from transformers import GPT2Tokenizer, GPT2Model
import torch
sentence = 'checking single sentences'
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium')
input_ids = tokenizer.encode(sentence)
inputs = tokenizer(sentence, return_tensors="pt")
print(input_ids)
print(inputs)
```
output :
```
[41004, 2060, 13439]
{'input_ids': tensor([[41004, 2060, 13439]]), 'token_type_ids': tensor([[0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1]])}
```
Which is the recommended tokenizer function?
| 07-04-2020 12:09:45 | 07-04-2020 12:09:45 | |
transformers | 5,516 | closed | Addition of a DialoguePipeline | - Addition of a Conversation object to keep track of multi-turn conversation
- Creation of a DialoguePipeline to process Conversations using the history context
- Integration tests for DialoguePipeline using `microsoft/DialoGPT-medium` | 07-04-2020 10:45:00 | 07-04-2020 10:45:00 | This is a back-port of https://github.com/guillaume-be/rust-bert/pull/57. I did not implement the ConversationManager as I felt it did not quite fit the general API of this library. I however added the concept of `Conversations` keeping track of past user inputs, generated responses and the history token ids. Conversations include a print option that will display the entire dialogue.
```python
print(conversation)
```
```
Conversation id: 2716da3e-8cde-4071-97bc-218d88764b7b
user >> What's the last book you have read?
bot >> The Last Question
user >> Why do you recommend it?
bot >> It's a good book.
```
(ps: note that this example is the response of `DialoGPT-medium` without sampling, which is an interesting coincidence for a computer-generated response)<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5516?src=pr&el=h1) Report
> Merging [#5516](https://codecov.io/gh/huggingface/transformers/pull/5516?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/91cb95461e438dc57555c4f57f8ce95a56328036&el=desc) will **increase** coverage by `1.49%`.
> The diff coverage is `84.34%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5516?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5516 +/- ##
==========================================
+ Coverage 78.35% 79.85% +1.49%
==========================================
Files 146 146
Lines 26454 26568 +114
==========================================
+ Hits 20729 21215 +486
+ Misses 5725 5353 -372
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5516?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/5516/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.24% <ø> (ø)` | |
| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/5516/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `79.36% <84.34%> (+0.86%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/5516/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.22% <0.00%> (-63.98%)` | :arrow_down: |
| [src/transformers/tokenization\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5516/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `71.83% <0.00%> (-23.95%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5516/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.65% <0.00%> (-23.68%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5516/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.25%)` | :arrow_up: |
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5516/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.11% <0.00%> (+0.28%)` | :arrow_up: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5516/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.36% <0.00%> (+0.40%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5516/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `66.66% <0.00%> (+3.63%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5516/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+34.61%)` | :arrow_up: |
| ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/5516/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5516?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5516?src=pr&el=footer). Last update [91cb954...9734829](https://codecov.io/gh/huggingface/transformers/pull/5516?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>@patrickvonplaten Thank you so much for the detailed review! I have pushed some changes addressing most of the suggested changes.
A few points still open for discussion:
- I am still unsure about using the default `max_length` from the model_config, especially since the default model used for the pipeline `DialoGPT` does not give this value and it would default to a very low value.
- I have updated the history truncation to take out entire spans (delimited by EOS tokens). I believe this would lead to a more natural truncation of a conversation (skip the first N turn) rather than possibly cutting an input/response in the middle
- You are right that the API differs a bit from the other pipeline (although the QA pipeline also takes special inputs). Here the challenge is that the conversations are by definition stateful. I actually started the Rust implementation storing the conversations in the model itself, allowing simpler inputs to be passed on. This requires the model to be mutable, which can be problematic for multithreading. Now in Python you may not have this issue, but you may want to deploy an conversational application that scales and deploys multiple conversation pipeline workers / containers (the Rust compiler actually points out to an interesting design decision). If you make these stateful, you'll have to ensure that you always send the conversation to the correct worker which can be sub-optimal for load balancing. I believe carrying the history in the conversation itself would be easier to handle for the full system that may leverage this pipeline. Happy to change if you'd like to allow the user to pass a simpler input, but a conversation `id` would probably be required anyway. In any case happy to expose `Conversation` to the library API if the design remains as is.
Thanks again for the feedback - looking forward to your thoughts on these few points.<|||||>Thanks a lot for the detailed answer @guillaume-be,
1. (answering to your comment above) - exactly, I would update the config on AWS
and it would then look like this:
```
{
"activation_function": "gelu_new",
"architectures": [
"GPT2LMHeadModel"
],
"attn_pdrop": 0.1,
"bos_token_id": 50256,
"embd_pdrop": 0.1,
"eos_token_id": 50256,
"initializer_range": 0.02,
"layer_norm_epsilon": 1e-05,
"model_type": "gpt2",
"n_ctx": 1024,
"n_embd": 1024,
"n_head": 16,
"n_layer": 24,
"n_positions": 1024,
"resid_pdrop": 0.1,
"summary_activation": null,
"summary_first_dropout": 0.1,
"summary_proj_to_labels": true,
"summary_type": "cls_index",
"summary_use_proj": true,
"vocab_size": 50257,
"task_specific_params" : {
"dialogue": {
"max_length": 1000
}
}
}
```
=> This way in the `__init__` of `Pipeline` the `task_specific_params` will overwrite the default params and the `max_length` will be 1000. I think this has following advantage: If the user wants to define a specific `max_length` via the model config, he could update or create a config to `model.config.task_specific_params.dialogue.max_length = 1000` and does not have to input any generate kwargs to the `__call__` function (otherwise the config would always be overwritten by 1000). As soon as we merge this PR, I can update all the DialoGPT configs with the `task_specific_params`.
2. Very nice! This indeed makes more sense and should lead to good (possible never-ending chats) :-)
3. I agree - I think I'm also in favor of your design choice here. Sadly, I don't have the necessary Rust knowledge to have a better understanding of the consequences of stateful vs. stateless, but it makes a lot of sense what you say! Maybe looping in @mfuntowicz here to see what he thinks :-) <|||||>Thanks @guillaume-be for your PR!
I left a bunch of comments, but this is in good shape, can't wait to use it on https://huggingface.co/microsoft/DialoGPT-large 😄
Re. statefulness, yes, we definitely want to keep a stateless system.
As an example, the way our https://convai.huggingface.co/ demo from last year works (and probably the way our conversational widget powered by this PR will work too 😉) is, we store an array of `Message` on the client side, grow it, and send it with each request. It's text so the bandwidth is not an issue.
Message looks like:
```
interface Message {
incoming: boolean; // <- is it from bot or user
content: string;
}
```
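For reference, a minimal sketch of how this stateless flow maps onto the Python side with the API the PR ends up exposing (the `Conversation` / `pipeline("conversational")` names assume the final merged interface, so treat this as illustrative):
```python
from transformers import Conversation, pipeline

chatbot = pipeline("conversational", model="microsoft/DialoGPT-medium")

# the caller owns the conversation state and sends the whole history with every request
conversation = Conversation("What's the last book you have read?")
conversation = chatbot(conversation)
print(conversation.generated_responses[-1])

conversation.add_user_input("Why do you recommend it?")
conversation = chatbot(conversation)
print(conversation.generated_responses[-1])
```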
<|||||>LGTM<|||||>@julien-c @patrickvonplaten I believe all comments have been addressed - please let me know if I have missed anything. Just resolved the conflict with master. Getting an error with `code quality`, not quite sure what is wrong as I did not change the torch requirement with this PR.<|||||>Given that `patrickvonplaten` is off for one more week I believe, do you want to give this PR a last look-over @sgugger and merge if it's fine?<|||||>@LysandreJik Thank you very much for the review. Good catch on the behaviour of the `eos` token cut-off. I have updated based on your suggestions, and added docstrings to the `Conversation` class. I have also added both `Conversation` and `ConversationalPipeline` to the top-level `__init__` for consistency with the other pipelines.<|||||>Thanks for the PR @guillaume-be
Docstrings could be improved but I'll clean up the docs in the pipelines file soon, so will take of that. For future PRs, please remember that `thing` will render thing in italics in the docs, and not in code (you have to use ``thing`` or :obj:`thing`).<|||||>@sgugger Thank you for the review - was indeed a typo on my end. The tests got triggered again and unfortunately a hash verification on torch fails. Could you please restart the build if you have a chance?<|||||>This is awesome, congrats everyone on shipping this! 🔥<|||||>`test_torch_conversation` and `test_integration_torch_conversation` are broken on github actions CI. Could someone fix or delete? https://github.com/huggingface/transformers/runs/1289790225?check_suite_focus=true<|||||>Will take a look.<|||||>Should be fixed in https://github.com/huggingface/transformers/pull/7970 |
transformers | 5,515 | closed | Create model card | 07-04-2020 09:37:25 | 07-04-2020 09:37:25 | ||
transformers | 5,514 | closed | added model card for ukr-roberta-base | 07-04-2020 09:28:07 | 07-04-2020 09:28:07 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5514?src=pr&el=h1) Report
> Merging [#5514](https://codecov.io/gh/huggingface/transformers/pull/5514?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/58cca47c16149e43d1b516623d59e3c5d97f695e&el=desc) will **decrease** coverage by `1.73%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5514?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5514 +/- ##
==========================================
- Coverage 77.83% 76.10% -1.74%
==========================================
Files 141 141
Lines 24634 24634
==========================================
- Hits 19175 18748 -427
- Misses 5459 5886 +427
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5514?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5514/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.62% <0.00%> (-73.11%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5514/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `20.49% <0.00%> (-55.49%)` | :arrow_down: |
| [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5514/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `61.90% <0.00%> (-33.34%)` | :arrow_down: |
| [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/5514/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `68.14% <0.00%> (-25.67%)` | :arrow_down: |
| [src/transformers/optimization\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5514/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5514/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.09% <0.00%> (+1.37%)` | :arrow_up: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5514/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+1.50%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5514/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.76% <0.00%> (+32.51%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5514/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `94.98% <0.00%> (+74.19%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5514?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5514?src=pr&el=footer). Last update [58cca47...95bbeef](https://codecov.io/gh/huggingface/transformers/pull/5514?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 5,513 | closed | Fail in some tests (with detailed description) | # ❓ Questions & Help
I have followed the README to install transformers.
However, there are some failures in the tests (mainly about `test_multigpu_data_parallel_forward`).
```
======================================================================================================================== short test summary info =========================================================================================================================
FAILED tests/test_modeling_electra.py::ElectraModelTest::test_multigpu_data_parallel_forward - RuntimeError: tensor.ndimension() == static_cast<int64_t>(expected_size.size()) INTERNAL ASSERT FAILED at /pytorch/torch/csrc/cuda/comm.cpp:225, please report a bug to ...
FAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_multigpu_data_parallel_forward - RuntimeError: Gather got an input of invalid size: got [2, 2, 4, 7, 8], but expected [2, 4, 4, 7, 8] (gather at /pytorch/torch/csrc/cuda/comm.cpp:231)
FAILED tests/test_modeling_bart.py::BartHeadTests::test_lm_forward - timeout_decorator.timeout_decorator.TimeoutError: 'Timed Out'
FAILED tests/test_modeling_ctrl.py::CTRLModelTest::test_multigpu_data_parallel_forward - RuntimeError: CUDA error: out of memory
FAILED tests/test_modeling_xlnet.py::XLNetModelTest::test_multigpu_data_parallel_forward - RuntimeError: CUDA error: out of memory
FAILED tests/test_modeling_openai.py::OpenAIGPTModelTest::test_multigpu_data_parallel_forward - RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 15.78 GiB total capacity; 5.33 MiB already allocated; 17.19 MiB free; 6.00 MiB reserved in total ...
FAILED tests/test_modeling_mobilebert.py::MobileBertModelTest::test_multigpu_data_parallel_forward - RuntimeError: CUDA error: out of memory
================================================================================================= 7 failed, 1066 passed, 562 skipped, 1396 warnings in 333.27s (0:05:33) =================================================================================================
```
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
I have tried many environments:
- conda: python3.7 + torch1.5.1 + cuda10.2
- venv: python3.7 + torch1.5.1 + cuda10.2
- conda: python3.6 + torch1.4.0 + cuda10.1
- venv: python3.6 + torch1.4.0 + cuda10.1
- conda: python3.7 + torch1.5.1 + cuda10.2 + apex
- venv: python3.6 + torch1.4.0 + cuda10.1 + apex
And I installed `transformers` **from source**.
All of them failed in 5-7 tests.
The `short test summary info` outputs of them are not all the same.
But **all** of them are related to `test_multigpu_data_parallel_forward`.
One of the outputs of `transformers-cli env` is as the following
(with the setting: venv: python3.6 + torch1.4.0 + cuda10.1 + apex):
```
The current process just got forked. Disabling parallelism to avoid deadlocks...
To disable this warning, please explicitly set TOKENIZERS_PARALLELISM=(true | false)
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 3.0.1
- Platform: Linux-4.15.0-1067-azure-x86_64-with-Ubuntu-16.04-xenial
- Python version: 3.6.11
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
For my case (just fill out the two last points),
```
- `transformers` version: 3.0.1
- Platform: Linux-4.15.0-1067-azure-x86_64-with-Ubuntu-16.04-xenial
- Python version: 3.6.11
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
```
For this case, the `short test summary info` shows:
```
======================================================================================================================== short test summary info =========================================================================================================================
FAILED tests/test_modeling_electra.py::ElectraModelTest::test_multigpu_data_parallel_forward - RuntimeError: tensor.ndimension() == static_cast<int64_t>(expected_size.size()) INTERNAL ASSERT FAILED at /pytorch/torch/csrc/cuda/comm.cpp:225, please report a bug to ...
FAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_multigpu_data_parallel_forward - RuntimeError: Gather got an input of invalid size: got [2, 2, 4, 7, 8], but expected [2, 4, 4, 7, 8] (gather at /pytorch/torch/csrc/cuda/comm.cpp:231)
FAILED tests/test_modeling_bart.py::BartHeadTests::test_lm_forward - timeout_decorator.timeout_decorator.TimeoutError: 'Timed Out'
FAILED tests/test_modeling_ctrl.py::CTRLModelTest::test_multigpu_data_parallel_forward - RuntimeError: CUDA error: out of memory
FAILED tests/test_modeling_xlnet.py::XLNetModelTest::test_multigpu_data_parallel_forward - RuntimeError: CUDA error: out of memory
FAILED tests/test_modeling_openai.py::OpenAIGPTModelTest::test_multigpu_data_parallel_forward - RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 15.78 GiB total capacity; 5.33 MiB already allocated; 17.19 MiB free; 6.00 MiB reserved in total ...
FAILED tests/test_modeling_mobilebert.py::MobileBertModelTest::test_multigpu_data_parallel_forward - RuntimeError: CUDA error: out of memory
================================================================================================= 7 failed, 1066 passed, 562 skipped, 1396 warnings in 333.27s (0:05:33) =================================================================================================
```
Some failures are about CUDA memory.
Seems weird, especially the following one.
```
FAILED tests/test_modeling_openai.py::OpenAIGPTModelTest::test_multigpu_data_parallel_forward - RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 15.78 GiB total capacity; 5.33 MiB already allocated; 17.19 MiB free; 6.00 MiB reserved in total ...
```
So I checked the output of `nvidia-smi` **before and after** the test to make sure that **no other** processes hold memory.
```
Sat Jul 4 08:49:02 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.33.01 Driver Version: 440.33.01 CUDA Version: 10.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla V100-PCIE... Off | 00000217:00:00.0 Off | 0 |
| N/A 30C P0 37W / 250W | 0MiB / 16160MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 1 Tesla V100-PCIE... Off | 00001C95:00:00.0 Off | 0 |
| N/A 30C P0 36W / 250W | 0MiB / 16160MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 2 Tesla V100-PCIE... Off | 0000735E:00:00.0 Off | 0 |
| N/A 31C P0 39W / 250W | 0MiB / 16160MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 3 Tesla V100-PCIE... Off | 0000AC50:00:00.0 Off | 0 |
| N/A 31C P0 36W / 250W | 0MiB / 16160MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
```
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
The **details of the failures** are too long to post here.
Please refer to [this page](https://github.com/xx-zhou16/tmp_issue/blob/master/README.md) for details.
## Related
I found a similar issue: #5070
I think the only difference between his environment and mine is the version of transformers.
The failures of his and mine are both about `test_multigpu_data_parallel_forward`.
But #5070 has not been solved.
----
If you need me to provide other details, please reply to this issue. I will reply ASAP.
I have been focusing on this issue recently.
I would be very glad if anyone can help solve this problem.
| 07-04-2020 08:35:14 | 07-04-2020 08:35:14 | It seems most of the tests that fail are due to `CUDA error: out of memory`?<|||||>> It seems most of the tests that fail are due to `CUDA error: out of memory`?
Yes. As the message shows, they are due to `CUDA error: out of memory`.
But I am sure that **no** other processes are using CUDA **before and after** I run the test.
My machine is with 4 Tesla V100 GPUs. And it works fine with other programs using both single and multi GPUs (e.g. fairseq).
So I am confused with the error messages.
Besides, there are some other error messages that show `invalid size` and `INTERNAL ASSERT FAILED`.
I have tried many environments. All of them fail in `test_multigpu_data_parallel_forward` due to **different** errors which are mainly `out of memory` and `invalid size`.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,512 | closed | Allow tests in examples to use cuda or fp16,if they are available | The tests in examples didn't use the Cuda or fp16 even if they were available.
- The text classification example (`run_glue.py`) didn't use fp16 even if it was available, although
the device was already picked based on availability (CUDA/CPU).
- The language-modeling example (`run_language_modeling.py`) had a `--no_cuda` argument which
made the test run without CUDA. This example has an issue when running with fp16,
so fp16 is not enabled (it got an assertion error because the perplexity came out higher).
- CUDA and fp16 are not enabled for the question-answering example (`run_squad.py`) as it produces
a different f1 score.
- The text-generation example (`run_generation.py`) will use CUDA or fp16 whenever they are available (see the sketch after this list).
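A sketch of the kind of availability check the updated tests rely on (illustrative only, not the exact code added in this PR; it assumes apex is what provides fp16 support at this point):
```python
import torch


def extra_device_args():
    # only request fp16 when a GPU is present and apex is importable
    if torch.cuda.is_available():
        try:
            import apex  # noqa: F401

            return ["--fp16"]
        except ImportError:
            return []
    return []


# e.g. testargs = ["run_glue.py", "--do_train", ...] + extra_device_args()
```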
Resolves some of #5057 | 07-04-2020 06:08:33 | 07-04-2020 06:08:33 | @sshleifer Hope you are doing well. sorry for the delay, I have created the PR for some of the related issues which were mentioned in #5057.
> 1. They never use Cuda or fp16, even if they are available.
I have some doubts which I encountered when making this PR:
---
**1.** As you had said, there were some tests which failed when enabling Cuda or fp16:
- The Cuda and fp16 are not enabled for question-answering example (`run_squad.py`) as it is having a difference in the f1 score.
- The language-modeling example (`run_language_modeling.py`) is having an issue when running with fp16.
---
**2.** I was not able to find the `test_hans.py` but was able to find a readme to run it. Is this intentional if not shall I have a `test_hans.py` file to run the same.

---
**3.** This is the list of tests which I got
```bash
$ ls -l examples/**/test*.py
examples/bert-loses-patience/test_run_glue_with_pabee.py
examples/seq2seq/bertabs/test_utils_summarization.py
examples/seq2seq/test_seq2seq_examples.py
examples/test_examples.py
examples/token-classification/test_ner_examples.py
```
I was not able to find the `test_summarization_examples.py` and `test_t5_examples.py`. I think I am doing something wrong.
---
**4.** I have made the PR for some tests only; I can do the same for others if the current PR satisfies your requirement. <|||||>1) noted
2)test_hans.py would be nice, but can be added in a separate PR.
3) You are all set, nothing is wrong.
4) Would love to see the current PR!
Thanks!<|||||>> 2)test_hans.py would be nice, but can be added in a separate PR.
The test_hans.py was created once via the PR https://github.com/huggingface/transformers/pull/2239 but I think somehow or for some reason, it was removed
last interaction with the file was in the PR https://github.com/huggingface/transformers/pull/4213<|||||>
also I missed this earlier, but what was the issue with run_language_modeling.py? Can you get a traceback?<|||||>> also I missed this earlier, but what was the issue with run_language_modeling.py? Can you get a traceback?
This is the short log and I have attached the full log with this comment.
```log
with patch.object(sys, "argv", testargs):
result = run_language_modeling.main()
> self.assertLess(result["perplexity"], 35)
E AssertionError: 36.684365356893885 not less than 35
```
[test.log](https://github.com/huggingface/transformers/files/4955269/test.log)
<|||||>@sshleifer Hope you are doing well.
Is this PR ok or Should I split this into multiple smaller PRs to make things easier?<|||||>Thanks for the review @sshleifer, I have updated the code accordingly and regarding the new command line args, I have removed the args which were related to the multi-GPU support as we can make that a separate PR.
**Any Idea why the test `run_tests_tf` and `run_tests_torch_and_tf` failing? Is it related to my change?**<|||||>Thanks for the review @LysandreJik, I have updated the PR accordingly.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5512?src=pr&el=h1) Report
> Merging [#5512](https://codecov.io/gh/huggingface/transformers/pull/5512?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d9cbc0350deaa7e146a8c8dbb6ad4dc9bd6afc4f?el=desc) will **decrease** coverage by `0.72%`.
> The diff coverage is `78.87%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5512?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5512 +/- ##
==========================================
- Coverage 80.37% 79.64% -0.73%
==========================================
Files 156 156
Lines 28058 28261 +203
==========================================
- Hits 22552 22509 -43
- Misses 5506 5752 +246
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5512?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/commands/serving.py](https://codecov.io/gh/huggingface/transformers/pull/5512/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9zZXJ2aW5nLnB5) | `0.00% <0.00%> (ø)` | |
| [src/transformers/commands/user.py](https://codecov.io/gh/huggingface/transformers/pull/5512/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy91c2VyLnB5) | `0.00% <ø> (ø)` | |
| [src/transformers/configuration\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/5512/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `100.00% <ø> (ø)` | |
| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/5512/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `48.91% <0.00%> (-0.18%)` | :arrow_down: |
| [src/transformers/data/test\_generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5512/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Rlc3RfZ2VuZXJhdGlvbl91dGlscy5weQ==) | `0.00% <0.00%> (ø)` | |
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5512/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.94% <ø> (ø)` | |
| [src/transformers/hf\_argparser.py](https://codecov.io/gh/huggingface/transformers/pull/5512/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9oZl9hcmdwYXJzZXIucHk=) | `67.74% <0.00%> (-1.49%)` | :arrow_down: |
| [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5512/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `78.73% <ø> (ø)` | |
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5512/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.42% <ø> (+0.16%)` | :arrow_up: |
| [src/transformers/modeling\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5512/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `82.13% <ø> (ø)` | |
| ... and [58 more](https://codecov.io/gh/huggingface/transformers/pull/5512/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5512?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5512?src=pr&el=footer). Last update [a573777...51be82f](https://codecov.io/gh/huggingface/transformers/pull/5512?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,511 | closed | example code missing `encode` | following example, and realized by running
```
In [27]: encoding = tokenizer(text_batch, return_tensors='pt', padding=True, truncation=True)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-27-c8731cd264f2> in <module>
----> 1 encoding = tokenizer(text_batch, return_tensors='pt', padding=True, truncation=True)
TypeError: 'BertTokenizer' object is not callable
``` | 07-04-2020 04:55:22 | 07-04-2020 04:55:22 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5511?src=pr&el=h1) Report
> Merging [#5511](https://codecov.io/gh/huggingface/transformers/pull/5511?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/58cca47c16149e43d1b516623d59e3c5d97f695e&el=desc) will **decrease** coverage by `0.41%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5511?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5511 +/- ##
==========================================
- Coverage 77.83% 77.42% -0.42%
==========================================
Files 141 141
Lines 24634 24634
==========================================
- Hits 19175 19072 -103
- Misses 5459 5562 +103
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5511?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5511/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `61.90% <0.00%> (-33.34%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5511/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.69% <0.00%> (-29.45%)` | :arrow_down: |
| [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/5511/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `68.14% <0.00%> (-25.67%)` | :arrow_down: |
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5511/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5511/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `82.99% <0.00%> (-6.13%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5511/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `79.94% <0.00%> (-5.27%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5511/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `88.05% <0.00%> (-1.26%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5511/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `92.44% <0.00%> (-1.17%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5511/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.76% <0.00%> (+32.51%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5511?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5511?src=pr&el=footer). Last update [58cca47...68a7acd](https://codecov.io/gh/huggingface/transformers/pull/5511?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>found out the doc, calling it should work. |
transformers | 5,510 | closed | Fix typo in training | Fixes a small typo in the code example.
Ref code: https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L1211 | 07-04-2020 03:17:13 | 07-04-2020 03:17:13 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5510?src=pr&el=h1) Report
> Merging [#5510](https://codecov.io/gh/huggingface/transformers/pull/5510?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/58cca47c16149e43d1b516623d59e3c5d97f695e&el=desc) will **decrease** coverage by `0.41%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5510?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5510 +/- ##
==========================================
- Coverage 77.83% 77.42% -0.41%
==========================================
Files 141 141
Lines 24634 24634
==========================================
- Hits 19175 19074 -101
- Misses 5459 5560 +101
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5510?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.92% <0.00%> (-68.47%)` | :arrow_down: |
| [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.59% <0.00%> (+0.33%)` | :arrow_up: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+1.25%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5510/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.76% <0.00%> (+32.51%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5510?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5510?src=pr&el=footer). Last update [58cca47...121648f](https://codecov.io/gh/huggingface/transformers/pull/5510?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks a lot! |
transformers | 5,509 | closed | TPU Trainer memory leak and memory requirements | # 🐛 Bug
The TPU Trainer for PyTorch is extremely memory inefficient, as it eats up a lot of CPU memory while training. Moreover, the training pipeline has a memory leak somewhere that leads to increased memory utilization as the training proceeds.
## Information
I am using a huge machine (n1-highmem-32) with 208 GB of RAM. I am trying to train seq-seq models like T5-base and BART-base with 880,000 examples. The problem is that with xla_spawn.py method of training models, multiprocessing starts 8 processes that all load up the features separately, resulting in 8x the memory usage than if they all shared the memory. Also this scaling of memory restricts what TPU one can use, I cannot imagine having to use a 1 TB RAM machine just to use v2-32 instance with 32 processes! I am trying to productionize a training pipeline with TPUs and the memory scaling problem makes it really hard and costly. TPUs are designed for training really large models and with large datasets but this memory issue is a large bottleneck against it. I could train with larger datasets but the memory usage prevents me.
In the images below one can see that for BART training, the memory utilization at the beginning is 80% but it rises to 100% in 14 hours, killing the training. I have to train the model 3 times to make progress.


This is the memory usage with T5. It starts with 45% and ends with 75%.

Could you consider rearchitecting the TPU Trainer so that it does not takes humongous amounts of RAM? Maybe a streaming or lazy loading architecture that only loads the features as and when they are required? Also multiprocessing is inherently not scalable, does one launch 1024 processes to train on a 1024 Core TPU Pod? I have made a TPU Training pipeline with TensorFlow 1 previously and it worked spectacularly effectively with its architecture of TFRecords and Tensorflow handling all data streaming through Google storage. The pipeline performance scaled linearly with the number of TPU Cores without consuming large memory. (Image attached below).

A scalable pipeline would make Huggingface a real contender for production level TPU PyTorch training pipelines.
Model I am using (Bert, XLNet ...): T5, BART
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Use the official example at https://github.com/huggingface/transformers/tree/master/examples#running-on-tpus
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
Memory usage contained and does not increase with time.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.11.0
- Platform: Linux-5.3.0-1026-gcp-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0a0+4121d34 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: Yes
| 07-03-2020 23:32:29 | 07-03-2020 23:32:29 | Hi @misrasaurabh1 I have also faced this issue. One immediate solution is to use lazy loading dataset, as xla by default loads the dataset in all processes. Something like `nlp` can really help here. as posted by @thomwolf here https://gist.github.com/thomwolf/13ca2b2b172b2d17ac66685aa2eeba62
`nlp` can load full english wikipedia dataset (17 GB+) in just 9 MB of RAM. However I've not been able to use it with TPU's yet.<|||||>Very interesting, if one could load data in a lazy-loading manner using `nlp` it would help a lot! Although I am not sure if we can use our custom datasets with `nlp`.
If there was a way to use a shared memory between different processes or each process reading independently from memory-mapped on drive, that could solve this problem.<|||||>It's possible to use custom datasets with `nlp`. This section can help you https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-add-a-dataset
We can create dataset as described in above steps, and then just pass the path of created dataset to `nlp.load_dataset` <|||||>Although I still want to highlight that this bug is still unfixed and a big problem. Drawing attention from the HuggingFace team!<|||||>Update on how I solved the large memory usage. Because I just wanted one copy of the features in memory, so I used a redis-server locally to cache the features in memory in a separate process. I am also using unix sockets to make connections faster. Items stored in Redis have keys equal to the array index and value equal to the pickled feature dictionary. In the __getitem__ I am unpickling the data on the fly.
This reduced memory consumption from 100s of GBs to 30GB. Surprisingly, the training also became 20% faster after using redis-server.<|||||>Thanks for sharing this @misrasaurabh1 <|||||>Thanks for letting us know @misrasaurabh1! We're very close to having TPU CI, only a few days away to a week, so we'll be able to monitor such metrics more closely.<|||||>Although the memory still keeps increasing as the training progresses. If it becomes too much of an issue for a reliable training pipeline I can dig deeper.<|||||>The memory spikes up everytime evaluation loop happens.

<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
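A minimal sketch of the Redis-backed dataset approach described in the comments above (illustrative names; it assumes a local `redis-server` listening on a unix socket and the `redis` Python client, with features populated once before the TPU workers are spawned):

```python
import pickle

import redis
import torch
from torch.utils.data import Dataset


class RedisFeatureDataset(Dataset):
    """Serves pre-tokenized features out of one shared Redis instance
    instead of holding a full copy in every worker process."""

    def __init__(self, num_examples, socket_path="/tmp/redis.sock"):
        self.num_examples = num_examples
        # Unix-socket connection, as in the comment above; path depends on redis.conf.
        self.db = redis.Redis(unix_socket_path=socket_path)

    def __len__(self):
        return self.num_examples

    def __getitem__(self, idx):
        # Key = example index, value = pickled feature dict, unpickled on the fly.
        feature = pickle.loads(self.db.get(str(idx)))
        return {k: torch.tensor(v) for k, v in feature.items()}


def populate(features, socket_path="/tmp/redis.sock"):
    # Run once, in a single process, before spawning the TPU workers.
    db = redis.Redis(unix_socket_path=socket_path)
    for idx, feature in enumerate(features):
        db.set(str(idx), pickle.dumps(feature))
```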
transformers | 5,508 | closed | T5 Masking: | While this works, the outputs are not according to what is specified in "model.generate."
```
from transformers import T5Tokenizer, T5ForConditionalGeneration

model_name = 't5-base'  # assumption: the issue does not specify which checkpoint was used
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# Input text with a single sentinel span to fill in
text = 'This <extra_id_0> sentence. </s>'
encoded = tokenizer.encode_plus(text, add_special_tokens=True, return_tensors='pt')
input_ids = encoded['input_ids']

# Generating 20 sequences, each between 5 and 10 tokens long
outputs = model.generate(input_ids=input_ids,
                         num_beams=200, num_return_sequences=20,
                         min_length=5, max_length=10)

_0_index = text.index('<extra_id_0>')
_result_prefix = text[:_0_index]
_result_suffix = text[_0_index + 12:]  # 12 is the length of '<extra_id_0>'

def _filter(output, end_token='<extra_id_1>'):
    # The first token is <unk> (indexed at 0) and the second token is <extra_id_0> (indexed at 32099)
    _txt = tokenizer.decode(output[2:], skip_special_tokens=False, clean_up_tokenization_spaces=False)
    if end_token in _txt:
        _end_token_index = _txt.index(end_token)
        return _result_prefix + _txt[:_end_token_index] + _result_suffix
    else:
        return _result_prefix + _txt + _result_suffix

results = list(map(_filter, outputs))
```
Example Output:
`This perception holds no validity.`
As can be seen, "holds no" is only two words even though I set "min_length" to five. | 07-03-2020 22:20:00 | 07-03-2020 22:20:00 | `min_length` corresponds to the minimum number of tokens, not words.
Feel free to reopen if this does not answer your question. |
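A quick way to see the tokens-versus-words distinction is to count the subword tokens in the generated span (a hedged sketch; exact token counts depend on the checkpoint's vocabulary, here assumed to be `t5-base`):

```python
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained('t5-base')  # assumed checkpoint
generated = 'This perception holds no validity.'
# min_length/max_length in generate() count subword tokens like these,
# not whitespace-separated words.
tokens = tokenizer.tokenize(generated)
print(tokens)
print(len(tokens))
```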
transformers | 5,507 | closed | What's the correct way to use add_prefix_space for the fast RoBERTa tokenizer in 3.0.0/3.0.1? | The documentation says we should pass this flag to **the encoding methods**.
https://github.com/huggingface/transformers/blob/58cca47c16149e43d1b516623d59e3c5d97f695e/src/transformers/tokenization_roberta.py#L259-L260
However, the encoding methods don't take such an argument, and passing it in actually causes an error:
```
File "test.py", line 127, in func
self.tokenizer.encode(token, add_special_tokens=False, add_prefix_space=True)
File ".../transformers/tokenization_utils_base.py", line 1425, in encode
**kwargs,
File ".../transformers/tokenization_utils_base.py", line 1737, in encode_plus
**kwargs,
File ".../transformers/tokenization_gpt2.py", line 377, in _encode_plus
return super()._encode_plus(*args, **kwargs)
File ".../transformers/tokenization_utils_fast.py", line 420, in _encode_plus
**kwargs,
File ".../transformers/tokenization_gpt2.py", line 367, in _batch_encode_plus
return super()._batch_encode_plus(*args, **kwargs)
File ".../transformers/tokenization_utils_fast.py", line 313, in _batch_encode_plus
raise ValueError(f"Keyword arguments {kwargs} not recognized.")
ValueError: Keyword arguments {'add_prefix_space': True} not recognized.
```
Then, I assumed that this documentation was a typo and we should pass this argument to `__init__`. But why does it default to `False` in `__init__`? Don't we pretty much always need to add a prefix space? Does `from_pretrained` set it to `True` automatically? If not, should we always do `AutoTokenizer.from_pretrained(..., add_prefix_space=True)`?
https://github.com/huggingface/transformers/blob/58cca47c16149e43d1b516623d59e3c5d97f695e/src/transformers/tokenization_roberta.py#L299-L314 | 07-03-2020 21:54:12 | 07-03-2020 21:54:12 | Thanks for flagging, the docstrings were indeed very confusing!
I hope the new doc introduced in #5559 is clearer and solves your problem. Please reopen if needed.<|||||>Thanks for the clarification in the new doc! Does it mean RoBERTa was pre-trained **without** this add_prefix_space behavior? In other words, in the original model, does it encode "Hello world" with 2 or 3 tokens, excluding special tokens? If it was pre-trained without adding the prefix space, does it mean the **fast tokenizers** in the pre-3.0.0 transformers versions were adding this prefix space incorrectly?<|||||>E.g. in 2.11.0
```
>>> from transformers import RobertaTokenizerFast
>>> tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
>>> tokenizer.tokenize("Hello world")
['ĠHello', 'Ġworld']
```<|||||>I think it was incorrect, yes, since it currently returns `['Hello', 'Ġworld']` like the slow tokenizer. Multiple bugs in fast tokenizer were fixed in 3.0.0. |
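A minimal sketch of the init-time usage discussed above — the flag is passed to `from_pretrained`/`__init__`, not to the encoding calls; the expected outputs in the comments are taken from the thread itself:

```python
from transformers import RobertaTokenizerFast

# Default behaviour in 3.0+: no prefix space is added by the tokenizer itself.
tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
print(tokenizer.tokenize("Hello world"))            # ['Hello', 'Ġworld'] per the reply above

# Opting in at construction time instead of at encode time.
tokenizer_prefixed = RobertaTokenizerFast.from_pretrained("roberta-base", add_prefix_space=True)
print(tokenizer_prefixed.tokenize("Hello world"))   # expected: ['ĠHello', 'Ġworld']
```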
transformers | 5,506 | closed | Why is `encoder_extended_attention_mask = None` when `config.is_decoder == False` | Potential Bug(?)
Reading the codebase I see that attention masks are ignored for many of the pretrained model configs such as `'bert-base-uncased'`. We can see [here](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L743) that the attention mask is simply cleared out. Is this intentional?
```
from transformers import BertModel
config_path = 'bert-base-uncased'
config = BertModel.config_class.from_pretrained(config_path)
print(f'is_decoder: {config.is_decoder}')
```
outputs False | 07-03-2020 20:29:16 | 07-03-2020 20:29:16 | The `encoder_attention_mask` is only relevant if BERT is used as an Encoder-Decoder model using the `EncoderDecoderModel` wrapper class. In this case the decoder should be able to accept an `encoder_attention_mask` for its cross-attention layers.
In all other cases this mask is not relevant and should be set to None.
I agree that the check `if self.is_decoder` is probably not the best one here; it should rather be `if self.is_encoder_decoder and self.is_decoder`. Will update this soon.
Feel free to reopen if this does not answer your question<|||||>Hi Patrick, thanks for the swift response. I’m not sure if I understand: shouldn’t we always want to mask the padded tokens, even in the encoder?
In fact the canonical BERT model suggests this, where they have no such check: https://github.com/google-research/bert/blob/master/modeling.py#L200<|||||>@patrickvonplaten Sorry for the noise. Noticed you said to reopen the issue but I think only maintainers have this permission :)<|||||>This `encoder_attention_mask` is only relevant for a Bert EncoderDecoder model. It is not the same as the usual `attention_mask`.<|||||>Ah, I see. Looking again at the code I definitely misunderstood that. Thanks a ton. |
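To make the distinction concrete, here is a hedged sketch (assuming `bert-base-uncased`): the ordinary padding mask is the `attention_mask` returned by the tokenizer and is always honoured, regardless of `config.is_decoder`; `encoder_attention_mask` is a separate argument that only matters in the encoder-decoder setting.

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

# Padding mask: 1 for real tokens, 0 for padding. This is the mask that
# hides padded positions in the encoder and is never cleared out.
inputs = tokenizer(["a short sentence", "an even slightly longer sentence"],
                   padding=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(input_ids=inputs["input_ids"],
                    attention_mask=inputs["attention_mask"])
```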
transformers | 5,505 | closed | 3.0.1 BertTokenizer batch_encode_plus() shows warnings "Truncation was not explicitely activated but `max_length` is provided a specific value" | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): I'm using Transformers 3.0.1 and the BERT model to do two-sentence NLI-style classification.
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
I have a question and two issues:
1. What happened to the `BertTokenizer.encode_plus()` and `BertTokenizer.batch_encode_plus()` methods? I see there must have been a change somewhere to remove them in Transformers 3.0.0, but I cannot find any online change log or other description of what the replacement methods are.
2. This issue I am getting is that there are a lot of repeated messages stating
> Truncation was not explicitely activated but `max_length` is provided a specific value, please use `truncation=True` to explicitely truncate examples to max length. Defaulting to 'longest_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you can select this strategy more precisely by providing a specific strategy to `truncation`.
This issue is apparently the same as the closed issue #5377. However, that one was with respect to `encode_plus ()`, **but my issue is with `batch_encode_plus()`**.
3. Please fix the typo: `Truncation was not explicitely ...` to be `Truncation was not explicitly ...`.
## To reproduce
Steps to reproduce the behavior for `batch_encode_plus()`:
I adapted the code from #5377:
```python
import transformers
print(transformers.__version__)
# 3.0.1
from transformers import BertTokenizer
max_len = 50
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
# This example works fine with encode_plus().
text = 'Transformers are more than meets the eye'
encoded_dict = tokenizer.encode_plus(text,
                                     text_pair=None,
                                     add_special_tokens=True,
                                     max_length=max_len,
                                     pad_to_max_length=False,
                                     truncation='longest_first')

# This call to batch_encode_plus() shows warnings.
list_of_pair_of_string = []
list_of_pair_of_string.append(('Transformers are', 'more than meets the eye'))

encoded_dict = tokenizer.batch_encode_plus(batch_text_or_text_pairs=list_of_pair_of_string,
                                           add_special_tokens=True,
                                           max_length=max_len,
                                           pad_to_max_length=False,
                                           truncation='longest_first'
                                           # truncation=True
                                           )
```
Note that for my call to `batch_encode_plus()`, I tried both `truncation='longest_first'` and also `truncation=True`.
However, the call always shows:
> Truncation was not explicitely activated but `max_length` is provided a specific value, please use `truncation=True` to explicitely truncate examples to max length. Defaulting to 'longest_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you can select this strategy more precisely by providing a specific strategy to `truncation`.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
The call to `batch_encode_plus()` should not show any warning because I specifically provided the `truncation=` parameter.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.1
- Platform: Google Colab
- Python version: 3.6.9 (default, Apr 18 2020, 01:56:04)
- PyTorch version (GPU?): 1.5.1+cu101
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| 07-03-2020 19:54:07 | 07-03-2020 19:54:07 | same problem here<|||||>same problem here<|||||>I bumped into the same problem. After plenty of warning messages my Colab got stuck and the web page didn't respond. As a temporary solution I installed the previous version of the transformers library (```!pip install transformers==3.0.0```) and everything with ```BertTokenizer.batch_encode_plus()``` is okay.<|||||>The massive amount of generated warnings is crashing the Google Colab session in my Chrome browser. The only temporary fix I can find is to turn off all warning messages.
```python
import logging
logging.basicConfig(level=logging.ERROR)
```<|||||>same even when the `batch_encode_plus` method is not called (directly calling `tokenizer()` with a list of string)<|||||>Ok indeed, we are on it and will release a fix soon. Sorry for that.<|||||>Tested in 3.0.2 - warning persists.<|||||>Can you post the command line (tokenizer call) you are running?<|||||>Same here, have tried myriad combinations to try to eliminate the warning.
```
model = Seq2SeqModel(
    encoder_type,
    "roberta-base",
    "bert-base-cased",
    args=model_args,
    truncation=True,
    use_cuda=cuda_available
)
```
> WARNING:transformers.tokenization_utils_base:Truncation was not explicitely activated but `max_length` is provided a specific value, please use `truncation=True` to explicitely truncate examples to max length. Defaulting to 'longest_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you can select this strategy more precisely by providing a specific strategy to `truncation`.
> WARNING:transformers.tokenization_utils_base:Truncation was not explicitely activated but `max_length` is provided a specific value, please use `truncation=True` to explicitely truncate examples to max length. Defaulting to 'longest_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you can select this strategy more precisely by providing a specific strategy to `truncation`.
> WARNING:transformers.tokenization_utils_base:Truncation was not explicitely activated but `max_length` is provided a specific value, please use `truncation=True` to explicitely truncate examples to max length. Defaulting to 'longest_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you can select this strategy more precisely by providing a specific strategy to `truncation`.
> WARNING:transformers.tokenization_utils_base:Truncation was not explicitely activated but `max_length` is provided a specific value, please use `truncation=True` to explicitely truncate examples to max length. Defaulting to 'longest_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you can select this strategy more precisely by providing a specific strategy to `truncation`.
<|||||>I can't find the class `Seq2SeqModel` on our repo so I can't reproduce your code.
You need to open a new issue with all the details so I can try to reproduce and debug this.<|||||>Should only print it once |
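A narrower variant of the logging workaround above — assuming the logger name shown in the quoted warnings (`transformers.tokenization_utils_base`) — silences only the tokenizer warnings instead of turning off all messages:

```python
import logging

# The quoted warnings come from this logger, so lower just its verbosity.
logging.getLogger("transformers.tokenization_utils_base").setLevel(logging.ERROR)
```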
transformers | 5,504 | closed | Write With Transformers | # 🐛 Bug
## Information
Model: XLNet
Outputs are extremely unusual, and I suspect there is an issue.
| 07-03-2020 19:44:07 | 07-03-2020 19:44:07 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,503 | closed | T5 Training on TPU doesn't use TPU | I am trying to train T5 on a TPU. Training started without errors, but it is very slow; I think it is not using the TPU backend.
https://colab.research.google.com/drive/10TN0zgPWCIAzbA0PKYIo3fF8wOvfa1P7?usp=sharing | 07-03-2020 15:36:39 | 07-03-2020 15:36:39 | Hi @santhoshkolloju what is the xla and pytorch version ?<|||||>Pytorch - 1.7.0a0+542ac74
torch_xla - 1.6+30b65e9
The setup file is taken from
VERSION = "nightly" #@param ["1.5" , "20200325", "nightly"]
!curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py
!python pytorch-xla-env-setup.py --version $VERSION<|||||>@patil-suraj I just realised that I was using the notebook created by you. Thanks for the notebook.
Has anything changed in the current xla library that could cause this?<|||||>Hi @santhoshkolloju, yes there have been some changes to xla, I'm working on the fix, will let you know when I find it.
Also looks like you are trying to do question generation, I am going to release my own experiments by the end of the week. It's pretty exciting. Here's a relevant thread for QG #4399<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
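One way to sanity-check that a notebook is actually running on the TPU backend (a hedged sketch; it assumes `torch_xla` is installed as in the setup cell quoted above):

```python
import torch_xla.core.xla_model as xm

device = xm.xla_device()    # raises if no XLA/TPU backend is available
print(device)               # e.g. an xla:* device string
print(xm.xrt_world_size())  # world size seen by this process (1 outside xmp.spawn)
```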
transformers | 5,502 | closed | licens | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
Language I am using the model on (English, Chinese ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1.
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
| 07-03-2020 14:18:28 | 07-03-2020 14:18:28 | |
transformers | 5,501 | closed | Merge pull request #1 from huggingface/master | Updated | 07-03-2020 13:35:26 | 07-03-2020 13:35:26 |