repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---
transformers | 4,800 | closed | [isort] add matplotlib to known 3rd party dependencies | Many people (like me) have it installed locally, so this will synchronize local isort and circleci. | 06-05-2020 17:55:00 | 06-05-2020 17:55:00 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4800?src=pr&el=h1) Report
> Merging [#4800](https://codecov.io/gh/huggingface/transformers/pull/4800?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/acaa2e6267ebfda9814795fa00b6ad86c35ea5d6&el=desc) will **increase** coverage by `2.09%`.
> The diff coverage is `n/a`.
```diff
@@ Coverage Diff @@
## master #4800 +/- ##
==========================================
+ Coverage 74.59% 76.68% +2.09%
==========================================
Files 128 128
Lines 21500 21500
==========================================
+ Hits 16037 16488 +451
+ Misses 5463 5012 -451
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4800?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4800/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (-0.42%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4800/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.15% <0.00%> (+0.12%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4800/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.41% <0.00%> (+0.15%)` | :arrow_up: |
| [src/transformers/benchmark/benchmark\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4800/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `43.57% <0.00%> (+0.35%)` | :arrow_up: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4800/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+1.17%)` | :arrow_up: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4800/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.03% <0.00%> (+1.35%)` | :arrow_up: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4800/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.69% <0.00%> (+2.29%)` | :arrow_up: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4800/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.64% <0.00%> (+2.71%)` | :arrow_up: |
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/4800/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `40.67% <0.00%> (+3.87%)` | :arrow_up: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4800/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.69% <0.00%> (+10.04%)` | :arrow_up: |
| ... and [3 more](https://codecov.io/gh/huggingface/transformers/pull/4800/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4800?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4800?src=pr&el=footer). Last update [acaa2e6...650208b](https://codecov.io/gh/huggingface/transformers/pull/4800?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Awesome, thanks @sshleifer - I should have added this when adding the benchmarks! |
transformers | 4,799 | closed | [cleanup] consolidate some prune_heads logic | factors out 5 repetitions of the following logic
```python
def find_pruneable_heads_and_indices(
heads: List, n_heads: int, head_size: int, already_pruned_heads: set
) -> Tuple[set, "torch.LongTensor"]:
mask = torch.ones(n_heads, head_size)
heads = set(heads) - already_pruned_heads # Convert to set and remove already pruned heads
for head in heads:
# Compute how many pruned heads are before the head and move the index accordingly
head = head - sum(1 if h < head else 0 for h in already_pruned_heads)
mask[head] = 0
mask = mask.view(-1).contiguous().eq(1)
index: torch.LongTensor = torch.arange(len(mask))[mask].long()
return heads, index
``` | 06-05-2020 17:52:28 | 06-05-2020 17:52:28 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4799?src=pr&el=h1) Report
> Merging [#4799](https://codecov.io/gh/huggingface/transformers/pull/4799?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/56d5d160cdd177ae6e644506535b56e79feccf68&el=desc) will **decrease** coverage by `0.77%`.
> The diff coverage is `91.30%`.
```diff
@@ Coverage Diff @@
## master #4799 +/- ##
==========================================
- Coverage 76.15% 75.38% -0.78%
==========================================
Files 128 128
Lines 21497 21464 -33
==========================================
- Hits 16371 16180 -191
- Misses 5126 5284 +158
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4799?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `31.93% <50.00%> (-53.83%)` | :arrow_down: |
| [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `84.26% <50.00%> (+0.99%)` | :arrow_up: |
| [src/transformers/modeling\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `76.33% <100.00%> (-0.33%)` | :arrow_down: |
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `86.27% <100.00%> (-0.16%)` | :arrow_down: |
| [src/transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/4799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.52% <100.00%> (-0.05%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.96% <100.00%> (+0.96%)` | :arrow_up: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.02% <100.00%> (-0.14%)` | :arrow_down: |
| [src/transformers/modeling\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/4799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `89.27% <100.00%> (-0.16%)` | :arrow_down: |
| ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/4799/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4799?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4799?src=pr&el=footer). Last update [56d5d16...bfb3251](https://codecov.io/gh/huggingface/transformers/pull/4799?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Nice! LGTM |
transformers | 4,798 | closed | [ctrl] has broken code for pruning that is not tested | references an `attn` parameter that should be `multiheaded_attn`. Easy-ish fix.
| 06-05-2020 17:35:45 | 06-05-2020 17:35:45 | Yeah change
```python
self.h[layer].attn.prune_heads(heads)
```
to
```python
self.h[layer].multi_head_attention.prune_heads(heads)
```
in modeling_ctrl.py
and set `test_pruning=True` in `CTRLModelTest`<|||||>@sshleifer
I made the above-mentioned changes, however the pruning test still failed. There is no function called `prune_heads` implemented in the `MultiHeadAttention` class.
In `modelling_xlm.py` I do observe the implementation of `prune_heads`.
Should I just raise a PR with the above-mentioned changes or look into implementing `prune_heads` for `ctrl`(Would require some help there 😓)?

<|||||>You could try to get that test passing or
work on https://github.com/huggingface/transformers/issues/4902, which is easier in my opinion.
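For illustration only, here is a rough sketch of what such a `prune_heads` method could look like, modelled on the helper quoted under #4799 earlier in this file. It is an assumption, not the actual CTRL code: the attribute names (`Wq`, `Wk`, `Wv`, `dense`, `num_heads`, `depth`, `pruned_heads`) are placeholders for whatever the real `MultiHeadAttention` class exposes.
```python
from transformers.modeling_utils import find_pruneable_heads_and_indices, prune_linear_layer

# Hypothetical method for CTRL's MultiHeadAttention -- a sketch, not a tested implementation.
def prune_heads(self, heads):
    if len(heads) == 0:
        return
    heads, index = find_pruneable_heads_and_indices(
        heads, self.num_heads, self.depth, self.pruned_heads
    )
    # Remove the pruned heads from the query/key/value projections and the output projection
    self.Wq = prune_linear_layer(self.Wq, index)
    self.Wk = prune_linear_layer(self.Wk, index)
    self.Wv = prune_linear_layer(self.Wv, index)
    self.dense = prune_linear_layer(self.dense, index, dim=1)
    # Update bookkeeping so that repeated pruning stays consistent
    self.num_heads = self.num_heads - len(heads)
    self.pruned_heads = self.pruned_heads.union(heads)
```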
|
transformers | 4,797 | closed | Write With Transformer Request: | # 🚀 Feature request
I understand if this is unattainable for cost reasons, but I was wondering if you could replace the three autocomplete suggestions with five. | 06-05-2020 17:27:50 | 06-05-2020 17:27:50 | This wouldn't change the unit economics of providing this service, but I'm curious, why don't you press tab twice instead?<|||||>The previous suggestions disappear after I press tab.
The more options I have to compare, the easier it is to pick a suitable autocompletion. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,796 | closed | Ignore simlink | Didn't get an answer to my question on #4774 so asking again in the form of a PR ;-)
Currently, building the docs requires making a symlink to the examples README (as per the [instructions](https://github.com/huggingface/transformers/tree/master/docs#building-the-documentation)), and that file then becomes untracked by git. We should either ignore it (as proposed in this PR) or add it once and for all (might be OS-dependent though).
Happy to amend this PR to the second solution, I just don't like untracked files. :-) | 06-05-2020 17:20:49 | 06-05-2020 17:20:49 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4796?src=pr&el=h1) Report
> Merging [#4796](https://codecov.io/gh/huggingface/transformers/pull/4796?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/acaa2e6267ebfda9814795fa00b6ad86c35ea5d6&el=desc) will **increase** coverage by `1.47%`.
> The diff coverage is `n/a`.
```diff
@@ Coverage Diff @@
## master #4796 +/- ##
==========================================
+ Coverage 74.59% 76.06% +1.47%
==========================================
Files 128 128
Lines 21500 21500
==========================================
+ Hits 16037 16355 +318
+ Misses 5463 5145 -318
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4796?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4796/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (-0.42%)` | :arrow_down: |
| [src/transformers/benchmark/benchmark\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4796/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `43.57% <0.00%> (+0.35%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4796/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.36% <0.00%> (+1.10%)` | :arrow_up: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4796/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+1.17%)` | :arrow_up: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4796/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.69% <0.00%> (+2.29%)` | :arrow_up: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4796/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.64% <0.00%> (+2.71%)` | :arrow_up: |
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/4796/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `40.67% <0.00%> (+3.87%)` | :arrow_up: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4796/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.69% <0.00%> (+10.04%)` | :arrow_up: |
| [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4796/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `71.83% <0.00%> (+40.50%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4796/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0.00%> (+81.20%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4796?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4796?src=pr&el=footer). Last update [acaa2e6...9ce1f67](https://codecov.io/gh/huggingface/transformers/pull/4796?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>I let @LysandreJik check here, he knows better how to deal with the docs :-) <|||||>Yes this sounds like a Windows-specific issue :)<|||||>Well having the symlink in the repo would certainly cause issues on Windows (since of course Linux symlinks are incompatible with Windows ones), but it's not linked to Windows, just to the doc-building setup :-p . <|||||>I think this symbolic link is redundant with the `docs/source/examples.md@`. Since we committed the symbolic link, it would probably just be better to remove the instruction from the doc installation?<|||||>Eh, since I don't know how to read, I did not run the command in the right folder, hence the untracked file.
So yes, since the symbolic link is in the repo, there is no need to do anything! Closing this. |
transformers | 4,795 | closed | Explain how to preview the docs in a PR | As discussed offline, add the instructions to check how to preview the docs. | 06-05-2020 17:14:50 | 06-05-2020 17:14:50 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4795?src=pr&el=h1) Report
> Merging [#4795](https://codecov.io/gh/huggingface/transformers/pull/4795?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/acaa2e6267ebfda9814795fa00b6ad86c35ea5d6&el=desc) will **decrease** coverage by `0.63%`.
> The diff coverage is `n/a`.
```diff
@@ Coverage Diff @@
## master #4795 +/- ##
==========================================
- Coverage 74.59% 73.95% -0.64%
==========================================
Files 128 128
Lines 21500 21500
==========================================
- Hits 16037 15900 -137
- Misses 5463 5600 +137
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4795?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4795/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `17.54% <0.00%> (-75.97%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/4795/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `28.15% <0.00%> (-63.03%)` | :arrow_down: |
| [src/transformers/configuration\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4795/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `75.00% <0.00%> (-25.00%)` | :arrow_down: |
| [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4795/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.96% <0.00%> (-6.70%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4795/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (-0.42%)` | :arrow_down: |
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4795/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `86.97% <0.00%> (-0.19%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4795/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.15% <0.00%> (+0.12%)` | :arrow_up: |
| [src/transformers/benchmark/benchmark\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4795/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `43.57% <0.00%> (+0.35%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4795/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.04% <0.00%> (+0.78%)` | :arrow_up: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4795/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+1.17%)` | :arrow_up: |
| ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/4795/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4795?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4795?src=pr&el=footer). Last update [acaa2e6...b41e2cf](https://codecov.io/gh/huggingface/transformers/pull/4795?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 4,794 | closed | enable multiprocessing in glue dataset | enable multiprocessing when converting examples to features utilizing the multiple cpu cores. N time faster... | 06-05-2020 17:11:49 | 06-05-2020 17:11:49 | |
transformers | 4,793 | closed | 🐛 run_ner.py runtime error linked to TPU training | # 🐛 Bug
## Information
Model I am using **Longformer For Token Classification**
Language I am using the model on **German**:
The problem arises when using:
* [x] the official example scripts:
**The problem arises when trying to run run_ner.py on google colab in TPU fp16 mode.**
The tasks I am working on is:
* [x] an official GLUE/SQUaD task:
**CoNLL NER**
## To reproduce
I have a colab up if you want to see exactly what I did. You just need to upload data files in a new folder in /Content/<YOUR_FOLDER> to use in training and make sure to modify the run_ner paths correspondingly. [Google Colab](https://github.com/vinmorel/transformers/blob/master/run_ner_TPU.ipynb)
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
Runtime error after running the following :
```
!python3 "/content/transformers/examples/token-classification/run_ner.py" --data_dir "/content/CoNLL/" \
--labels "/content/CoNLL/labels.txt" \
--model_name_or_path "allenai/longformer-base-4096" \
--output_dir "xlnet-base-cased" \
--max_seq_length 200 \
--num_train_epochs 2 \
--per_device_train_batch_size 1 \
--save_steps 750 \
--seed 1 \
--do_train \
--do_eval \
--do_predict \
--fp16
```
```
Traceback (most recent call last):
File "/content/transformers/examples/token-classification/run_ner.py", line 303, in <module>
main()
File "/content/transformers/examples/token-classification/run_ner.py", line 228, in main
model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 390, in train
model, optimizer = amp.initialize(model, optimizer, opt_level=self.args.fp16_opt_level)
File "/usr/local/lib/python3.6/dist-packages/apex/amp/frontend.py", line 358, in initialize
return _initialize(models, optimizers, _amp_state.opt_properties, num_losses, cast_model_outputs)
File "/usr/local/lib/python3.6/dist-packages/apex/amp/_initialize.py", line 171, in _initialize
check_params_fp32(models)
File "/usr/local/lib/python3.6/dist-packages/apex/amp/_initialize.py", line 93, in check_params_fp32
name, param.type()))
File "/usr/local/lib/python3.6/dist-packages/apex/amp/_amp_state.py", line 32, in warn_or_err
raise RuntimeError(msg)
RuntimeError: Found param longformer.embeddings.word_embeddings.weight with type torch.FloatTensor, expected torch.cuda.FloatTensor.
When using amp.initialize, you need to provide a model with parameters
located on a CUDA device before passing it no matter what optimization level
you chose. Use model.to('cuda') to use the default device.
```
## Expected behavior
Would expect the code to run and start training on TPU without the runtime error.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.11.0
- Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0.dev20200528 (False)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 06-05-2020 17:02:57 | 06-05-2020 17:02:57 | Hi! did you solve your issue? |
transformers | 4,792 | closed | Fix argument label | After #4722 the labels are called just `labels` now, not `masked_lm_labels`. The fact it wasn't caught by the tests probably means we have some test missing... | 06-05-2020 17:02:50 | 06-05-2020 17:02:50 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4792?src=pr&el=h1) Report
> Merging [#4792](https://codecov.io/gh/huggingface/transformers/pull/4792?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/acaa2e6267ebfda9814795fa00b6ad86c35ea5d6&el=desc) will **increase** coverage by `1.47%`.
> The diff coverage is `100.00%`.
```diff
@@ Coverage Diff @@
## master #4792 +/- ##
==========================================
+ Coverage 74.59% 76.06% +1.47%
==========================================
Files 128 128
Lines 21500 21500
==========================================
+ Hits 16037 16354 +317
+ Misses 5463 5146 -317
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4792?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/4792/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `89.23% <100.00%> (ø)` | |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4792/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (-0.42%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4792/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `89.91% <0.00%> (-0.13%)` | :arrow_down: |
| [src/transformers/benchmark/benchmark\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4792/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `43.57% <0.00%> (+0.35%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4792/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.36% <0.00%> (+1.10%)` | :arrow_up: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4792/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+1.17%)` | :arrow_up: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4792/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.69% <0.00%> (+2.29%)` | :arrow_up: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4792/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.64% <0.00%> (+2.71%)` | :arrow_up: |
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/4792/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `40.67% <0.00%> (+3.87%)` | :arrow_up: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4792/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.69% <0.00%> (+10.04%)` | :arrow_up: |
| ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/4792/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4792?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4792?src=pr&el=footer). Last update [acaa2e6...7725750](https://codecov.io/gh/huggingface/transformers/pull/4792?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>True, it seems like we don't test the `DataCollatorForLanguageModeling` in combination with the trainer. I think we should add a test that runs one training step on trainer with each available DataCollator. What do you think @julien-c ?<|||||>Indeed the `Trainer` will need more thorough testing (which will probably be done in the next few weeks).
This wouldn't have been caught by tests though, since `masked_lm_labels` is deprecated but does not raise an error, right?<|||||>Ah yes, should have issued a warning but not an error, right. |
transformers | 4,791 | closed | parse arguments from dict | This PR adds `parse_dict` method to `HfArgumentParser` to allow parsing arguments from `dict`.
I find this necessary for notebook workflows where I'm not using `Trainer` from command line. Otherwise I need to write arguments to json file and use that path with `parse_json_file` or pass a list of strings to `parse_args_into_dataclasses`.
@julien-c @patrickvonplaten | 06-05-2020 16:35:08 | 06-05-2020 16:35:08 | Hey @patil-suraj :-),
Hmm, not sure we would definitely need that...let's say you have a dict of arguments that you want to parse it into a dataclass like `TrainingArguments`, you could just do
```python
training_args = TrainingArguments(**your_dict)
```
like it is done in the Reformer Colab for example: https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/PyTorch_Reformer.ipynb
Can you give me an example, where going over the `HfArgumentParser` instead of directly instantiating the dataclass makes more sense? <|||||>@patrickvonplaten
wouldn't this
```
train_args, model_args, data_args = parser.parse_dict(your_dict)
```
be better than this
```
training_args = TrainingArguments(**your_dict)
model_args = ModelArguments(**your_dict)
data_args = DataArguments(**your_dict)
```
Anyway, its just a small utility, so if not needed by lot of people we can close this.<|||||>I'm ok with merging this!
Always nice to add a unit test though:)<|||||>closing this, accidentally merged upstream into this. Will open a new one |
transformers | 4,790 | closed | Clean-up code | Looks like #4747 introduced some bad formatting, fixing so CI is happy again. | 06-05-2020 16:31:31 | 06-05-2020 16:31:31 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4790?src=pr&el=h1) Report
> Merging [#4790](https://codecov.io/gh/huggingface/transformers/pull/4790?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fa661ce749b0d14ae1999d1b097866248624a842&el=desc) will **increase** coverage by `0.01%`.
> The diff coverage is `100.00%`.
```diff
@@ Coverage Diff @@
## master #4790 +/- ##
==========================================
+ Coverage 76.28% 76.29% +0.01%
==========================================
Files 128 128
Lines 21500 21500
==========================================
+ Hits 16401 16404 +3
+ Misses 5099 5096 -3
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4790?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4790/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3hsbmV0LnB5) | `94.00% <100.00%> (ø)` | |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4790/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.51% <0.00%> (+0.31%)` | :arrow_up: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4790/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `74.27% <0.00%> (+0.41%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4790?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4790?src=pr&el=footer). Last update [fa661ce...a3f8a97](https://codecov.io/gh/huggingface/transformers/pull/4790?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>👍 |
transformers | 4,789 | closed | Add model summary | This PR adds a high-level summary of all the models in the documentation. | 06-05-2020 14:35:55 | 06-05-2020 14:35:55 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4789?src=pr&el=h1) Report
> Merging [#4789](https://codecov.io/gh/huggingface/transformers/pull/4789?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b9109f2de1c4f52967976dc840074a9d62713498&el=desc) will **increase** coverage by `0.40%`.
> The diff coverage is `n/a`.
```diff
@@ Coverage Diff @@
## master #4789 +/- ##
==========================================
+ Coverage 76.06% 76.46% +0.40%
==========================================
Files 128 128
Lines 21498 21498
==========================================
+ Hits 16353 16439 +86
+ Misses 5145 5059 -86
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4789?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4789/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (-0.42%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4789/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.20% <0.00%> (-0.16%)` | :arrow_down: |
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4789/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `56.68% <0.00%> (+28.02%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4789?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4789?src=pr&el=footer). Last update [b9109f2...0e3789d](https://codecov.io/gh/huggingface/transformers/pull/4789?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Looks great :-) <|||||>Eh, forgot to finish my sentence linking to the pretrained models doc page at the beginning. Not sure if I can link to part of the tables however, can reStructuredText support that?<|||||>I doubt it. One link is OK I think |
transformers | 4,788 | closed | Onnx conversion for bert models with classification layers | Hi,
I am trying to convert a NER model trained with BertForTokenClassification to ONNX format. I am able to convert it using the convert_graph_to_onnx.py script with the following params:
convert(framework="pt", model="bert_small_ner", output="bert_small_onnx/bert_small_ner.onnx", opset=11)
There are no errors thrown either. However, when I run the inference, I see only the BERT layer outputs but not the output of the linear classification layer as it is in BertForTokenClassification.
Am I missing something? Or is this a known issue with some workaround? Kindly let me know.
Thanks in advance | 06-05-2020 14:24:16 | 06-05-2020 14:24:16 | Might be of interest to @mfuntowicz <|||||>I think the issue could be due to pipeline which extracts the model
```python
# Allocate tokenizer and model
return pipeline("feature-extraction", model=model, tokenizer=tokenizer, framework=framework)
```
Seems like the default is 'feature-extraction'.
I could get this to work by changing it to 'ner'. I also passed in a config option:
```python
return pipeline("ner", model=model, config=config, tokenizer=tokenizer, framework=framework)
```
Not sure if this is the correct solution but it seems to work for me<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
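As an editor's illustration (not part of the original thread): one way to sanity-check which head was exported is to load the ONNX file with onnxruntime and inspect the output shape. The paths reuse the ones from this issue; the tokenizer call assumes a recent transformers version.
```python
import onnxruntime as ort
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert_small_ner")  # local model dir from the issue
session = ort.InferenceSession("bert_small_onnx/bert_small_ner.onnx")

encoded = tokenizer("Angela Merkel lives in Berlin", return_tensors="np")
onnx_input_names = [inp.name for inp in session.get_inputs()]
inputs = {name: encoded[name] for name in onnx_input_names if name in encoded}

logits = session.run(None, inputs)[0]
# Expected (1, seq_len, num_labels) if the token-classification head was exported,
# (1, seq_len, hidden_size) if only the base BERT encoder was exported.
print(logits.shape)
```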
|
transformers | 4,787 | closed | 🚀 [Feature Request] Add self-contained browsable examples/notebooks in the docs | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
## Motivation and feature request
Recently I'm seeing lots of beginner-level issues asking help on how to fine-tune a particular model for a particular task, how to use already trained model from the hub or how to do inference for certain task, what some parameters mean etc. See #4744, #4406, #4677, #4639
While the examples in the /examples directory are really awesome, it seems that they are bit hard to understand from a beginner's perspective. Also even though there is notebooks section, some people still seem to not find it. And as the library is getting very popular lots of students/beginners are starting their NLP journey with Transformers.
So IMO it would be really awesome if we have self-contained end-to-end browsable notebook examples with clear task and model descriptions in the docs itself (like the ones in pytorch and keras docs)
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
## Your contribution
I have few notebooks in community notebooks section and I would be happy to contribute more examples with clear descriptions for individual task with both fine-tuning and inference details.
@patrickvonplaten @julien-c | 06-05-2020 13:36:37 | 06-05-2020 13:36:37 | Good point! I think we could definitely give more links to the community notebooks and the docs in general to people so that users check the resources first before opening an issue...what are your thoughts on this @sgugger @julien-c @thomwolf ?<|||||>I agree we could add links to example notebooks in the docs, either community or from the notebooks folder in the repo. The same way there is a tips section, there could be a Getting started section with notebooks that illustrate one or several tasks the model is good at. <|||||>Let me add something from a transformers-beginner's point of view: I read the example scripts and notebooks, and what makes them harder to understand, is that most of them use some kind of downloaded pre-made datasets like GLUE, with custom dataloaders and everything. Unless I'm missing something, it makes questions like "How to prepare my data for sequence classification fine-tuning with transformers.Trainer ?" hard to answer, and that might prevent many people from training their own models <|||||>@klasocki,
Thanks, that's very valuable feedback! We are actually planning to replace all those custom data preprocessing steps with the `nlp` library which should be easier to understand. Here is an example how it can be used:
https://github.com/patrickvonplaten/notebooks/blob/master/How_to_evaluate_Longformer_on_TriviaQA_using_NLP.ipynb
Regarding the specific task of sequence classification, it would definitely be nice to have that in the examples as well! Also cc @julien-c and @thomwolf here <|||||>@patrickvonplaten
Thank you! But I think you forgot to attach the example, at least I can't see it 😄 <|||||>> @patrickvonplaten
> Thank you! But I think you forgot to attach the example, at least I can't see it
True, sorry :D Editing the comment above...<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Side note since this pops up in my notifications: all the tutorials in the doc now have associated notebooks. |
transformers | 4,786 | closed | Usage of Ġ in BPE tokenizer | Hello,
I want to add new words to my BPE tokenizer. I know the symbol Ġ means the end of a new token and the majority of tokens in vocabs of pre-trained tokenizers start with Ġ. Assume I want to add the word **Salah** to my tokenizer. I tried to add both **Salah** token and **ĠSalah**:
`tokenizer.add_tokens(['Salah', 'ĠSalah'])` # they get 50265 and 50266 values respectively.
However, when I tokenize a sentence where **Salah** appears, the tokenizer will never return me the second number (neither when using `.tokenize` nor `.encode`), for instance:
`tokenizer.tokenize('I love Salah and salad')` returns `['I', 'Ġlove', 'Salah', 'Ġand', 'Ġsalad']`.
The question is: should I use the symbol `Ġ` when adding new tokens or the tokenizer does it itself? Or, probably, it must be specified manually?
Thanks in advance! | 06-05-2020 13:34:00 | 06-05-2020 13:34:00 | @maschasap
AFAIK, you won't need to add `Ġ ` when adding a new token.
And you can use the `convert_tokens_to_string` method to convert these tokens to their respective strings.
tagging @mfuntowicz for more info<|||||>@patil-suraj thanks for your response! But still why doesn't the tokenizer add `Ġ` symbol when returning the tokenized sentence in the example? Does it mean it still needs to be tuned or is it OK?<|||||>@patil-suraj btw using the method `convert_tokens_to_string` the whitespace between the words **love** and **Salah** really disappears :(
`tokenizer.convert_tokens_to_string(tokenizer.tokenize('I love Salah and salad'))` outputs `'I loveSalah and salad`<|||||>@LysandreJik @mfuntowicz may I ask you for help?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Can someone help me understand the symbol Ġ ? I am running into the same thing , want to make sure I am not breaking things down the line<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>There is also a difference here between the fast and slow tokenisers:
```python
import transformers
tokeniser1 = transformers.RobertaTokenizer.from_pretrained('roberta-base')
tokeniser2 = transformers.RobertaTokenizerFast.from_pretrained('roberta-base')
tokeniser1.add_tokens(['Salah'])
tokeniser2.add_tokens(['Salah'])
print(tokeniser1.tokenize('I love Salah and salad'))
print(tokeniser2.tokenize('I love Salah and salad'))
```
Outputs:
```
['I', 'Ġlove', 'Salah', 'and', 'Ġsalad']
['I', 'Ġlove', 'Ġ', 'Salah', 'Ġand', 'Ġsalad']
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
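Editor's note with a small illustration (assuming the roberta-base vocabulary): `Ġ` marks a token that begins a new word, i.e. one that was preceded by a space; it does not mark the end of a token. That is also why `convert_tokens_to_string` restores the space.
```python
from transformers import RobertaTokenizer

tok = RobertaTokenizer.from_pretrained("roberta-base")

# Tokens after position 0 that follow a space carry the "Ġ" prefix
print(tok.tokenize("I love salad"))             # expected: ['I', 'Ġlove', 'Ġsalad']

# Mapping a "Ġ"-prefixed token back to text restores the leading space
print(tok.convert_tokens_to_string(["Ġlove"]))  # expected: ' love'
```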
<|||||>same here<|||||>(in case someone comes across this issue, have a look at [this post in our forum](https://discuss.huggingface.co/t/bpe-tokenizers-and-spaces-before-words/475/2?u=joaogante)) |
transformers | 4,785 | closed | Question Answering Pipeline with big texts. | # ❓ Questions & Help
I am searching for some ideas of how to use QA pipeline with big data texts and get a good time response. (I am already using GPU) | 06-05-2020 13:06:25 | 06-05-2020 13:06:25 | Longformer would be a good choice, but it is currently not implemented in the pipelines. This issue might help :-) : https://github.com/huggingface/transformers/issues/4762<|||||>and #4615<|||||>This notebook might help :-)
https://colab.research.google.com/drive/1m7eTGlPmLRgoPkkA7rkhQdZ9ydpmsdLE?usp=sharing |
transformers | 4,784 | closed | Can I train question-answering on TPU using Huggingface | # ❓ Questions & Help
I am trying to run run_squad from question answering on TPU but I get the following error.
## Details
export SQUAD_DIR=/path/to/SQUAD
python examples/xla_spawn.py --num_cores 8 \
run_squad.py \
--model_type bert \
--model_name_or_path bert-base-uncased \
--do_train \
--do_eval \
--do_lower_case \
--train_file $SQUAD_DIR/train-v1.1.json \
--predict_file $SQUAD_DIR/dev-v1.1.json \
--per_gpu_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2.0 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/debug_squad/
**run_squad.py: error: unrecognized arguments: --tpu_num_cores 1**
| 06-05-2020 09:42:05 | 06-05-2020 09:42:05 | as you can see in https://github.com/huggingface/transformers/tree/master/examples#the-big-table-of-tasks `question-answering` is not implemented using Trainer, yet – so doesn't have TPU support.
This is on our todo-list but might take a few weeks. Feel free to open a PR though – or use TFTrainer if TF is an option<|||||>I think I can live with TFTrainer ... Thanks for the reply<|||||>how do i use TFTrainer with TPU<|||||>Hi! The [running on TPU](https://github.com/huggingface/transformers/tree/master/examples#running-on-tpus) section of the examples covers that.
Basically, if on a correctly setup TPU, the TF trainer will automatically launch on TPU. |
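For readers wondering what "automatically launch on TPU" means in practice, the sketch below shows the standard TF 2.x TPU bootstrapping that a TPU-aware training loop relies on. This is a general illustration, not the TF trainer's exact code, and on some setups you must pass the TPU address explicitly.
```python
import tensorflow as tf

# Resolve, connect to and initialise the TPU, then build a distribution strategy.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver()  # may need tpu="grpc://..." on Colab
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)

with strategy.scope():
    # Model construction/compilation goes inside the strategy scope
    print("Number of replicas:", strategy.num_replicas_in_sync)
```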
transformers | 4,783 | closed | question about tokenizer | hi, is there any difference between initializing BertTokenizer and load it using from_pretrained,
such as:
1、tokenizer = BertTokenizer(bert-base-uncased-vocab)
2、tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
I want to pretrain for some new language by myself, so I need to create the vocabulary. How could I integrate it with the transformers tokenizer?
thanks a lot! | 06-05-2020 08:36:16 | 06-05-2020 08:36:16 | This notebook might help you, it shows how you can train a LM from scratch on a new language.
https://github.com/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb<|||||>@patil-suraj thanks for your reply. i have read this notebook. but i want to re-use the functions in the transfomers such as functions like encode_plus, batch_encode_plus
So my question is:
after training a tokenizer by myself, how can I integrate it with the transformers tokenizer so I can use those functions?
thanks again<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
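Editor's sketch of one way to do this (file paths and hyperparameters are placeholders, and the save method name varies slightly across `tokenizers` versions): train a WordPiece vocabulary with the `tokenizers` library, then point `BertTokenizer` at the resulting `vocab.txt` so that `encode_plus` / `batch_encode_plus` work as usual.
```python
from tokenizers import BertWordPieceTokenizer
from transformers import BertTokenizer

# 1) Train a WordPiece vocabulary on your own corpus
wp_tokenizer = BertWordPieceTokenizer()
wp_tokenizer.train(files=["my_corpus.txt"], vocab_size=32000, min_frequency=2)
wp_tokenizer.save_model("my_tokenizer")  # writes my_tokenizer/vocab.txt

# 2) Load it as a regular transformers tokenizer
tokenizer = BertTokenizer.from_pretrained("my_tokenizer")
print(tokenizer.encode_plus("a sentence in the new language"))
```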
|
transformers | 4,782 | closed | ❓ How to use Gradient Accumulator in TF_Trainer ? | # ❓ Questions & Help
According to the documentation of Gradient Accumulator :
> the accumulator should be called in a
replica context. Gradients will be accumulated locally on each replica and
without synchronization. Users should then call ``.gradients``, scale the
gradients if required, and pass the result to ``apply_gradients``.
The default optimizer for TF_Trainer does not provide gradient accumulation.
**Is there any example available showing how to use Gradient Accumulator with TF_Trainer ?** | 06-05-2020 08:29:05 | 06-05-2020 08:29:05 | Hello!
The usage of the gradient accumulator is fully integrated since the beginning in the TF Trainer, the default value of accumulation is 1, if you want to change it you have to fill the `--gradient_accumulation_steps` parameter.<|||||>That's awesome ! Thank you very much for your answer |
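For completeness, a minimal sketch of setting the same value programmatically through `TFTrainingArguments` instead of the CLI flag (the field name mirrors `--gradient_accumulation_steps`; the other arguments are placeholders):
```python
from transformers import TFTrainingArguments

training_args = TFTrainingArguments(
    output_dir="./output",            # placeholder path
    per_device_train_batch_size=8,
    gradient_accumulation_steps=4,    # effective batch size = 8 * 4 per replica
)
print(training_args.gradient_accumulation_steps)
```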
transformers | 4,781 | closed | Regarding generate method used in BART | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiasts can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted.
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. -->
I am new to NLP and text generation so please help me understand some basic things.
I fine-tuned BART on the CNN/DM dataset using the provided scripts in the examples section and it works fine.
I have some understanding of how model.generate() method works. But need to clarify some basic questions.
Is the generate() method used at the time of training/validation?
If yes, then how is the loss computed if the beam size is greater than 1?
Or do we even use the parameters like in generate() method during training?
Please help. Thank you.
| 06-05-2020 07:29:21 | 06-05-2020 07:29:21 | Hey @kunalpagarey,
The `.generate()` method cannot be used for training. It can be used for validation and testing.
To create a good model for summarization the following components are important:
**Train**:
1. What model do you want to use? => Bart for example
2. What loss function do you want to use? => Usually people do pretraining and then finetuning on summarization using standard maximum likelihood. For some detail on how to fine-tune Bart with `transformers` check out: https://github.com/ohmeow/ohmeow_website/blob/master/_notebooks/2020-05-23-text-generation-with-blurr.ipynb
** Decoding method**
After having fine-tuned a model on summarization, the decoding method to apply is another whole question and is often done independently of training. Now, you can decide how to use the `generate()` method. You could for example try out a bunch of different hyperparameters for `.generate()` on your validation set and then decide on one setting for want to use for your test set.
For more details on how to choose hyperparameters for `.generate()` check out:
https://huggingface.co/blog/how-to-generate
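Editor's sketch to make the split concrete (using the standard transformers API of a recent version): the training/validation loss comes from a forward pass with `labels` (teacher forcing), while beam search is only a decoding choice inside `generate()` and has no loss attached.
```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

batch = tokenizer(["a long article ..."], return_tensors="pt", truncation=True, max_length=1024)
labels = tokenizer(["a short summary"], return_tensors="pt").input_ids

# Training / validation: maximum-likelihood loss, no beam search involved
outputs = model(input_ids=batch["input_ids"], attention_mask=batch["attention_mask"], labels=labels)
loss = outputs[0]

# Test-time decoding: beam size is a generation hyperparameter chosen after training
summary_ids = model.generate(batch["input_ids"], num_beams=4, max_length=60, early_stopping=True)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```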
|
transformers | 4,780 | closed | Reformer hidden_size of output is doubled. | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiasts can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
The default hidden_size is 256 but the output dim is 512. Similarly, when I change the config and set hidden_size to 512, the output dim is 1024.
```python
enc_config = {
"attention_head_size": 64,
"attn_layers": ["local", "lsh", "local", "lsh", "local", "lsh"],
"axial_pos_embds": True,
"sinusoidal_pos_embds": False,
"axial_pos_embds_dim": [256, 256],
"axial_pos_shape": [64, 64],
"lsh_attn_chunk_length": 64,
"local_attn_chunk_length": 64,
"feed_forward_size": 256,
"hidden_act": "relu",
"hidden_size": 512,
"is_decoder": False,
"max_position_embeddings": 4096,
"num_attention_heads": 12,
"num_buckets": [64, 64],
"num_hashes": 4,
"lsh_attention_probs_dropout_prob": 0.0,
"lsh_num_chunks_before": 1,
"lsh_num_chunks_after": 0,
"local_num_chunks_before": 1,
"local_num_chunks_after": 0,
"local_attention_probs_dropout_prob": 0.025,
"hidden_dropout_prob": 0.025,
"pad_token_id": tokenizer.pad_token_id,
"eos_token_id": tokenizer.eos_token_id,
"vocab_size": tokenizer.vocab_size,
}
reformer = ReformerModel(ReformerConfig(**enc_config))
input_ids = tokenizer.encode(doc, max_length=4096, pad_to_max_length=True, return_tensors='pt')
out = reformer.forward(input_ids)
out[0].shape
```
```
torch.Size([1, 4096, 1024])
```
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
| 06-05-2020 03:10:38 | 06-05-2020 03:10:38 | Hey @h324yang,
Good observation! The reason for this is that Reformer uses Reversible Residual Layers and thus always has two input streams (two hidden states inputs) instead of one. Both of these streams have to be concatenated after running through the model which leads to double the size of `hidden_states`. I will soon ~1 week publish a notebook explaining this in more detail :-) |
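Editor's note with a small sketch: because the two reversible streams are concatenated, the last output dimension is expected to be `2 * config.hidden_size`. If a `hidden_size`-wide representation is needed downstream, one possible workaround (an assumption, not an official API) is to project it back down.
```python
import torch
from transformers import ReformerConfig, ReformerModel

config = ReformerConfig()                                     # default hidden_size = 256
model = ReformerModel(config).eval()

input_ids = torch.randint(0, config.vocab_size, (1, 4096))    # matches the default axial_pos_shape product
with torch.no_grad():
    hidden = model(input_ids)[0]
print(hidden.shape[-1], 2 * config.hidden_size)               # both 512: two reversible streams concatenated

# Possible workaround if a hidden_size-wide vector is required downstream
proj = torch.nn.Linear(2 * config.hidden_size, config.hidden_size)
print(proj(hidden).shape[-1])                                  # 256
```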
transformers | 4,779 | closed | 🐛 [BART] Pipeline OOM | # 🐛 Bug
I try to run the BART model myself versus running the model through `pipeline`.
Running the BART model myself is fine, but I have OOM on my GPU if I try to run the same model through pipeline.
Please see the following code : https://gist.github.com/Colanim/4fae6ab52c05716062a0f20c4a6b9737
_(It assumes you have a file `cnndm/test.source` with an article on each line)_
Run with :
`python pipeline_oom.py --model HuggingFace --batch-size 32`
(Should **not** produce OOM on 11G-GPU)
and `python pipeline_oom.py --model Pipeline --batch-size 32`
(Should produce OOM on 11G-GPU)
---
**Why does the pipeline use more memory?**
@sshleifer | 06-05-2020 01:17:51 | 06-05-2020 01:17:51 | @sshleifer Can you reproduce on your side or is it just me ?<|||||>Yes I can replicate, sorry for the slow response. I am still trying to figure out why this is happening.<|||||>OK I figured out the problem
Long articles are not getting truncated anymore by pipeline.
Will have a look.
If you look at the second val.source example it's 1583 tokens, and pipeline does not truncate it, whereas `Huggingface` does.
Related: #4236 <|||||>May be related #5398<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
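Editor's sketch of the workaround implied above, i.e. truncating explicitly before calling `generate()` instead of relying on the pipeline (argument names assume a recent transformers version; the file path comes from the gist in this issue):
```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn").to("cuda")

with open("cnndm/test.source") as f:
    articles = [line.strip() for line in f][:32]

# Truncate to the model's maximum so over-long articles cannot blow up GPU memory
batch = tokenizer(articles, truncation=True, max_length=1024, padding=True, return_tensors="pt").to("cuda")
summary_ids = model.generate(batch["input_ids"], attention_mask=batch["attention_mask"], num_beams=4, max_length=142)
```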
|
transformers | 4,778 | closed | Updated path "cd examples/text-generation/pplm" | https://github.com/huggingface/transformers/issues/4776 | 06-05-2020 00:04:56 | 06-05-2020 00:04:56 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4778?src=pr&el=h1) Report
> Merging [#4778](https://codecov.io/gh/huggingface/transformers/pull/4778?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f9414f7553d3f1872b372990ef03205c0d1141df&el=desc) will **increase** coverage by `1.01%`.
> The diff coverage is `n/a`.
```diff
@@ Coverage Diff @@
## master #4778 +/- ##
==========================================
+ Coverage 76.06% 77.08% +1.01%
==========================================
Files 128 128
Lines 21498 21498
==========================================
+ Hits 16352 16571 +219
+ Misses 5146 4927 -219
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4778?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4778/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.04% <0.00%> (-0.32%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4778/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.39% <0.00%> (+0.48%)` | :arrow_up: |
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4778/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `89.17% <0.00%> (+2.01%)` | :arrow_up: |
| [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/4778/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `78.40% <0.00%> (+4.80%)` | :arrow_up: |
| [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4778/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.39% <0.00%> (+14.55%)` | :arrow_up: |
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4778/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `56.68% <0.00%> (+28.02%)` | :arrow_up: |
| [src/transformers/configuration\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/4778/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VuY29kZXJfZGVjb2Rlci5weQ==) | `100.00% <0.00%> (+61.53%)` | :arrow_up: |
| [src/transformers/modeling\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/4778/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `92.20% <0.00%> (+64.93%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4778?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4778?src=pr&el=footer). Last update [f9414f7...dd5e08e](https://codecov.io/gh/huggingface/transformers/pull/4778?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 4,777 | closed | The purpose of files merges.txt, special_tokens_map.json, training_args.bin and add_tokens.json | Good evening!
After I have my RoBERTa model pre-trained, I get the list of the following files:
`merges.txt`, `special_tokens_map.json`, `training_args.bin`. I have also seen if you add extra tokens to the tokenizer, the file `add_tokens.json` appears. Could I ask to clarify the meaning of the first three files - how they are used and what they contain? And also how can I add extra tokens when pre-training RoBERTa or any BERT-type model? Million of thanks in advance!
Be safe,
Akim | 06-04-2020 23:03:15 | 06-04-2020 23:03:15 | Hi.
You will get an explanation about `merges.txt` in this [post](https://github.com/huggingface/transformers/issues/1083#issuecomment-524303077).<|||||>@piegu , thanks for you answer! I have already read this post, though still did not quite understand, does it contain all the possible tokens? If so, what is the purpose of it if we can simply take the keys from `vocab.json`? Thanks!<|||||>My understanding is that the file `merges.txt` is build during the training of the BBPE (Byte Level BPE) tokenizer on the corpus: it gets a new entry (line) at each iteration of the tokenizer to find the byte pairs most frequent.
For example, the first line can be `Ġ d`. Why? Because at the first iteration, the most frequent pair is a space followed by `d` (i.e. ` d`), and the character `Ġ` stands for the space.
What is the consequence in the vocabulary? The token `Ġd` is listed.
Hope I'm right. If not, please give me your explanation as I have not found any online.<|||||>@piegu thank you! So you mean this is the vocabulary sorted by the frequency on the training data, right?
And what about these lines (which are 3rd - 7th for RoBERTa-base, for instance):
```
h e
i n
r e
o n
```
I clearly see these are popular words if we stack them, but why are they split apart?<|||||>First of all, as for GPT2, the Hugging Face (HF) tokenizer of RoBERTa is a [Byte-level Byte-Pair-Encoding](https://arxiv.org/pdf/1909.03341.pdf) (BBPE) tokenizer, as stated in the [documentation](https://huggingface.co/transformers/_modules/transformers/tokenization_roberta.html).
Then, we can check on this page that the attribute `vocab_files_names` lists 2 files:
```
VOCAB_FILES_NAMES = {
"vocab_file": "vocab.json",
"merges_file": "merges.txt",
}
```
Let's open [merges.txt](https://s3.amazonaws.com/models.huggingface.co/bert/roberta-base-merges.txt) of RoBERTa-base, for instance. The file starts like this:
```
#version: 0.2
Ä t
Ä a
h e
i n
r e
o n
Ä t he
e r
Ä s
a t
Ä w
Ä o
...
```
_Note: In this [Roberta Tokenizer merge file](https://s3.amazonaws.com/models.huggingface.co/bert/roberta-base-merges.txt), the special character `Ä` is used for encoding space instead of `Ġ` that is used by GPT2 Tokenizer ([explanation 1](https://github.com/openai/gpt-2/issues/80) and [explanation 2](https://github.com/pytorch/fairseq/issues/1716)) but in the corresponding [RoBERTa vocab file](https://s3.amazonaws.com/models.huggingface.co/bert/roberta-base-vocab.json), the character `Ġ` is used. I do not know why._
The merge file shows which tokens will be merged at each iteration (that's why there is a space between the tokens in the merge file).
About your example: it means that at the third iteration, the token pair `he`, formed from the 2 tokens `h` and `e`, is the most frequent in the corpus (the token `he` with no space before the `h`).
If, at the end of the iterations, there is at least one pair `he` left (not merged with other tokens), it will appear in the [vocab file](https://s3.amazonaws.com/models.huggingface.co/bert/roberta-base-vocab.json) (this also depends on the `min_freq` rules and the number of tokens in the vocab). Here, the id of `he` in the vocab file is 700.
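A minimal sketch of how such a `vocab.json`/`merges.txt` pair gets produced (my own illustration, assuming a recent version of the Hugging Face `tokenizers` library and a local `corpus.txt`, not the actual RoBERTa training run):
```python
from tokenizers import ByteLevelBPETokenizer

# Train a byte-level BPE tokenizer: every merge iteration appends one line to merges.txt
tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=["corpus.txt"],
    vocab_size=1000,
    min_frequency=2,
    special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"],
)

# Writes vocab.json and merges.txt into the current directory
tokenizer.save_model(".")

# The learned merges are then applied, in order, at encoding time
print(tokenizer.encode("the dog").tokens)
```
Opening the resulting `merges.txt` then shows one merge rule per line, in the order they were learned, which is exactly what the lines quoted above are.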
Hope it helps but that would be great to get the point of view of someone from Hugging Face like @sshleifer or @sgugger.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>@piegu, appreciate your clear explanation! I am still confused about the definition of "iteration". |
transformers | 4,776 | closed | Correcting path to pplm examples | # 🐛 Bug
On https://github.com/huggingface/transformers/tree/master/examples/text-generation/pplm#setup
It says
```
git clone https://github.com/huggingface/transformers && cd transformers
pip install .
pip install nltk torchtext # additional requirements.
cd examples/pplm
```
and as you can guess from the url, the correct path is
```
git clone https://github.com/huggingface/transformers && cd transformers
pip install .
pip install nltk torchtext # additional requirements.
cd examples/text-generation/pplm
```
cd examples/**text-generation**/pplm
| 06-04-2020 22:52:16 | 06-04-2020 22:52:16 | Thanks for the report – care to open a PR?<|||||>Done
https://github.com/huggingface/transformers/pull/4778
I hope it is done well. |
transformers | 4,775 | closed | Create model card for tblard/allocine | Model card for: https://huggingface.co/tblard/tf-allocine
This is a French sentiment analysis model, trained from camembert-base and fine-tuned on Allociné.fr data.
Original repo: https://github.com/TheophileBlard/french-sentiment-analysis-with-bert | 06-04-2020 21:24:44 | 06-04-2020 21:24:44 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4775?src=pr&el=h1) Report
> Merging [#4775](https://codecov.io/gh/huggingface/transformers/pull/4775?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/17a88d31925a9308e4d7275420033f07a20cd680&el=desc) will **increase** coverage by `0.01%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4775?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4775 +/- ##
==========================================
+ Coverage 77.08% 77.10% +0.01%
==========================================
Files 128 128
Lines 21059 21059
==========================================
+ Hits 16234 16237 +3
+ Misses 4825 4822 -3
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4775?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4775/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.03% <0.00%> (+0.12%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4775/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.96% <0.00%> (+0.16%)` | :arrow_up: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4775/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `74.27% <0.00%> (+0.41%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4775?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4775?src=pr&el=footer). Last update [17a88d3...b1ef414](https://codecov.io/gh/huggingface/transformers/pull/4775?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks for sharing, and great dataset. (consider uploading it to [`nlp`](https://github.com/huggingface/nlp)?)<|||||>> Thanks for sharing, and great dataset. (consider uploading it to [`nlp`](https://github.com/huggingface/nlp)?)
Thanks for merging. Sure, will do asap ! |
transformers | 4,774 | closed | Add .vs to gitignore | My VS Code writes stuff there, so I added it to the .gitignore.
I also have other untracked files:
- in tests/fixtures/ I have some cached_lm_\*Tokenizer_\*.txt and .txt.lock after running the tests
- in docs/ I have the symlink to examples.md, as indicated [here](https://github.com/huggingface/transformers/tree/master/docs#building-the-documentation)
Should I add stuff in .gitignore to ignore those as well? | 06-04-2020 20:19:54 | 06-04-2020 20:19:54 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4774?src=pr&el=h1) Report
> Merging [#4774](https://codecov.io/gh/huggingface/transformers/pull/4774?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cd4e07a85e6161111016ca6d811d97e59368971a&el=desc) will **increase** coverage by `0.22%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4774?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4774 +/- ##
==========================================
+ Coverage 77.09% 77.32% +0.22%
==========================================
Files 128 128
Lines 21059 21059
==========================================
+ Hits 16235 16283 +48
+ Misses 4824 4776 -48
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4774?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4774/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.15% <0.00%> (+0.12%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4774/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.96% <0.00%> (+0.16%)` | :arrow_up: |
| [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4774/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.39% <0.00%> (+14.55%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4774?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4774?src=pr&el=footer). Last update [cd4e07a...e51cd58](https://codecov.io/gh/huggingface/transformers/pull/4774?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 4,773 | closed | Don't access pad_token_id if there is no pad_token | When using the `encode` method of a fast tokenizer, we end up here and accessing `pad_token_id` may log an error even when no padding is done. This PR corrects that.
This fixes #4764 .
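Not the actual diff, just a sketch of the shape of the guard the description implies: only look up `pad_token_id` when padding is actually requested.
```python
# Hypothetical helper illustrating the idea (names are made up, not the real patch)
def maybe_pad_id(tokenizer, pad_to_max_length: bool):
    if not pad_to_max_length:
        # No padding requested, so never touch pad_token_id and never trigger the
        # "Using pad_token, but it is not set yet." warning.
        return None
    return tokenizer.pad_token_id
```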
| 06-04-2020 18:15:50 | 06-04-2020 18:15:50 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4773?src=pr&el=h1) Report
> Merging [#4773](https://codecov.io/gh/huggingface/transformers/pull/4773?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cd4e07a85e6161111016ca6d811d97e59368971a&el=desc) will **increase** coverage by `0.34%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4773?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4773 +/- ##
==========================================
+ Coverage 77.09% 77.43% +0.34%
==========================================
Files 128 128
Lines 21059 21059
==========================================
+ Hits 16235 16308 +73
+ Misses 4824 4751 -73
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4773?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4773/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.91% <ø> (+0.11%)` | :arrow_up: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4773/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.15% <0.00%> (+0.12%)` | :arrow_up: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4773/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `74.27% <0.00%> (+0.41%)` | :arrow_up: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4773/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.03% <0.00%> (+1.35%)` | :arrow_up: |
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4773/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `35.03% <0.00%> (+6.36%)` | :arrow_up: |
| [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4773/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.39% <0.00%> (+14.55%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4773?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4773?src=pr&el=footer). Last update [cd4e07a...734e4af](https://codecov.io/gh/huggingface/transformers/pull/4773?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>LGTM, also pinging @mfuntowicz for interface of Rust tokenizer and transformers tokenizer |
transformers | 4,772 | closed | Fix the __getattr__ method in BatchEncoding | Fix the issue where the `__getattr__` method in `BatchEncoding` was raising a `KeyError` instead of an `AttributeError` when the attribute was accessed with `getattr()`.
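For context, a minimal illustration of why the exception type matters (my own sketch, not the library code): `getattr(obj, name, default)` only falls back to its default when an `AttributeError` is raised.
```python
class Demo(dict):
    def __getattr__(self, item):
        try:
            return self[item]
        except KeyError:
            # getattr(obj, name, default) only catches AttributeError
            raise AttributeError(item)

print(getattr(Demo(a=1), "b", False))  # prints False instead of raising
```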
Example:
```
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
features = tokenizer.encode_plus("Hello here")
getattr(features, "attr", False)
```
Previous output:
```
/home/jplu/transformers/src/transformers/tokenization_utils.py:204 __getattr__
return self.data[item]
KeyError: 'attr'
```
New output:
```
False
``` | 06-04-2020 17:58:29 | 06-04-2020 17:58:29 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4772?src=pr&el=h1) Report
> Merging [#4772](https://codecov.io/gh/huggingface/transformers/pull/4772?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5856999a9f2926923f037ecd8d27b8058bcf9dae&el=desc) will **decrease** coverage by `0.02%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4772?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4772 +/- ##
==========================================
- Coverage 77.98% 77.96% -0.03%
==========================================
Files 123 123
Lines 20436 20437 +1
==========================================
- Hits 15938 15933 -5
- Misses 4498 4504 +6
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4772?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4772/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.47% <100.00%> (+0.01%)` | :arrow_up: |
| [src/transformers/hf\_api.py](https://codecov.io/gh/huggingface/transformers/pull/4772/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9oZl9hcGkucHk=) | `93.06% <0.00%> (-4.96%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4772/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.50% <0.00%> (-0.17%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4772?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4772?src=pr&el=footer). Last update [5856999...fa0abc6](https://codecov.io/gh/huggingface/transformers/pull/4772?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 4,771 | closed | Remove unnecessary model_type arg in example | 06-04-2020 16:31:07 | 06-04-2020 16:31:07 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4771?src=pr&el=h1) Report
> Merging [#4771](https://codecov.io/gh/huggingface/transformers/pull/4771?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e645b9ab9407e1c1b2c168317dc79fe13fc6e0b4&el=desc) will **decrease** coverage by `0.81%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4771?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4771 +/- ##
==========================================
- Coverage 77.31% 76.49% -0.82%
==========================================
Files 128 128
Lines 21059 21059
==========================================
- Hits 16281 16110 -171
- Misses 4778 4949 +171
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4771?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4771/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `31.32% <0.00%> (-55.07%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4771/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `89.91% <0.00%> (-0.25%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4771/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `74.27% <0.00%> (+0.41%)` | :arrow_up: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4771/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.03% <0.00%> (+1.35%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4771?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4771?src=pr&el=footer). Last update [e645b9a...7ad3a66](https://codecov.io/gh/huggingface/transformers/pull/4771?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 4,770 | closed | Add note about doc generation | Just make it explicit that doc generation is only for local inspection. | 06-04-2020 16:19:42 | 06-04-2020 16:19:42 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4770?src=pr&el=h1) Report
> Merging [#4770](https://codecov.io/gh/huggingface/transformers/pull/4770?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e645b9ab9407e1c1b2c168317dc79fe13fc6e0b4&el=desc) will **decrease** coverage by `0.04%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4770?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4770 +/- ##
==========================================
- Coverage 77.31% 77.26% -0.05%
==========================================
Files 128 128
Lines 21059 21059
==========================================
- Hits 16281 16271 -10
- Misses 4778 4788 +10
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4770?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/4770/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `36.80% <0.00%> (-3.88%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4770/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.79% <0.00%> (+0.16%)` | :arrow_up: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4770/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `74.27% <0.00%> (+0.41%)` | :arrow_up: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4770/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.03% <0.00%> (+1.35%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4770?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4770?src=pr&el=footer). Last update [e645b9a...39dc245](https://codecov.io/gh/huggingface/transformers/pull/4770?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 4,769 | closed | Bert (sentence classification) output is non-deterministic for PyTorch (not for TF) | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): German
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Load model:
```py
import torch
from transformers import BertConfig, BertForSequenceClassification

config = BertConfig.from_json_file(config_filename)
model = BertForSequenceClassification(config)
state_dict = torch.load(model_filename)
model.load_state_dict(state_dict)
```
2. Do inference twice on the same input + compare results.
3. Alternatively, save the first output, load the model from scratch, and run the same inference. Even in this case, the first output will not match the new one (a combined sketch of this check is shown below).
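Putting steps 1-3 together, a self-contained sketch of the check (file paths are placeholders):
```py
import torch
from transformers import BertConfig, BertForSequenceClassification

def load_model():
    config = BertConfig.from_json_file(config_filename)
    model = BertForSequenceClassification(config)
    model.load_state_dict(torch.load(model_filename))
    return model

input_ids = torch.tensor([[101, 2023, 2003, 1037, 3231, 102]])  # any fixed input
first = load_model()(input_ids)[0]
second = load_model()(input_ids)[0]
print(torch.allclose(first, second))  # expected True, observed False
```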
## Expected behavior
The prediction value should be deterministic. Note that it *is* deterministic when the model parameters are loaded from a TensorFlow file (with `from_tf=True`).
## Environment info
- `transformers` version: 2.10.0
- Platform: Linux-5.3.0-55-generic-x86_64-with-Ubuntu-19.10-eoan
- Python version: 3.7.5
- PyTorch version (GPU?): 1.5.0 (False)
- Tensorflow version (GPU?): 2.0.0 (False)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
| 06-04-2020 15:36:50 | 06-04-2020 15:36:50 | Hi, as a quick note, all the instructions you've shown could be resumed in the simpler `BertForSequenceClassification.from_pretrained`.
Which checkpoint are you trying to load? How did you obtain it?<|||||>Hi, thanks for the tip.
The checkpoint is from an own finetuned model. But would that matter? I would expect that the model behaves deterministically, even if I put random tensors with the correct shape into the `state_dict`.<|||||>Well, it depends. A few things may be responsible here:
- Your model is not in eval mode (`model.eval()`), resulting in dropout layers affecting your results
- Your fine-tuned model is lacking some layers, which are therefore initialized randomly.
Can you check the logs by putting the following two lines above your model load?
```py
import logging
logging.basicConfig(level=logging.INFO)
```
Can you also try by using the `from_pretrained` method (given that your model filename is `pytorch_model.bin`)?
```py
config = BertConfig.from_json_file(config_filename)
model = BertForSequenceClassification.from_pretrained(model_dir, config=config)
```
Or, simpler, if the configuration is in the same folder as your model filename:
```py
model = BertForSequenceClassification.from_pretrained(model_dir)
```<|||||>Thanks, @LysandreJik , you were exactly right: After setting `model.eval()`, the PyTorch model also behaves deterministically. Rookie mistake :smile:
Since you provided the alternative methods, I checked them, too. The logging does not tell me whether or not the model is in eval mode; it just lists some hyperparameters of the model. At least I can see that there are at least two (`"attention_probs_dropout_prob"` and `"hidden_dropout_prob"`) that make a difference between train and eval mode.
And finally I tried the one-line loading. That also resolves the issue: if you load like that, the model seems to be set to eval mode automatically. So it was not the PyTorch variant that was to blame, but the way of loading (and not explicitly setting eval mode afterwards). Or ultimately me :wink:
Thanks for the quick and competent response!<|||||>The logging is useful when you're loading using `from_pretrained` as it tells you which layers were not initialized with the model. For example if your checkpoint is a base BERT model that you try to load in the sequence classification model, it will load it but the classifier layer would be randomly initialized. The logging would have told you :smile:.
Glad we could resolve your problem! |
transformers | 4,768 | closed | Codecov setup | Setup codecov | 06-04-2020 15:17:00 | 06-04-2020 15:17:00 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4768?src=pr&el=h1) Report
> Merging [#4768](https://codecov.io/gh/huggingface/transformers/pull/4768?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2b8b6c929e282958a920ba2aa26ee59106986ec3&el=desc) will **increase** coverage by `0.01%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4768?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4768 +/- ##
==========================================
+ Coverage 77.31% 77.33% +0.01%
==========================================
Files 128 128
Lines 21059 21059
==========================================
+ Hits 16282 16285 +3
+ Misses 4777 4774 -3
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4768?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4768/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (-0.42%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4768/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.03% <0.00%> (+1.35%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4768?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4768?src=pr&el=footer). Last update [2b8b6c9...fc6d7f5](https://codecov.io/gh/huggingface/transformers/pull/4768?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 4,767 | closed | Model is running on special characters and word pieces for token classification | ['last', 'completed', 'interactions', 'on', 'wed', '##nes', '##day']
and it will return for 9 labels, but should really on return 5 | 06-04-2020 15:16:53 | 06-04-2020 15:16:53 | Hi @andrster , it's not clear what you mean here. Can you please provide more explanation<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,766 | closed | Issue with HANS evaluation | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Albert, Bert
Language I am using the model on (English, Chinese ...): GLUE (MNLI), HANS
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
I have tried to evaluate ALBERT and BERT (pretrained, trained on MNLI) on HANS. I'm using the official example.
Every time I run the evaluation, I get:
```
Heuristic entailed results:
lexical_overlap: 0.0
subsequence: 0.0
constituent: 0.0
Heuristic non-entailed results:
lexical_overlap: 1.0
subsequence: 1.0
constituent: 1.0
```
Results can be reproduced by:
```
python3 hans/test_hans.py --task_name hans --model_type bert/albert --do_eval --data_dir $HANS_DIR --model_name_or_path $MODEL_PATH --max_seq_length 128 --output_dir $MODEL_PATH --per_gpu_eval_batch_size 1024 --overwrite_cache
```
## Expected behavior
It shouldn't be exactly 0 and 1 always and entailment score should be higher than non entailment.
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.11.0
- Platform: Linux 5.4.0-29-generic
- Python version: 3.8.2
- PyTorch version (GPU?): 1.5.0+cu101
- Tensorflow version (GPU?): NA
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| 06-04-2020 13:58:59 | 06-04-2020 13:58:59 | Could you please provide some possible areas to look at ? I can look at it and send a PR.<|||||>@sgugger @julien-c Thank you for the [PR](https://github.com/huggingface/transformers/issues/4742). There is a problem associated with this PR, that is, if you run `python3 evaluate_heur_output.py /path_to_hans_predictions.txt`, I'm getting an error:
```
guess = guess_dict[key]
KeyError: 'ex0'
```
which suggests that a key is missing, indicating that `hans_predictions.txt` is not being generated in the expected manner as [HANS repo](https://github.com/huggingface/transformers/issues/4742) indicates.
As a fix, it would be great if evaluation were carried out within transformers, just like for the GLUE tasks, so that the user doesn't have to rely on an external [repo](https://github.com/huggingface/transformers/issues/4742) to get predictions.
It would be good if you can check this, as this error makes the example unusable.<|||||>I forgot the header on that file, #5082 should fix.<|||||>Evaluation seems to be working fine now. Feel free to close this issue @sgugger . Thanks again. Would be great if evaluation method can be integrated.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,765 | closed | Cannot load pretrained model from repo. | Hi, I'm trying to load a model from the repository:
```
tokenizer = AutoTokenizer.from_pretrained("NeuML/bert-small-cord19-squad2")
model = AutoModelForQuestionAnswering.from_pretrained("NeuML/bert-small-cord19-squad2")
```
but I'm receiving the error:
`OSError: Model name 'NeuML/bert-small-cord19-squad2' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased). We assumed 'NeuML/bert-small-cord19-squad2' was a path or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.
`
I've tried a few other models with the same result. The vocab.txt file is there in each of them. Am I missing something?
EDIT: Current libraries -
beautifulsoup4==4.9.1
boto3==1.10.30
botocore==1.13.30
certifi==2019.11.28
chardet==3.0.4
Click==7.0
docutils==0.15.2
fsspec==0.6.1
future==0.18.2
html2text==2020.1.16
idna==2.8
jmespath==0.9.4
joblib==0.14.0
nltk==3.4.5
numpy==1.17.4
pandas==0.25.3
Pillow==7.1.2
python-dateutil==2.8.0
pytz==2019.3
regex==2019.11.1
requests==2.22.0
s3fs==0.4.0
s3transfer==0.2.1
sacremoses==0.0.35
scikit-learn==0.21.3
scipy==1.3.2
sentencepiece==0.1.83
six==1.13.0
soupsieve==2.0.1
torch==1.5.0
torchvision==0.6.0
tqdm==4.40.0
transformers==2.2.0
urllib3==1.25.7
wikipedia==1.4.0 | 06-04-2020 13:57:06 | 06-04-2020 13:57:06 | Hello @jordaniac89 I just tried this, and it worked. Can you try this again ?<|||||>Yeah, nothing. My assumption was that from_pretrained() should reach out to s3 and download the model if it can't find it?<|||||>Figured it out. Just an fyi that `pip install transformers` installs v. 2.2.0. I needed to specify 2.5.1 with `pip install transformers==2.5.1`<|||||>Glad you could solve the issue! |
transformers | 4,764 | closed | GPT2TokenizerFast raises pad_token error even if not used | # 🐛 Bug
## Information
Model: GPT2
Language: English
Encoding with the `GPT2TokenizerFast` causes a `pad_token` error to be sent to stderr, despite not attempting to access that property.
## To reproduce
```python
import transformers
from transformers import GPT2TokenizerFast
tokenizer = GPT2TokenizerFast.from_pretrained('gpt2')
res = tokenizer.encode("This is a sentence")
print(transformers.__version__)
```
Output:
```
Using pad_token, but it is not set yet.
2.11.0
```
## Expected behavior
I'm aware that GPT-2 doesn't include a pad token (#2630) - I haven't tried to use it. I would expect no error to be displayed until I try to access that property.
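Not part of the original report, but for anyone who does need padding with GPT-2, a common workaround is to assign a pad token explicitly (reusing the EOS token is a convention, not a requirement):
```python
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained('gpt2')
# GPT-2 ships without a pad token; assign one before asking for padding
tokenizer.pad_token = tokenizer.eos_token
batch = tokenizer.batch_encode_plus(
    ["a short sentence", "a slightly longer example sentence"],
    pad_to_max_length=True,
)
print(batch["input_ids"])
```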
## Environment info
- `transformers` version: 2.11.0
- Platform: Ubuntu
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.0 with CUDA
- Tensorflow version (GPU?): n/a
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
| 06-04-2020 12:45:11 | 06-04-2020 12:45:11 | I can reproduce and think I've found the cause, working on this. |
transformers | 4,763 | closed | Model Card for RoBERTa trained on Sanskrit | 06-04-2020 11:42:56 | 06-04-2020 11:42:56 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4763?src=pr&el=h1) Report
> Merging [#4763](https://codecov.io/gh/huggingface/transformers/pull/4763?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5bf9afbf351f9419505eb1c9e0c5ab78883c3caf&el=desc) will **decrease** coverage by `0.31%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4763?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4763 +/- ##
==========================================
- Coverage 77.41% 77.09% -0.32%
==========================================
Files 128 128
Lines 21059 21059
==========================================
- Hits 16302 16236 -66
- Misses 4757 4823 +66
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4763?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4763/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `71.83% <0.00%> (-14.56%)` | :arrow_down: |
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4763/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-6.37%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4763/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.03% <0.00%> (-0.13%)` | :arrow_down: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4763/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.80% <0.00%> (-0.12%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4763/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.79% <0.00%> (+0.16%)` | :arrow_up: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4763/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `74.27% <0.00%> (+0.41%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4763?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4763?src=pr&el=footer). Last update [5bf9afb...30af45b](https://codecov.io/gh/huggingface/transformers/pull/4763?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>How can I solve this codecov/project checks?<|||||>It was a transient CI error. Thank you! |
|
transformers | 4,762 | closed | KeyError in Pipeline Question Answering with LongFormer | I'm trying to do QA with LongFormer in a Pipeline. First of all, I generate the pipeline:
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline

MODEL_STR = "mrm8488/longformer-base-4096-finetuned-squadv2"
tokenizer = AutoTokenizer.from_pretrained(MODEL_STR)
model = AutoModelForQuestionAnswering.from_pretrained(MODEL_STR)
QA = pipeline('question-answering', model=model, tokenizer=tokenizer)
```
Then, I get the paper text I want the answer to come from, named `my_article`, a string containing the full body of the article (around 3000 words). Then, I try:
```python
import torch

with torch.no_grad():
    answer = QA(question=question, context=articles_abstract.body_text.iloc[0])
```
And it throws the following error:
```
KeyError                                  Traceback (most recent call last)
<ipython-input-53-b5f8dc0503c8> in <module>
1 with torch.no_grad():
----> 2 answer = QA(question=question, context=articles_abstract.body_text.iloc[0])
~/miniconda/envs/transformers_env/lib/python3.7/site-packages/transformers/pipelines.py in __call__(self, *args, **kwargs)
1225 ),
1226 }
-> 1227 for s, e, score in zip(starts, ends, scores)
1228 ]
1229
~/miniconda/envs/transformers_env/lib/python3.7/site-packages/transformers/pipelines.py in <listcomp>(.0)
1225 ),
1226 }
-> 1227 for s, e, score in zip(starts, ends, scores)
1228 ]
1229
KeyError: 382
```
How can I solve this issue? More importantly, what do you think is causing the issue?
Thanks in advance! :) | 06-04-2020 11:24:48 | 06-04-2020 11:24:48 | It seems that I have the same (or at least very similar) issue but using `ner` pipeline.
My model is a fine-tuned RoBERTa (`xlm-roberta-base`).
I can produce different predictions with different inputs, but all are way outside the range of the actual label IDs.
The error shows where the predicted label ID can't be found in the `id2label` map in the model config:
```
~/projects/env/lib/python3.7/site-packages/transformers/pipelines.py in __call__(self, *args, **kwargs)
920 filtered_labels_idx = [
921 (idx, label_idx)
--> 922 for idx, label_idx in enumerate(labels_idx)
923 if self.model.config.id2label[label_idx] not in self.ignore_labels
924 ]
~/projects/env/lib/python3.7/site-packages/transformers/pipelines.py in <listcomp>(.0)
921 (idx, label_idx)
922 for idx, label_idx in enumerate(labels_idx)
--> 923 if self.model.config.id2label[label_idx] not in self.ignore_labels
924 ]
925
KeyError: 741
```<|||||>Longformer isn't yet supported in the pipeline. For now you'll need to do this manually as given in the example or doc.
@patrickvonplaten <|||||>That's correct, adding Longformer to the QA pipeline is on the ToDo List :-) <|||||>Actually LongFormer isn't the only model that fails inside the Pipeline. I'm trying to use now 'ktrapeznikov/biobert_v1.1_pubmed_squad_v2' and it throws the same error: KeyError. <|||||>Anyone has an example of how to do QA without the Pipeline? That'd be really helpful for checking whether the models work or not, regardless of them having been added to the pipeline or not. <|||||>@alexvaca0
Please check which architecture you are using, and then go to the docs and find the doc for QA model, it contains the example on how to use it without pipeline. So if your architecture is BERT then there will be a model BertForQuestionAnswering. You'll find the example in the model's doc. Basically what you'll need to do is this
```python3
# import your model class, you can also use AutoModelForQuestionAnswering and AutoTokenizer
from transformers import BertTokenizer, BertForQuestionAnswering
import torch
# load the model and tokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForQuestionAnswering.from_pretrained('bert-large-uncased-whole-word-masking-finetuned-squad')
# encode the question and text
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
encoding = tokenizer.encode_plus(question, text)
input_ids, token_type_ids = encoding["input_ids"], encoding["token_type_ids"]
# do the forward pass, each qa model returns start_scores, end_scores
start_scores, end_scores = model(torch.tensor([input_ids]), token_type_ids=torch.tensor([token_type_ids]))
# extract the span
all_tokens = tokenizer.convert_ids_to_tokens(input_ids)
answer = ' '.join(all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1])
assert answer == "a nice puppet"
```
Hope this helps you.<|||||>Also https://huggingface.co/transformers/usage.html#extractive-question-answering<|||||>> Actually LongFormer isn't the only model that fails inside the Pipeline. I'm trying to use now 'ktrapeznikov/biobert_v1.1_pubmed_squad_v2' and it throws the same error: KeyError.
Feel free to open a separate issue on this so that we can investigate more :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,761 | closed | [cleanup] PretrainedModel.generate: remove unused kwargs | `_generate_beam_search` and `_generate_no_beam_search` do not use `bos_token_id` or `decoder_start_token_id` | 06-04-2020 11:18:29 | 06-04-2020 11:18:29 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4761?src=pr&el=h1) Report
> Merging [#4761](https://codecov.io/gh/huggingface/transformers/pull/4761?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5bf9afbf351f9419505eb1c9e0c5ab78883c3caf&el=desc) will **decrease** coverage by `0.09%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4761?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4761 +/- ##
==========================================
- Coverage 77.41% 77.31% -0.10%
==========================================
Files 128 128
Lines 21059 21059
==========================================
- Hits 16302 16282 -20
- Misses 4757 4777 +20
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4761?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4761/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.15% <ø> (ø)` | |
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4761/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-6.37%)` | :arrow_down: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4761/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.80% <0.00%> (-0.12%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4761/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.79% <0.00%> (+0.16%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4761?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4761?src=pr&el=footer). Last update [5bf9afb...95a6ef1](https://codecov.io/gh/huggingface/transformers/pull/4761?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 4,760 | closed | Fine-tuning of RoBERTa | Good afternoon,
I'm trying to fine-tune RoBERTa on my own dataset, following the instructions provided here: https://huggingface.co/transformers/v1.2.0/examples.html. However, I cannot find the file `run_lm_finetuning.py` - could you please clarify whether the instructions are still valid, or has the file been removed? Thanks in advance!
Be safe,
Akim
| 06-04-2020 11:04:25 | 06-04-2020 11:04:25 | Hello @Aktsvigun
To run the examples you'll need to clone the transformer repo. All examples can be found in examples directory. You can find `run_lm_finetuning` here.
run_language_modeling.py
https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py<|||||>@patil-suraj thank you for your response,
feel a bit dummy, is the function name changed to run_language_modeling? 😹
Cannot find the name `run_lm_finetuning` on the page itself.
<img width="1228" alt="Снимок экрана 2020-06-04 в 16 57 00" src="https://user-images.githubusercontent.com/36672861/83765910-6b0b0680-a684-11ea-8ff6-6ba29f833870.png">
Be safe,
Akim
<|||||>yes, the filename `run_lm_finetuning` is changed to `run_language_modeling.py`<|||||>Hi @Aktsvigun, the documentation you're linking is for `transformers` v1.2.0. If you want to run `v1.2.0` scripts, you should look at the [tag v1.2.0](https://github.com/huggingface/transformers/tree/1.2.0/examples).
Please note that the latest scripts are more stable. As @patil-suraj said, the script you're looking for was renamed, as you can see in the [current documentation](https://huggingface.co/transformers/examples.html). |
transformers | 4,759 | closed | Fix resize_token_embeddings for Transformer-XL | Fixes #3554
As discussed in the issue above, the fix ensures that, by default, the last layer of the `AdaptiveEmbedding` is resized. Otherwise, the target layer can be passed to the `resize_token_embeddings()` method via the `layer` parameter.
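A usage sketch of that behaviour (checkpoint name and sizes are placeholders, keyword names as described above, not the actual test code):
```python
from transformers import TransfoXLLMHeadModel

model = TransfoXLLMHeadModel.from_pretrained("transfo-xl-wt103")
new_size = model.config.vocab_size + 10  # e.g. after adding 10 tokens

# Default: the last layer of the AdaptiveEmbedding is resized
model.resize_token_embeddings(new_size)

# Or, equivalently, target a specific embedding layer explicitly
model.resize_token_embeddings(new_size, layer=-1)
```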
After the resizing is done, the cutoffs are adjusted accordingly. | 06-04-2020 10:49:49 | 06-04-2020 10:49:49 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4759?src=pr&el=h1) Report
> Merging [#4759](https://codecov.io/gh/huggingface/transformers/pull/4759?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5bf9afbf351f9419505eb1c9e0c5ab78883c3caf&el=desc) will **decrease** coverage by `0.05%`.
> The diff coverage is `95.65%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4759?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4759 +/- ##
==========================================
- Coverage 77.41% 77.35% -0.06%
==========================================
Files 128 128
Lines 21059 21105 +46
==========================================
+ Hits 16302 16325 +23
- Misses 4757 4780 +23
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4759?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/4759/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `78.65% <95.65%> (+1.69%)` | :arrow_up: |
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4759/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-6.37%)` | :arrow_down: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4759/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.80% <0.00%> (-0.12%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4759?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4759?src=pr&el=footer). Last update [5bf9afb...e856841](https://codecov.io/gh/huggingface/transformers/pull/4759?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks!
Yes sure, I was already thinking that a test could be useful here. I will try my best and add a test in [test_modeling_transfo_xl.py](https://github.com/huggingface/transformers/blob/master/tests/test_modeling_transfo_xl.py), correct? Do I have to add a separate class for it or include it in [TransfoXLModelTester](https://github.com/huggingface/transformers/blob/2b8b6c929e282958a920ba2aa26ee59106986ec3/tests/test_modeling_transfo_xl.py#L42) ?
Is there any proper way to run/debug only one test so I can try out my test-code easily?<|||||>I was thinking of putting it in `TransfoXLModelTest` under the name `test_resize_tokens_embeddings` so that it overrides the parent class' method. Alongside the method `test_model_from_pretrained`, do you see what I mean?
You can run the test suite for only this file using the following command (requires `pytest` and `pytest-cov` installed, which gets you the best stacktrace):
```
python -m pytest -sv ./tests/*modeling_transfo* --cov
```
Let me know if you need any help!<|||||>Ok I added a test for the new `resize_token_embeddings` method.
P.S. running `isort --recursive examples templates tests src utils` reformats the file `examples\benchmarking\plot_csv_file.py` for me. Should I add this change too or ignore it?<|||||>@LysandreJik I'm happy to contribute :)
I have another question: Since my implementation supports resizing embedding layers other than the first, the added tokens have to be moved in the tokenizer as well. I also have a solution for this which I added into the `TransfoXLTokenizer`. Should I open a separate issue and PR for this (or just PR) or add it here, as it's somehow related?<|||||>@patrickvonplaten, since you've worked a bit with TransformerXL in the past, do you want to take a look before we merge?<|||||>I like it - looks very clean to me! |
transformers | 4,758 | closed | run_tf_ner.py output_dir/saved_model empty | # ❓ Questions & Help
https://github.com/huggingface/transformers/tree/master/examples/token-classification
I followed the example here and saw the logging message:
` Saving model in model/saved_model`
Then I went to the folder saved_model and found it is empty.
Is this expected?
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
| 06-04-2020 10:20:05 | 06-04-2020 10:20:05 | I got my answers here: https://github.com/huggingface/transformers/issues/3246
so I am closing this issue. |
transformers | 4,757 | closed | Add drop_last arg for data loader | Add an extra argument to `TrainingArguments` that would be passed on to `Trainer` for use in DataLoader.
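For reference, a sketch of what the flag does on a plain PyTorch `DataLoader` (standard API, not the Trainer code itself):
```python
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.arange(10).unsqueeze(1))  # 10 samples
loader = DataLoader(dataset, batch_size=4, drop_last=True)
# The incomplete final batch of 2 is skipped, so every batch has the full size
print([batch[0].shape[0] for batch in loader])  # [4, 4]
```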
I ran into a problem while using the `Trainer` this week and the GPU expecting the full batch size of vector inputs, and put a workaround in place in the dataset class I was using, but would be useful to have this as an optional argument. | 06-04-2020 05:28:27 | 06-04-2020 05:28:27 | Hi! This looks like a cool feature to add, indeed. I'm curious, what was the error you obtained because of an incomplete batch? It shouldn't raise an error if a batch is smaller than what it has previously seen.
I could see it being useful when the framework needs to trace with a given input size though, like with TPUs or with JAX.<|||||>Hey @LysandreJik, thanks for taking a look!
This error occurred on the last step of the epoch:
`RuntimeError: Gather got an input of invalid size: got [2, 1, 20, 256, 64], but expected [2, 2, 20, 256, 64] (gather at /AWS-PyTorch/torch/csrc/cuda/comm.cpp:231)`
Because of the nature of the error and it occurring on the last step, my suspicion was it was because of `drop_last`. I implemented a workaround for it and that stopped the error from re-appearing.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4757?src=pr&el=h1) Report
> Merging [#4757](https://codecov.io/gh/huggingface/transformers/pull/4757?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5bf9afbf351f9419505eb1c9e0c5ab78883c3caf&el=desc) will **decrease** coverage by `0.07%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4757?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4757 +/- ##
==========================================
- Coverage 77.41% 77.34% -0.08%
==========================================
Files 128 128
Lines 21059 21060 +1
==========================================
- Hits 16302 16288 -14
- Misses 4757 4772 +15
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4757?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4757/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <ø> (ø)` | |
| [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/4757/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `76.66% <100.00%> (+0.26%)` | :arrow_up: |
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4757/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (-6.37%)` | :arrow_down: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4757/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.80% <0.00%> (-0.12%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4757/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.96% <0.00%> (+0.32%)` | :arrow_up: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4757/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.03% <0.00%> (+1.35%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4757?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4757?src=pr&el=footer). Last update [5bf9afb...9a8fb7a](https://codecov.io/gh/huggingface/transformers/pull/4757?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>I'm assuming you use GPU distribution – what method do you use for distribution? `nn.DataParallel`? <|||||>@julien-c: Correct, it is using `nn.DataParallel` under the hood.<|||||>It should work out of the box with torch.distributed instead of nn.DataParallel.
I have no objection to merging this though :)<|||||>Hmmm, this was on AWS SageMaker, so I'll double-check how it is implemented there.
Good recommendation on the change. Also, another question: Should the arg be separate for train and eval data loaders? I assumed not, but just wanted to confirm :).<|||||>No I can't think of a scenario where one would want to drop_last in train and not in eval (or inversely)
Thank you, merging |
transformers | 4,756 | closed | [WIP] feat(wandb): add logging to TFTrainer | Bring logging feature parity from `Trainer` to `TFTrainer`.
Code has been refactored to share logging utilities. | 06-04-2020 01:35:33 | 06-04-2020 01:35:33 | Note that there are still a few differences.
For example `TFTrainer` uses `args.eval_steps`.
It could make sense to refactor training args so both classes share the same ones when possible.<|||||>Here is an example run with `TFTrainer` on MRPC.

[Link to W&B run](https://app.wandb.ai/borisd13/huggingface/runs/1zngxzw0?workspace=user-borisd13)<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4756?src=pr&el=h1) Report
> Merging [#4756](https://codecov.io/gh/huggingface/transformers/pull/4756?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e80d6c689bd62f805a5c8d77ec0cc3b09f240d14&el=desc) will **decrease** coverage by `0.00%`.
> The diff coverage is `39.24%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4756?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4756 +/- ##
==========================================
- Coverage 77.10% 77.09% -0.01%
==========================================
Files 128 128
Lines 21723 21734 +11
==========================================
+ Hits 16749 16756 +7
- Misses 4974 4978 +4
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4756?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/trainer\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/4756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `19.06% <9.67%> (+0.02%)` | :arrow_up: |
| [src/transformers/trainer\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `69.76% <56.66%> (-30.24%)` | :arrow_down: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.10% <61.11%> (-0.15%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.38% <0.00%> (-0.41%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4756?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4756?src=pr&el=footer). Last update [e80d6c6...28342e9](https://codecov.io/gh/huggingface/transformers/pull/4756?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>@jplu while I'm doing it, is the `step` variable from `_prediction_loop` used or can I remove it?<|||||>It is used for logging.<|||||>Ping @julien-c as there are some changes in the PT trainer.<|||||>> It is used for logging.
Ok, I don't see it being used anywhere in that function…<|||||>Hummm indeed... Can you still keep it? I will review this later to better check that part :)<|||||>I should have addressed your comments. I moved TensorBoard-specific logging directly into the respective trainers and call it from the shared `log_metrics`.
It could have also been the opposite with a public `trainer.log` method that calls `log_metrics` for everything non-tensorboard (wandb and stdout) but I thought this separation would be less obvious to users.
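Roughly, the shared helper I have in mind looks like this (an illustrative sketch only, not the exact code in this PR; the function name and the TensorBoard callback argument are placeholders):
```python
import logging

logger = logging.getLogger(__name__)


def log_metrics(metrics, step, tensorboard_writer=None):
    """Framework-agnostic logging shared by Trainer and TFTrainer (illustrative sketch)."""
    # TensorBoard is the only framework-specific piece, so each trainer passes its own callback
    if tensorboard_writer is not None:
        tensorboard_writer(metrics, step)
    try:
        import wandb

        wandb.log(metrics, step=step)
    except ImportError:
        pass
    logger.info({**metrics, "step": step})
```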
Let me know if you have any other suggestions.
The main differences now between the 2 trainers are the use of `args.debug` and `args.eval_steps` in `TFTrainer`.<|||||>Let me know if anything else is needed<|||||>The last CI error seems unrelated to this PR.<|||||>Ok it is fine for me for the TF part. I let @julien-c to review.<|||||>Can I have any feedback on any possible changes still required?
This is a pretty large refactor of logging so it's hard to keep up to date with the repo ;)<|||||>I merged master. Main change is logging for wandb & Tensorboard is applied only for world master now.
Need to figure out if the same should apply for `TFTrainer`. I left a comment on the code.
Once we finalize this, I'll run again logging with both `Trainer` and `TFTrainer` to make sure everything works and later I'll work on tests in a follow-up PR.<|||||>Thanks! Looks ok, about the world master there is no need for the TF part.<|||||>Based on above comments, I can just put all the wandb logic separately in their respective Trainer's (and not use `trainer_utils` anymore).
Could you confirm this is the way to go @jplu @julien-c <|||||>I made a new PR as I understood you prefer not to refactor logging into a single file like I did here.
Feel free to close this one if my understanding was correct.<|||||>Obsolete PR |
transformers | 4,755 | closed | run_ner.py crashes with RoBERTa because of incorrect sequence length | # 🐛 Bug
## Information
I'm running `examples/token-classification/run_ner.py` with RoBERTa. An assert statement fails:
```
assert len(input_ids) == max_seq_length
AssertionError
```
Looks like the cause is a mismatch between the value of roberta's `tokenizer.num_special_tokens_to_add()` and the number of special tokens that is actually added to the sequence in `utils_ner.py::convert_examples_to_features()`.
Specifically, `tokenizer.num_special_tokens_to_add()` is 2 (presumably for `<s>` and `</s>`). However, `convert_examples_to_features()` adds an extra `</s>` token at line 331, in addition to the `<s>` token and first `</s>` token. So the result is that there are three special tokens, and the sequence ends with `</s> </s>`.
`convert_examples_to_features()` relies on `num_special_tokens_to_add()` to determine how many content tokens from the sequence to use, but because of the mismatch above, you can end up with a sequence length of 129 even when the sequence length was set to a max of 128.
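A quick way to confirm the first half of this (a small sketch, using the same method the numbers above come from):
```python
from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")

# Reports 2 special tokens (<s> and </s>) for a single sequence ...
print(tokenizer.num_special_tokens_to_add())

# ... but utils_ner.py appends a second </s> for RoBERTa-style models, so 3 special tokens
# end up in the sequence and max_seq_length can be exceeded by 1.
```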
To reproduce this, just follow the instructions at `examples/token-classification/README.md` except use the flag `--model_name_or_path roberta-base`.
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
## Environment info
- `transformers` version: 2.11.0
- Platform: Linux-4.15.0-101-generic-x86_64-with-debian-buster-sid
- Python version: 3.6.10
- PyTorch version (GPU?): 1.5.0 (True)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: fails in both cases
- Using distributed or parallel set-up in script?: no
| 06-04-2020 00:56:32 | 06-04-2020 00:56:32 | Getting the same error. Any solution yet?<|||||>The quick temp fix would be either to change `num_special_tokens_to_add()` to return 3, or to not write the extra sep tag, since this is single-sequence tagging.
I'm guessing the second option is more appropriate because the code that adds the extra sep tag is in utils_ner.py, and as far as I'm aware NER should never involve more than one sep tag.<|||||>I will prepare a fix for that soon! |
transformers | 4,754 | closed | bart-large-cnn model weights updated? | Hi,
Today, I noticed that the bart model path has been changed to 'facebook/bart-large-cnn' rather than 'bart-large-cnn'. I ran the demo below, but the results have changed since then (they seem worse).
https://colab.research.google.com/drive/11hKBPfsfBXPKo-dK_gHsPklF4PcNflQZ#scrollTo=dyTJ_ZavDp1q
So, were the weights updated?
Thanks,
Max | 06-04-2020 00:22:22 | 06-04-2020 00:22:22 | Hello! The weights have not been updated. The path has been changed in the last version, as you can see in the [release notes](https://github.com/huggingface/transformers/releases/tag/v2.11.0) (cc @julien-c), but no change was done to these weights.
|
transformers | 4,753 | closed | Can I use TorchText Iterator output as the input_ids for Hugging Face Transformer? | Hello,
For a given text, a `TorchText` iterator, such as the `BPTTIterator`, returns the text with each token converted to its respective integer ID.
So for example, if the integer IDs are assigned in the following manner:
"I" = 53, "like"=753, "dogs" = 2
Then for the string "I like dogs", a `TorchText` iterator would return `[53, 753, 2]`.
Is it okay to use this type of TorchText iterator output directly as the `input_ids` for Hugging Face Transformers, provided that the Transformer models I use are not the Hugging Face pre-trained ones?
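For concreteness, here is roughly what I mean (a minimal sketch; the config values are arbitrary and the model is randomly initialized, not a pre-trained checkpoint):
```python
import torch
from transformers import BertConfig, BertModel

# A small, randomly initialized model -- not one of the pre-trained checkpoints
config = BertConfig(vocab_size=30000, hidden_size=128, num_hidden_layers=2,
                    num_attention_heads=2, intermediate_size=256)
model = BertModel(config)

# IDs produced by a TorchText iterator (the toy values from above);
# they only need to index into the same vocabulary size the model was built with
input_ids = torch.tensor([[53, 753, 2]])
outputs = model(input_ids=input_ids)
```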
Thank you,
| 06-03-2020 23:38:47 | 06-03-2020 23:38:47 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,752 | closed | Batching not speeding up Transformer-XL | I have modified the example [run_generation.py](https://github.com/huggingface/transformers/blob/master/examples/text-generation/run_generation.py) so that it can use batches. My code (pared down for the example) is below, called `batch_gen.py`:
```
#!/usr/bin/env python3
# coding=utf-8
import argparse
import logging
import numpy as np
import torch
from transformers import (
    GPT2LMHeadModel,
    GPT2Tokenizer,
    TransfoXLLMHeadModel,
    TransfoXLTokenizer,
)

logging.basicConfig(
    format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", datefmt="%m/%d/%Y %H:%M:%S", level=logging.INFO,
)
logger = logging.getLogger(__name__)

MAX_LENGTH = int(10000)  # Hardcoded max length to avoid infinite loop

MODEL_CLASSES = {
    "gpt2": (GPT2LMHeadModel, GPT2Tokenizer),
    "transfo-xl": (TransfoXLLMHeadModel, TransfoXLTokenizer),
}
# Convert a list of prompts (strings) into batches (lists of strings,
# where each list is of size batch_size). The final batch might be
# smaller than batch_size
def batchify_prompts(prompt_list, batch_size):
    batches = []
    this_batch = []

    for prompt in prompt_list:
        this_batch.append(prompt)
        if len(this_batch) == batch_size:
            batches.append(this_batch[:])
            this_batch = []

    if len(this_batch) > 0:
        batches.append(this_batch)

    return batches
parser = argparse.ArgumentParser()
parser.add_argument("--model_type",default=None,type=str,required=True,help="Model type selected in the list: " + ", ".join(MODEL_CLASSES.keys()),)
parser.add_argument("--model_name_or_path",default=None,type=str,required=True,help="Path to pre-trained model or shortcut name selected in the list: " + ", ".join(MODEL_CLASSES.keys()),)
parser.add_argument("--length", type=int, default=20)
parser.add_argument("--prompt_file", type=str, default=None, help="File of prompts, 1 prompt per line.")
parser.add_argument("--batch_size", type=int, default=10, help="Number of prompts to include in a batch.")
args = parser.parse_args()
args.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
args.n_gpu = torch.cuda.device_count()
# Create file to print to
output_filename = "_".join([str(x) for x in [args.model_type, args.prompt_file.split("/")[-1]]]) + ".generated"
fo = open(output_filename, "w", encoding="utf-8")
args.model_type = args.model_type.lower()
model_class, tokenizer_class = MODEL_CLASSES[args.model_type]
tokenizer = tokenizer_class.from_pretrained(args.model_name_or_path)
model = model_class.from_pretrained(args.model_name_or_path)
model.to(args.device)
# Read in prompts from file
prompt_file = open(args.prompt_file, "r", encoding="utf-8")
prompt_list = []
for prompt_line in prompt_file:
    prompt_list.append(prompt_line)
prompt_batches = batchify_prompts(prompt_list, args.batch_size)
# Generate text for each prompt
for prompt_batch in prompt_batches:
    tokenizer.pad_token = "<PADDINGTOKEN>"
    tokenizer.padding_side = "left"

    encoding = tokenizer.batch_encode_plus(prompt_batch, add_special_tokens=False, return_tensors="pt", pad_to_max_length=True, add_space_before_punct_symbol=True)
    encoded_prompt = encoding["input_ids"]

    # Attention mask is not automatically returned by batch_encode_plus, so here we generate it manually
    attention_mask = 1 - (encoded_prompt == tokenizer.pad_token_id).type(torch.LongTensor)

    encoded_prompt = encoded_prompt.to(args.device)

    if encoded_prompt.size()[-1] == 0:
        input_ids = None
    else:
        input_ids = encoded_prompt

    output_sequences = model.generate(
        input_ids=input_ids,
        max_length=50 + len(encoded_prompt[0]),
        min_length=50 + len(encoded_prompt[0]),
        temperature=1.0,
        top_k=40,
        top_p=1,
        repetition_penalty=1.0,
        do_sample=True,
        num_return_sequences=1,
        attention_mask=attention_mask,
    )

    # Write the generations to the output file
    for generated_sequence_idx, generated_sequence in enumerate(output_sequences):
        fo.write("=== PROMPT ===\n")
        generated_sequence = generated_sequence.tolist()

        # Decode text
        text = tokenizer.decode(generated_sequence, clean_up_tokenization_spaces=True)

        # Add the prompt at the beginning of the sequence. Remove the excess text that was used for pre-processing
        generated_sequence = (
            text[len(tokenizer.decode(encoded_prompt[0], clean_up_tokenization_spaces=True)) :]
        )

        fo.write(prompt_batch[generated_sequence_idx] + "\n=== GENERATED ===\n")
        fo.write(generated_sequence + "\n\n")
```
To test the speedup provided by batching, I use a text file called `prompts.txt` with the following prompts:
```
The accompanying music video , directed by Vaughan Arnell ,
Inspired by the Beach Boys , cult surfing films ,
Premiering worldwide on Vevo on 7 January 2013 , the
The video features scenes reminiscent of the films South Pacific
The music video garnered 10 @.@ 4 million views in
Despite a 34 % gain in weekly activity to their
191 @,@ 000 Twitter followers added contributed to their overall
Rebecca <unk> of E ! Online praised its " intentionally
Molly Chance , writing for Zap2it , was convinced that
Mikael Wood , the critic for Los Angeles Times ,
It is said that when he died in Osaka during
A variety of styles have been used in efforts to
As Burton Watson remarks in The Selected Poems of Du
The translators have had to contend with bringing out
One extreme on each issue is represented by Kenneth Rexroth
His are free translations , which seek to conceal the <unk>
Other translators have placed much greater weight on trying to
Vikram Seth in Three Chinese Poets uses English @-@ style
In The Selected Poems of Du Fu , Burton Watson follows the
Traditional Chinese literary criticism emphasized the life of the author
Since many of Du Fu 's poems feature morality and
Another reason , identified by the Chinese historian William Hung
For modern Western readers , " The less accurately we
Stephen Owen suggests a third factor particular to Du Fu
Most of what is known of Du Fu 's life
His paternal grandfather was Du <unk> , a noted politician
Du Fu was born in 712 ; the exact birthplace
In later life , he considered himself to belong to
He also had three half brothers and one half sister
The son of a minor scholar @-@ official , his
```
The following command is used to run the code with GPT-2:
```
python batch_gen.py --model_type=gpt2 --model_name_or_path=gpt2 --prompt_file prompts.txt --batch_size 10
```
With GPT-2, batching speeds up the runtime as expected: Each batch takes approximately 1 second, regardless of whether the batch size is 1, 5, or 10. However, with Transformer-XL, this is not the case. Here is the command to run with Transformer-XL:
```
python batch_gen.py --model_type=transfo-xl --model_name_or_path=transfo-xl-wt103 --prompt_file prompts.txt --batch_size 1
```
With a batch size of 1, each batch takes 3 seconds. With a batch size of 5, each batch takes 12 seconds. With a batch size of 10, each batch takes 21 seconds. Thus, batching is not providing much of a speedup compared to generating examples serially. (You can see the amount of time each batch takes by looking at the time stamps on the log messages that are printed out).
Therefore, I am wondering if there is a bug in the batching for Transformer-XL? Or is there some reason why the architecture cannot support efficient batching?
I am running this code on a p100 GPU through Ubuntu version 18.04 with PyTorch version 1.5.0 and Python version 3.7.7.
Thank you!
| 06-03-2020 21:38:41 | 06-03-2020 21:38:41 | Hey! I've also observed this with the CMU Transformer-XL codebase. The main difference with other Transformers is the adaptive softmax, so that's the first I'd look at; does an XL model with a normal projection layer also have problems with batching ?
I was actually planning to investigate a suspected bug in HF's Transformer-XL training performance tomorrow morning, so if it's not urgent I can also take a look at that at the same time.<|||||>It's encouraging to hear that someone else has observed this! Thanks for the suggestion - I just tried turning off the adaptive softmax (by changing the line `model = model_class.from_pretrained(args.model_name_or_path)` to `model = model_class.from_pretrained(args.model_name_or_path, adaptive=False)`), but that did not change the runtimes.
It's not urgent, so it would be much appreciated if you can take a look!<|||||>So here are my observations for now, running on my laptop's RTX 2070 (transformers 2.11.0, torch 1.5.0, python 3.6.9, CUDA 10.2, no mixed precision) at training time for that other bug hunt:
- passing `adaptive=False` does not actually do anything as far as I can tell, the `adaptive` attribute of `config` isn't used anywhere
- at training time, the XL model with adaptive softmax seems to be both quicker and more batch-friendly than GPT 2 and an XL model with a normal Linear projection layer.
| batch size | Adaptive XL | Linear XL | GPT-2 |
| --- | --- | --- | --- |
| 1 | 33.27 it/s | 29.16 it/s | 35.06 it/s |
| 2 | 31.06 it/s | 19.93 it/s | 24.86 it/s |
| 4 | 29.30 it/s | 13.63 it/s | 14.87 it/s |
| 8 | 23.03 it/s | 7.85 it/s | 8.49 it/s |
So that's pretty strange. What is your version of transformers ? I'll be looking at inference time now, as it may be different from training to inference. EDIT: also the case for me at inference time
| batch size | Adaptive XL | Linear XL | GPT-2 |
| --- | --- | --- | --- |
| 1 | 286.92 it/s | 197.25 it/s | 216.45 it/s |
| 2 | 264.54 it/s | 102.02 it/s | 109.74 it/s |
| 4 | 214.71 it/s | 56.27 it/s | 59.91 it/s |
| 8 | 148.69 it/s | 30.35 it/s | 31.97 it/s |
Another lead is the einsum function; it's used in transformer-XL but doesn't look like it is used in GPT-2, and I know that it can behave poorly sometimes especially in mixed-precision settings. Are you using apex?<|||||>Interesting!
I'm using transformers 2.10.0, and am not using apex.
If you're able to share the code you were using for inference time, that would be helpful, so I can try it & see if it's my code or my environment that's giving us different results.<|||||>I cleaned up the code a bit and uploaded it on [Google Drive](https://drive.google.com/file/d/1dpHwVdAcchb87ZOXoAi_qP5wmqTNEHtS/view?usp=sharing). It uses lightning and operates on real wt103 data (included in the zip) so it's not quite minimal though.
Another (more remote) possibility is an issue in batching, after looking again at my dataloader code it was a bit more complex than usual to support transfoXL memories.<|||||>Thanks for the code! The main difference I see between your code and mine is that I am using the ```generate``` function, whereas you don't. After looking into the ```generate``` function for Transformer-XL, I believe I have found a bug.
Here is code that uses greedy generation without the ```generate``` function:
```
from transformers import TransfoXLLMHeadModel, TransfoXLTokenizer
import torch
tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")
model = TransfoXLLMHeadModel.from_pretrained('transfo-xl-wt103')
generated = tokenizer.encode("The Manhattan Bridge")
context = torch.tensor([generated])
mems = None
for i in range(100):
    print(i)
    output, mems = model(context, mems=mems)[:2]
    token = torch.argmax(output[..., -1, :])
    generated += [token.tolist()]
    context = token.unsqueeze(0).unsqueeze(0)

sequence = tokenizer.decode(generated)
print(sequence)
```
This generates the following text:
> The Manhattan Bridge, <eos> <eos> = = = = The Bridge = = = = <eos> <eos> The bridge over the Delaware River was built in the late 19th century by the Delaware and Hudson Canal Company. The bridge was built in the style of a drawbridge, with a single span of 1 @,@ 200 feet ( 370 m ). The bridge was designed by John Roebling, who also designed the Delaware River Bridge. The bridge was built in the style of a drawbridge, with a single span of 1 @,@ 200 feet ( 370 m
The code below should also generate the same text, just using the ```generate``` function:
```
from transformers import TransfoXLLMHeadModel, TransfoXLTokenizer
import torch
tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")
model = TransfoXLLMHeadModel.from_pretrained('transfo-xl-wt103')
model.to("cuda")
generated = tokenizer.encode("The Manhattan Bridge")
context = torch.tensor([generated]).to("cuda")
mems = None
print(context)
output_sequences = model.generate(
    input_ids=context,
    max_length=100 + len(generated),
    min_length=100 + len(generated),
    eos_token_id=267734,
    # temperature=1.0,
    # top_k=1,
    # top_p=1.0,
    # do_sample=True,
    # num_return_sequences=1,
)
sequence = tokenizer.decode(output_sequences[0])
print(sequence)
```
However, it does not give the same output; instead, it generates:
> The Manhattan Bridge, <eos> the Brooklyn Bridge, <eos> the Brooklyn Bridge, <eos> the Brooklyn Bridge, <eos> the Brooklyn Bridge, <eos> the Brooklyn Bridge, <eos> the Brooklyn Bridge, <eos> the Brooklyn Bridge, <eos> the Brooklyn Bridge, <eos> <eos> = = = The Manhattan Bridge, <eos> <eos> =, " the <eos> <eos> <eos> the <eos> the the.. the <eos>, <eos>, The, <eos> The The, <eos> The New York Bridge, <eos> is a double @-@ A @-@ A @-@ The Manhattan Bridge, <eos> the Brooklyn Bridge,
I was able to fix the discrepancy by changing the ```prepare_inputs_for_generation``` function of Transformer-XL to the code below (similar to the code used for that function in GPT-2):
```
def prepare_inputs_for_generation(self, input_ids, past, **model_kwargs):
    inputs = {}

    # if past is defined in model kwargs then use it for faster decoding
    if past:
        inputs["mems"] = past
        inputs["input_ids"] = input_ids[:, -1].unsqueeze(-1)
    else:
        inputs["input_ids"] = input_ids

    return inputs
```
With this code, the ```generate``` function gives the same output as a for-loop. In addition, this also speeds up generation substantially: My use case is generating 500-token text from 512-token prompts, and that now takes about 30 seconds per prompt, while previously it was 3 minutes per prompt. Batching also is now more helpful than before - still not as helpful as I would expect, but that doesn't matter because it's now fast enough to be perfectly useful for me.
I've made a draft pull request here: https://github.com/huggingface/transformers/pull/4826. But I'm not sure if it's ready to be submitted (I've never submitted a pull request before): some of the tests in ```make test``` fail, and I'm not sure what is required for step 5 of the pull request checklist ("Add high-coverage tests.").
<|||||>Fixed by #4826 |
transformers | 4,751 | closed | Update encode documentation | closes #4750 | 06-03-2020 20:30:44 | 06-03-2020 20:30:44 | |
transformers | 4,750 | closed | Tokenizer.encode documentation not correct | # 🐛 Bug
## Information
Model I am using Bert (bert-base-german-cased):
Language I am using the model on German:
The problem arises when using:
* tokenizer of this model
The tasks I am working on is:
* encoding
## To reproduce
Steps to reproduce the behavior:
1. lang_model = 'bert-base-german-cased'
2. tokenizer = BertTokenizer.from_pretrained(lang_model)
3. test_sentence = 'Das war gut'
4. tokenizer.encode(test_sentence)
output: [3, 295, 185, 1522, 4]
5. tokenizer.convert_tokens_to_ids(tokenizer.tokenize(test_sentence))
output: [295, 185, 1522]
## Expected behavior
According to the documentation of the encoding method in
https://huggingface.co/transformers/main_classes/tokenizer.html
these two outputs should be the same. The problem is that the _encode_ method adds the special tokens [CLS] and [SEP] at the beginning and the end, whereas the transformation in step 5 does not. This is not a problem at all, but one should consider correcting the online documentation. That is, the hint
**Same as doing self.convert_tokens_to_ids(self.tokenize(text))**
in the encode method is misleading. Instead, one could add that these two commands are the same if _add_special_tokens_ is set to _False_ in the _encode_ method.
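To make the suggested clarification concrete (a quick sketch using the outputs from above):
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-german-cased")
text = "Das war gut"

assert tokenizer.encode(text) == [3, 295, 185, 1522, 4]  # with [CLS]/[SEP]
assert tokenizer.encode(text, add_special_tokens=False) == \
    tokenizer.convert_tokens_to_ids(tokenizer.tokenize(text))  # [295, 185, 1522]
```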
## Environment info
- `transformers` version: 2.8.0
- Platform: Windows 10
- Python version: 3.8.3
- PyTorch version (GPU?): 1.5.0+cpu
- Tensorflow version (GPU?): None
- Using GPU in script?:No
- Using distributed or parallel set-up in script?:No
| 06-03-2020 20:08:29 | 06-03-2020 20:08:29 | You're right! The documentation should be updated.<|||||>Done! Thanks for raising an issue :)<|||||>You are welcome. Thank you for the transformers package -- great work!<|||||>Hi, I was checking the documentation page however, could not find the documentation for encode and encode_plus.
[Documentation Link](https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.encode)
Can someone point me to the right documentation for these methods?<|||||>Hi, I think, in the newer version of the transformer package, the encode_plus method has been consumed by the __call__ method which yields (as batch_encode_plus, and encode_plus) a BatchEncoding object. The preamble of the Tokenizer documentation contains this information. <|||||>@ank-shukla the response given by @lubok-dot is correct, the `__call__` method should be used instead of `encode` and `encode_plus` in newer versions. If you're on an older version or would still like to use said methods, please check an older version of the documentation, for example [v2.11.0](https://huggingface.co/transformers/v2.11.0/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.encode_plus)<|||||>@lubok-dot @LysandreJik Thanks for the clarification. I am on the new version and the said methods encode and encode_plus still work as expected. I understand now that __call__ method can carry out the same process based on the parameters. |
transformers | 4,749 | closed | Hugging Face GPT-2 Tokenizer | Hello,
I know that if I choose to add any new "special token" onto the pre-made GPT-2 tokenizer, and if I want to use the pre-trained GPT-2 model for my analysis, I will need to re-train the pre-trained GPT-2 to make the model learn that new special token.
But what if I just add an extra non-special token? for example, a word "paradox" is not included in the existing GPT-2 tokenizer, so say I add the word "paradox" to the existing set of GPT-2 vocabulary, like below:
```python
# load the pre-trained GPT2-tokenizer
gpt2_tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
# adding a new word (not special token) to the existing vocabulary,
# but I am not making any changes to the pre-assigned special tokens
gpt2_tokenizer.add_tokens("paradox")
# get the pre-trained HuggingFace GPT2DoubleHeadsModel
model_gpt2DoubleHeadsModel = GPT2DoubleHeadsModel.from_pretrained('gpt2', output_hidden_states = True)
# resize the token embeddings
# (not sure what this function does)
model_gpt2DoubleHeadsModel.resize_token_embeddings(len(gpt2_tokenizer))
```
Given that I didn't make any changes to the special tokens in the GPT-2-tokenizer, do I still need to train the already pre-trained `GPT2DoubleHeadsModel` before I start using it, just because I added a new word to the set of vocabulary?
Thank you, | 06-03-2020 19:56:37 | 06-03-2020 19:56:37 | You would need to fine-tune your GPT-2 model on a dataset containing the word, yes. The reason being that your model needs to understand in which context is the word used, what it means, etc.<|||||>Hello,
Thank you for your reply.
So I have a set of multiple choice questions, and when I use the `add_tokens` function to add whichever the tokens from the dataset that are not originally included in the GPT-2 tokenizer, the length of my GPT-2 tokenizer jumps up by ~3000 (so my dataset contains 3000 new tokens)
even if I fine tune the pre-trained GPT-2 model on a portion of my dataset (say), I won't be able to train the pre-trained model on all of the new tokens. So I am not sure how I will go about this.
What I am trying to do is though, I want the pre-trained `GPT2DoubleHeadsModel` to solve a set of multiple-choice questions, and I want to compare the error rates generated by the hidden outputs of each of the 12 layers when they are fed directly into the multiple-choice head of the model. That is, my goal is not to minimize the overall error rate of the GPT-2 model, my goal is to simply compare the error rates generated by the different layers of the model. Given this information, do I still need to fine-tune my GPT-2 model on all of the new tokens that I am adding?
Thank you,<|||||>If you don't fine-tune your model on the new tokens you're adding, then when the model sees it at inference it will be a completely unknown token, and the model probably won't handle it correctly.
If you don't have any training data to fine-tune the model, why don't you keep the tokens as they are? The GPT-2 tokenizer should be able to correctly tokenize them, as it's a byte level BPE. <|||||>Hello,
Thank you again for your reply.
I think I want to add the word as a new token mainly because I do not want the word to be treated as the mere `<unk>` token.
I am not sure what byte level BPE means, but if I do not add the new words as extra tokens, would it really work just because the tokenizer is a byte level BPE?
Thank you :s<|||||>Byte level BPEs should be able to tokenize everything. The GPT-2 tokenizer has no unknown token for that reason.
You should try to tokenize your tokens to see if some come back as unknown, but it shouldn't with GPT-2 (and RoBERTa for that matter)!<|||||>Thank you! I was able to confirm that what you mentioned in your previous post also works for my case. |
transformers | 4,748 | closed | QuestionAnsweringPipeline query performance | This is my first issue posted here, so first off thank you for building this library, it's really pushing NLP forward.
The current [QuestionAnsweringPipeline](https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines.py#L1187) relies on the method [squad_convert_examples_to_features](https://github.com/huggingface/transformers/blob/ed4df85572924871758ca32133b46116121c706f/src/transformers/data/processors/squad.py#L269) to convert question/context pairs to SquadFeatures. In reviewing this method, it looks like it spawns a process for each example.
This is causing performance issues when looking to support near real-time queries or bulk queries. As a workaround, I can directly issue the queries against the model but the pipeline has a lot of nice logic to help format answers properly and pulling the best answer vs start/end argmax.
Please see the results of a rudimentary performance test to demonstrate:
```python
import time
from transformers import pipeline
context = r"""
The extractive question answering process took an average of 36.555 seconds using pipelines and about 2 seconds when
queried directly using the models.
"""
question = "How long did the process take?"
nlp = pipeline("question-answering", model="distilbert-base-cased-distilled-squad", tokenizer="distilbert-base-cased-distilled-squad")
start = time.time()
for x in range(100):
    answer = nlp(question=question, context=context)

print("Answer", answer)
print("Time", time.time() - start, "s")
```
```
Answer {'score': 0.8029816785368773, 'start': 62, 'end': 76, 'answer': '36.555 seconds'}
Time 36.703474044799805 s
```
```python
import torch
from transformers import pipeline, AutoModelForQuestionAnswering, AutoTokenizer
model = AutoModelForQuestionAnswering.from_pretrained("distilbert-base-cased-distilled-squad")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased-distilled-squad")
start = time.time()
for x in range(100):
    inputs = tokenizer.encode_plus(question, context, add_special_tokens=True, return_tensors="pt")
    input_ids = inputs["input_ids"].tolist()[0]

    text_tokens = tokenizer.convert_ids_to_tokens(input_ids)
    answer_start_scores, answer_end_scores = model(**inputs)

    answer_start = torch.argmax(
        answer_start_scores
    )  # Get the most likely beginning of answer with the argmax of the score
    answer_end = torch.argmax(answer_end_scores) + 1  # Get the most likely end of answer with the argmax of the score

    answer = tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(input_ids[answer_start:answer_end]))

print("Answer", answer)
print("Time", time.time() - start, "s")
```
```
Answer 36 . 555 seconds
Time 2.1718859672546387 s
```
I believe the 10x slowdown is that the first example had to spawn 100 processes. I also tried passing a list of 100 question/context pairs to see if that was better and that took ~28s. But for this use case, all 100 questions wouldn't be available at once.
The additional logic for answer extraction doesn't come for free but it doesn't add much overhead. The third test below uses a [custom pipeline component](https://github.com/neuml/cord19q/blob/master/src/python/cord19q/pipeline.py) to demonstrate.
```python
from cord19q.pipeline import Pipeline
pipeline = Pipeline("distilbert-base-cased-distilled-squad", False)
start = time.time()
for x in range(100):
    answer = pipeline([question], [context])

print("\nAnswer", answer)
print("Time", time.time() - start, "s")
```
```
Answer [{'answer': '36.555 seconds', 'score': 0.8029860216482803}]
Time 2.219379186630249 s
```
It would be great if the QuestionAnsweringPipeline could either not use the squad processor, or the processor could be changed to take an argument that avoids spawning processes. | 06-03-2020 19:27:29 | 06-03-2020 19:27:29 | Hi! Thanks for the detailed report. Indeed, it would be nice to keep the performance high, especially if it's due to something other than pure inference. I'm looking into it.<|||||>Great, thank you for the quick response!<|||||>After looking into it, it seems that the threading is only part of the problem. Removing it results in 24 seconds instead of 36 seconds, which is still 10x slower than pure inference.
I believe this is mostly due to the `squad_convert_example_to_features`, which is made to be very robust. By doing so, it slows things down by quite a big factor.
There's probably a few things that are overkill for the pipeline when compared to a SQuAD training.<|||||>Thanks once again for the quick response. I did notice that the tokenizer in squad_convert_example_to_features was also padding to the max sequence length, which makes sense for batch inputs. My guess is that the value add was in how the squad processor can robustly extract answers. It's tricky to find the match in the original text when all you have are model tokens.
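For reference, the token-to-text matching I use looks roughly like this (a simplified sketch; the real component handles more edge cases):
```python
import re

def find_answer_span(tokens, context):
    # Strip BERT-style "##" subword prefixes and escape regex metacharacters
    cleaned = [re.escape(t[2:] if t.startswith("##") else t) for t in tokens]

    # Tokens may or may not be separated by whitespace in the original text
    pattern = r"\s?".join(cleaned)

    match = re.search(pattern, context, flags=re.IGNORECASE)
    return match.span() if match else None
```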
The custom example referenced above builds a regular expression joining the tokens on \s? and handles BERT subwords but I'm not sure how that would work for all models.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Hi @davidmezzetti, just to let you know we're working towards a bigger pipeline refactor, with a strong focus on performance. Let's keep this issue open while it's still in the works in case more is to be said on the matter.<|||||>Thank you for following up, sounds great, thank you.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>@LysandreJik has there been any update in the library with respect to this issue ?<|||||>I know this is an old issue but just to close the loop - v4.0.0 improved pipeline qa performance on par with the methods referenced above. Thank you!<|||||>Glad to hear it! |
transformers | 4,747 | closed | No silent error when d_head already in the configuration | closes #4696 | 06-03-2020 17:24:00 | 06-03-2020 17:24:00 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4747?src=pr&el=h1) Report
> Merging [#4747](https://codecov.io/gh/huggingface/transformers/pull/4747?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ed4df85572924871758ca32133b46116121c706f&el=desc) will **increase** coverage by `0.09%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4747?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4747 +/- ##
==========================================
+ Coverage 77.12% 77.22% +0.09%
==========================================
Files 128 128
Lines 21061 21063 +2
==========================================
+ Hits 16243 16265 +22
+ Misses 4818 4798 -20
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4747?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4747/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3hsbmV0LnB5) | `94.00% <100.00%> (+0.25%)` | :arrow_up: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4747/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (-0.42%)` | :arrow_down: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4747/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.59% <0.00%> (+0.11%)` | :arrow_up: |
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4747/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `35.03% <0.00%> (+6.36%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4747?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4747?src=pr&el=footer). Last update [ed4df85...fe85f3e](https://codecov.io/gh/huggingface/transformers/pull/4747?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 4,746 | closed | Why I can't generate phrases in batches if I include an attention mask? (GPT2) | Assuming these are my input phrases and model:
```
from transformers import GPT2LMHeadModel, GPT2Tokenizer
import torch
model = GPT2LMHeadModel.from_pretrained('gpt2')
tokenizer = GPT2Tokenizer.from_pretrained('gpt2', pad_token='<PAD>')
prompt_text = [
"are there any good coaching institutes for civil services preparations in bangalore? ->"]
```
If I try to generate phrases in batches with the corresponding attention mask it doesn't work. It outputs the input phrase without any new words on it:
```
# encode plus batch handles multiple batches and automatically creates attention_masks
seq_len = 100
encodings_dict = tokenizer.batch_encode_plus(prompt_text, max_length=seq_len, pad_to_max_length=True)
input_ids = torch.tensor(encodings_dict['input_ids'])
attn_mask = torch.tensor(encodings_dict['attention_mask'])
encoded_result = model.generate(input_ids, attention_mask=attn_mask, eos_token_id=tokenizer.eos_token_id, pad_token_id=tokenizer.eos_token_id, num_return_sequences=10, top_k=50, top_p=0.95, do_sample=True, max_length=100)
for er in encoded_result:
    print(tokenizer.decode(er, skip_special_tokens=True))
```
However, if I generate phrases one by one (without batches) then it works:
```
encoded_text = tokenizer.encode(prompt_text[0], return_tensors='pt')
encoded_result = model.generate(encoded_text,eos_token_id=tokenizer.eos_token_id, pad_token_id=tokenizer.eos_token_id, num_return_sequences=10, top_k=50, top_p=0.95, do_sample=True, max_length=100)
print(tokenizer.decode(encoded_result[0], skip_special_tokens=True))
```
## Details
Any ideas what could be causing this problem?
Thanks!! | 06-03-2020 16:36:06 | 06-03-2020 16:36:06 | Probably of interest to @patrickvonplaten <|||||>Hi @Barbara931120,
Batch generation is sadly currently not implemented in the `.generate()` method. Also, see https://github.com/huggingface/transformers/issues/3021 for reasons why. It's on our roadmap to implement this functionality soon :-) |
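In the meantime, a per-prompt loop is the safest workaround (a sketch based on your working single-prompt snippet above):
```python
# model, tokenizer and prompt_text as defined in your snippet above
generated = []
for prompt in prompt_text:
    input_ids = tokenizer.encode(prompt, return_tensors='pt')
    out = model.generate(
        input_ids,
        max_length=100,
        do_sample=True,
        top_k=50,
        top_p=0.95,
        num_return_sequences=10,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.eos_token_id,
    )
    generated.extend(tokenizer.decode(o, skip_special_tokens=True) for o in out)
```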
transformers | 4,745 | closed | [Generation Beam Search] Fix bug when changing the <EOS> token for generate | This PR fixes https://github.com/huggingface/transformers/issues/4121 . When comparing int `!=` should be used here instead of `is not`. | 06-03-2020 15:34:12 | 06-03-2020 15:34:12 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4745?src=pr&el=h1) Report
> Merging [#4745](https://codecov.io/gh/huggingface/transformers/pull/4745?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/47a551d17b6ed2eaf03301f049006d559fca5cf3&el=desc) will **increase** coverage by `0.22%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4745?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4745 +/- ##
==========================================
+ Coverage 77.14% 77.36% +0.22%
==========================================
Files 128 128
Lines 21073 21073
==========================================
+ Hits 16256 16304 +48
+ Misses 4817 4769 -48
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4745?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4745/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <ø> (ø)` | |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4745/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.29% <ø> (+0.35%)` | :arrow_up: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4745/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (+0.41%)` | :arrow_up: |
| [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4745/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.21% <0.00%> (+14.10%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4745?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4745?src=pr&el=footer). Last update [47a551d...491f4e2](https://codecov.io/gh/huggingface/transformers/pull/4745?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 4,744 | closed | How to use pretrained model for inference? | I have a bert model finetuned for pytorch and have trouble actually using it. Like how can I get to model("<some sentance>") outputting results for the tokens? | 06-03-2020 15:26:36 | 06-03-2020 15:26:36 | Hello @andrster
Which model are you trying ? for ex, for QA, token classification, sentence classification etc.
Please elaborate more. <|||||>Sorry, token classification <|||||>@andrster have you checked out the pipelines section of the README?<|||||>doesn't have one on NER<|||||>@andrster
ner pipeline is available in Transformers. Check here https://huggingface.co/transformers/usage.html#named-entity-recognition<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,743 | closed | Create README.md | Main issue is to refer to the fairseq website. | 06-03-2020 15:18:47 | 06-03-2020 15:18:47 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4743?src=pr&el=h1) Report
> Merging [#4743](https://codecov.io/gh/huggingface/transformers/pull/4743?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1b5820a56540a2096daeb43a0cd8247c8c94a719&el=desc) will **increase** coverage by `0.22%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4743?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4743 +/- ##
==========================================
+ Coverage 77.11% 77.34% +0.22%
==========================================
Files 128 128
Lines 21061 21061
==========================================
+ Hits 16242 16290 +48
+ Misses 4819 4771 -48
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4743?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4743/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.63% <0.00%> (-0.17%)` | :arrow_down: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4743/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+0.23%)` | :arrow_up: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4743/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.15% <0.00%> (+0.24%)` | :arrow_up: |
| [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4743/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.39% <0.00%> (+14.55%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4743?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4743?src=pr&el=footer). Last update [1b5820a...24a7fe1](https://codecov.io/gh/huggingface/transformers/pull/4743?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 4,742 | closed | Perform evaluation on HANS with Trainer (like GLUE example) | Current [HANS](https://github.com/huggingface/transformers/tree/master/examples/adversarial) evaluation implementation is carried out in the old way. It'd be good to do it in the same manner as other examples are implemented now with the Trainer class.
| 06-03-2020 15:08:49 | 06-03-2020 15:08:49 | Maybe @sgugger would be interested in taking a look at this<|||||>There is an evaluation issue that I faced with HANS evaluation. Issue [here](https://github.com/huggingface/transformers/issues/4766). I tried to run in the exact same manner as listed in `examples/adversarial`. But the results obtained seem skewed for some reason. If @sgugger can point some possible reason out, I can work on it and send a PR. Thanks. |
transformers | 4,741 | closed | Implemented resizing of token embeddings for TensorFlow models | As mentioned in [this issue](https://github.com/huggingface/transformers/issues/1838) transformers currently does not support resizing token embeddings with TensorFlow. I have implemented this functionality for ALBERT, BERT, DistilBERT, and GPT2.
**Note**
All of the respective TF[...]MainLayer[s] inherit from a utility class (TFLayerUtilsMixin; similar to the already existing TFModelUtilsMixin; both of them live in `modeling_tf_utils.py`) to avoid code duplication. This has to be done because TensorFlow models are structured differently than the corresponding PyTorch models - i.e., all the TF{ModelName} classes have a corresponding TF{ClassName}MainLayer which itself inherits from tf.keras.layers.Layer, whereas the PyTorch {ModelName} classes implement all the functionality themselves.
**Usage**
The usage is exactly the same as for the PyTorch models.
```
import transformers
bert = transformers.TFAutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
tokenizer = transformers.AutoTokenizer.from_pretrained("bert-base-uncased")
tokenizer.add_tokens("[E1]"); tokenizer.add_tokens("[/E1]");
bert.resize_token_embeddings(len(tokenizer))
```
**Tests**
There are no tests in the existing transformers code for the `resize_token_embeddings` methods of the PyTorch models (as far as I can tell). This might be a separate issue. | 06-03-2020 13:52:40 | 06-03-2020 13:52:40 | Hello !
Thanks a lot for this PR!! Can you rebase on master and push force in order to be able to review :)<|||||>Hi,
I've rebased on master. Are you able to review the PR like this?<|||||>Hello! Thanks a lot for your contribution, unfortunately we've decided to move on with https://github.com/huggingface/transformers/pull/4351 that was contributed earlier.
Your contribution is still valuable, and we would have gone with it had another PR not done it already. We look forward to your future PRs! |
transformers | 4,740 | closed | Can't find config.json | # ❓ Questions & Help
Hello!When I use transformers I get this error Make sure that 'mrm8488/bert-multi-cased-finetuned-xquadv1' is the correct path to a directory containing a config.json file.How to solve it? | 06-03-2020 13:12:26 | 06-03-2020 13:12:26 | Hi @mustafameruyert, please poste a codesnippet so that we can reproduce the error<|||||>> Hi @mustafameruyert, please poste a codesnippet so that we can reproduce the error


I am using dockers and this is error<|||||>> Hi @mustafameruyert, please poste a codesnippet so that we can reproduce the error
Sorry, I forgot to uncomment the first two lines; nevertheless it shows the same error, and instead of distilbert-base-cased I use bert-multi-cased-finetuned-xquadv1. I have downloaded all files using:
model.save_pretrained(path)
tokenizer.save_pretrained(path)
My code is written in the setup_qa.py file, and all downloaded files are stored in the same folder as setup_qa.py.<|||||>It would be great if you could copy-paste code like this:
```python
from transformers import pipeline
answerer = pipeline("question-answering", model="mrm8488/bert-multi-cased-finetuned-xquadv1", tokenizer="mrm8488/bert-multi-cased-finetuned-xquadv1")
answerer(context="The dog is blue", question="Which color is the dog?")['answer']
```
so that one can just copy-paste it and does not have to type it manually from a screenshot.
The following code works as well:
```python
from transformers import pipeline, AutoModelForQuestionAnswering, AutoTokenizer
answerer = pipeline("question-answering", model=AutoModelForQuestionAnswering.from_pretrained("mrm8488/bert-multi-cased-finetuned-xquadv1"), tokenizer=AutoTokenizer.from_pretrained("mrm8488/bert-multi-cased-finetuned-xquadv1"))
answerer(context="The dog is blue", question="Which color is the dog?")['answer']
```
Let me know if this does not fix your problem :-)
<|||||>@patrickvonplaten I tried to run your code but it also show this error

<|||||>It looks like the problem is that you cannot create a folder called `/.cache` , which has nothing to do with the pipeline. You should have sudo rights from your home folder.
To solve this you could:
```
sudo mkdir /.cache
```
and then make sure that `/.cache` has the correct permission rights. The error is a bit out of scope for this issue.
It would be important to make sure that you have the correct sudo rights.
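Alternatively, you can point the cache to a directory you can write to (a sketch; the paths are just examples):
```python
import os

# Either point the cache somewhere writable before importing transformers ...
os.environ["TRANSFORMERS_CACHE"] = "/home/ubuntu/transformers_cache"  # example path

# ... or pass cache_dir explicitly when loading
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model = AutoModelForQuestionAnswering.from_pretrained(
    "mrm8488/bert-multi-cased-finetuned-xquadv1", cache_dir="/home/ubuntu/transformers_cache"
)
tokenizer = AutoTokenizer.from_pretrained(
    "mrm8488/bert-multi-cased-finetuned-xquadv1", cache_dir="/home/ubuntu/transformers_cache"
)
```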
<|||||>> It looks like the problem is that you cannot create a folder called `/.cache` , which has nothing to do with the pipeline. You should have sudo rights from your home folder.
>
> To solve this you could:
>
> ```
> sudo mkdir /.cache
> ```
>
> and then make sure that `/.cache` has the correct permission rights. The error is a bit out of scope for this issue.
>
> It would be important to make sure that you have the correct sudo rights.
Thank you for response I will try to fix it |
transformers | 4,739 | closed | Extending run_language_modeling.py for XLNet | # 🚀 Feature request
The run_language_modeling.py script in examples/language-modeling/ currently works for BERT, RoBERTa, GPT and related models. It would be helpful if it also allowed XLNet. I believe this would involve writing functions to generate `perm_mask`, `target_mapping` and `labels` for the inputs sequences as per the paper and `https://github.com/zihangdai/xlnet/blob/master/data_utils.py`, but I'm not a 100% sure about this.
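For reference, the extra inputs look roughly like this (a sketch adapted from the `XLNetLMHeadModel` docstring example; a full implementation would need to build these masks over whole batches, as `data_utils.py` does):
```python
import torch
from transformers import XLNetLMHeadModel, XLNetTokenizer

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetLMHeadModel.from_pretrained("xlnet-base-cased")

input_ids = torch.tensor(tokenizer.encode("Hello, my dog is very <mask>", add_special_tokens=True)).unsqueeze(0)
seq_len = input_ids.shape[1]

# perm_mask[b, i, j] = 1.0 means token i may not attend to token j
perm_mask = torch.zeros((1, seq_len, seq_len), dtype=torch.float)
perm_mask[:, :, -1] = 1.0  # hide the last token from everyone -> it becomes a prediction target

# target_mapping selects which positions are predicted (here: only the last one)
target_mapping = torch.zeros((1, 1, seq_len), dtype=torch.float)
target_mapping[0, 0, -1] = 1.0

outputs = model(input_ids, perm_mask=perm_mask, target_mapping=target_mapping)
next_token_logits = outputs[0]  # shape (1, 1, vocab_size)
```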
## Motivation
I have to adapt XLNet onto a specialized domain (finance) and am yet to find a decent guide or implementation which discusses how to do this. I've decided to do this myself (with the help of this library), and would like to share my code in case others find it useful.
## Your contribution
Since I will be trying to do this anyway, I would like to submit a relevant PR when it is done. I was also hoping to receive guidance or feedback as appropriate by other contributors to ensure correctness and utility. | 06-03-2020 13:09:14 | 06-03-2020 13:09:14 | See also https://github.com/huggingface/transformers/issues/2008<|||||>@shngt can you please share your code for adapting xlnet to the domain specific data ? ( the cli command or the code you used after the merged PR )
I was following the latest changes to transformers but I still don't understand how to continue training xlnet on the domain specific data.
thanks <|||||>@shngt @krannnn Are there any updates on this? I'm interested as well (: Thanks! <|||||>@matthiaslmz @krannnn I'm afraid I no longer have the cli command I used to run the example script, but I think it was quite similar to what's given in the docs. Are you facing some specific issue? |
transformers | 4,738 | closed | Question Answering Modeling through Hugging Face Models | # ❓ Questions & Help
## Details
Hello everyone, I have a large number of documents and I need to extract specific information through a Hugging Face Question Answering model. The first issue I faced was that the document size was very large, so it gave me a token error; afterwards, I divided the data into small paragraphs and then applied the given model. But this time the answer was not accurate. So I just want to know, is there any alternative method or model to do this?
| 06-03-2020 12:41:15 | 06-03-2020 12:41:15 | How long is your document, you may wanna try longformer model which can handle sequences upto 4096 tokens. Here's a longformer model trained for QA https://huggingface.co/valhalla/longformer-base-4096-finetuned-squadv1
Also, take a look at this https://github.com/deepset-ai/haystack. This might help you a lot<|||||>> How long is your document, you may wanna try longformer model which can handle sequences upto 4096 tokens. Here's a longformer model trained for QA https://huggingface.co/valhalla/longformer-base-4096-finetuned-squadv1
>
> Also, take a loot at this https://github.com/deepset-ai/haystack. This might help you a lot
I looked into it. That is a great help. Thanks. Can we also decide the output length with these type of pretrained models?<|||||>Theses QA models aren't generative. So there's no output length constraint<|||||>@AishwaryaVerma - For QA the output length is usually very small (only a couple of words). It is very rare that the answer of `AutoModelForQuestionAnswering` is longer than 3,4 words.
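For example, extracting such a short span with the pipeline API (a sketch using the Longformer QA checkpoint mentioned above; the context string is a placeholder):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="valhalla/longformer-base-4096-finetuned-squadv1",
    tokenizer="valhalla/longformer-base-4096-finetuned-squadv1",
)

long_document_text = "..."  # placeholder for one of your documents
result = qa(question="What information do I need to extract?", context=long_document_text)
print(result["answer"], result["score"])
```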
You might also want to take a look at: https://huggingface.co/allenai/longformer-large-4096-finetuned-triviaqa<|||||>And this notebook: https://colab.research.google.com/drive/1m7eTGlPmLRgoPkkA7rkhQdZ9ydpmsdLE?usp=sharing |
transformers | 4,737 | closed | Create model card for T5-base fine-tuned for Sentiment Span Extraction | 06-03-2020 11:02:25 | 06-03-2020 11:02:25 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4737?src=pr&el=h1) Report
> Merging [#4737](https://codecov.io/gh/huggingface/transformers/pull/4737?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3e5928c57d57db3071638e6beaec9349a75b6a22&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4737?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4737 +/- ##
=======================================
Coverage 77.29% 77.29%
=======================================
Files 128 128
Lines 21004 21004
=======================================
Hits 16234 16234
Misses 4770 4770
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4737?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4737/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (-0.42%)` | :arrow_down: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4737/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+0.23%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4737?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4737?src=pr&el=footer). Last update [3e5928c...d41d001](https://codecov.io/gh/huggingface/transformers/pull/4737?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks Manuel:)<|||||>My pleasure. Coming soon model card for https://huggingface.co/mrm8488/t5-base-finetuned-summarize-news and RuPERTa ofc 😉 |
|
transformers | 4,736 | closed | BertModel Inputs | # ❓ Questions & Help
Hi,
I have been using BertModel for Question Answering. In the examples, I see there are no start_position, end_position values being provided to the model.
How is the model able to train in this case using input_ids, mask and attention head?
It might be a naive question, but I have dug into the source code and referred to run_squad.py, and still could not gain clarity.
Can anybody suggest?
Thanks
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiasts can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
| 06-03-2020 09:32:57 | 06-03-2020 09:32:57 | Can you please link the example you mean? :-) <|||||>Hi ,
In the example code below, I have highlighted the inputs being provided to the model, i.e. ids, attention_mask and token_type_ids.
If I add start_positions and end_positions as inputs here, I get an error saying unknown inputs: start_positions and end_positions.
So, my question is: how is the model able to train when the start and end positions are not being provided as inputs during training?
Thanks
```
class QAModel(transformers.BertPreTrainedModel):
    def __init__(self, conf):
        super(QAModel, self).__init__(conf)
        self.bert = transformers.BertModel.from_pretrained(config.BERT_PATH, config=conf)
        self.drop_out = nn.Dropout(0.1)
        self.l0 = nn.Linear(768 * 2, 2)
        torch.nn.init.normal_(self.l0.weight, std=0.02)

    def forward(self, ids, mask, token_type_ids):
        _, _, out = self.bert(
            ids,
            attention_mask=mask,
            token_type_ids=token_type_ids
        )
        out = torch.cat((out[-1], out[-2]), dim=-1)
        out = self.drop_out(out)
        logits = self.l0(out)
        start_logits, end_logits = logits.split(1, dim=-1)
        start_logits = start_logits.squeeze(-1)
        end_logits = end_logits.squeeze(-1)
        return start_logits, end_logits
```
<|||||>Hey @pn12,
I'm not sure if this answers your question: if you want to fine-tune a `Bert` model on Question Answering you have to provide both the `start_positions` and the `end_positions`, see this line: https://github.com/huggingface/transformers/blob/f9414f7553d3f1872b372990ef03205c0d1141df/src/transformers/modeling_bert.py#L1405
Only for validation and evaluation, you don't have to provide those so that the model can predict them.
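To make that concrete, here is a minimal sketch of a single training step with `BertForQuestionAnswering` (the start/end indices below are made up, purely for illustration):
```python
import torch
from transformers import BertTokenizer, BertForQuestionAnswering

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForQuestionAnswering.from_pretrained("bert-base-uncased")

# question + context pair
inputs = tokenizer.encode_plus("Who wrote it?", "It was written by Jane.", return_tensors="pt")

# token indices of the gold answer span inside the encoded sequence (made-up values here)
start_positions = torch.tensor([7])
end_positions = torch.tensor([8])

# when both positions are passed, the first element of the returned tuple is the loss
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs[0]
loss.backward()
```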
Thanks to @patil-suraj, you can also take a look at this notebook to check out how to fine-tune a Bert-Like model on Squad:
https://github.com/patil-suraj/Notebooks/blob/master/longformer_qa_training.ipynb
Check out his `DummyDataCollator` to see that he passes those two arguments for training. |
transformers | 4,735 | closed | Bart model for textinfilling | # ❓ Questions & Help
## Details
<!-- Description of your issue -->
@sshleifer
I was wondering why you used a registered buffer to define the ``final_logits_bias`` in BartForConditionalGeneration. In my understanding a registered buffer has no gradient, so if I fine-tune a Bart model for generation these biases would not be updated (they would remain at 0). Should we use register_parameter, remove the bias completely, or is there something I haven't understood?
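For context, here is a tiny self-contained sketch of the difference I mean (nothing Bart-specific):
```python
import torch
import torch.nn as nn

class Toy(nn.Module):
    def __init__(self):
        super().__init__()
        # buffer: saved in the state_dict but receives no gradient during fine-tuning
        self.register_buffer("bias_buf", torch.zeros(3))
        # parameter: receives gradients and gets updated by the optimizer
        self.bias_param = nn.Parameter(torch.zeros(3))

m = Toy()
print([name for name, _ in m.named_parameters()])  # ['bias_param'] only
print(list(m.state_dict().keys()))                 # ['bias_param', 'bias_buf']
```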
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
| 06-03-2020 07:57:49 | 06-03-2020 07:57:49 | it is a hack to make `MarianMTModel`, which inherits from Bart work. For the Bart models that parameter does nothing, as you suggest. You can remove or ignore.<|||||>Thanks got it |
transformers | 4,734 | closed | TFTrainer: Checkpoints not getting saved in `output_dir` but in {cwd}/checkpoint | I am using TFTrainer for the SQuAD task.
Checkpoints are being created in cwd/checkpoint instead of output_dir.
**Potential Cause:**
https://github.com/huggingface/transformers/blob/9ca485734aea269961d63a040ff194365d151fd1/src/transformers/trainer_tf.py#L156
Instead of PREFIX_CHECKPOINT_DIR we need to have
```python
os.path.join(self.args.output_dir, PREFIX_CHECKPOINT_DIR)
``` | 06-03-2020 05:34:44 | 06-03-2020 05:34:44 | Pinging @jplu as it might be of interest.<|||||>@0dust This is the intended behavior :)
A solution would be to add a parameter to the arguments to select the checkpoint folder location you want.<|||||>@jplu Sorry if I am missing something, but isn't 'output_dir' the folder to save the checkpoint?
https://github.com/huggingface/transformers/blob/ed4df85572924871758ca32133b46116121c706f/src/transformers/training_args.py#L41-L43<|||||>Not for the TF one; it is one of the few differences between the two trainers.<|||||>Ohh, I see! Thanks for the clarification. Just a quick question before I close the issue: is there any specific reason for this? Or is it just a matter of time before it starts to behave similarly to the PyTorch trainer? <|||||>It is just a matter of time :)
transformers | 4,733 | closed | When I use TFBertEncoder on my laptop, I get an error. I cannot build a model. Here is a simple example. | # 🐛 Bug
## Information
Model I am using TFBertEncoder:
Language I am using the model on English:
The problem arises when using:
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. When I use TFBertEncoder, I get an error.
Here is my code.
```py
import tensorflow as tf
import numpy as np
from transformers.modeling_tf_bert import BertConfig, TFBertEncoder
print(tf.__name__, tf.__version__)
input_a = tf.keras.layers.Input(shape=(91, 128))
config = BertConfig()
config.hidden_size = 128
config.num_attention_heads = 4
# config.output_attentions = False
# config.output_hidden_states = False
head_mask = [None for _ in range(config.num_hidden_layers)]
encoder_output = TFBertEncoder(config=config)([input_a, None, head_mask])[0]
print(encoder_output.shape)
test_out = tf.keras.layers.Dense(128)(encoder_output)
print(test_out.shape)
```
## Expected behavior
Here is the error:
```
(None, 91, 128)
2020-06-03 11:18:10.160647: W tensorflow/core/common_runtime/base_collective_executor.cc:217] BaseCollectiveExecutor::StartAbort Failed precondition: Error while reading resource variable _AnonymousVar189 from Container: localhost. This could mean that the variable was uninitialized. Not found: Resource localhost/_AnonymousVar189/class tensorflow::Var does not exist.
[[{{node output_23/dense/BiasAdd/ReadVariableOp}}]]
Traceback (most recent call last):
File "D:/python/tx/TEST.py", line 16, in <module>
a = tf.keras.layers.Dense(128)(encoder_output)
File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 720, in __call__
base_layer_utils.create_keras_history(inputs)
File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer_utils.py", line 187, in create_keras_history
_, created_layers = _create_keras_history_helper(tensors, set(), [])
File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer_utils.py", line 249, in _create_keras_history_helper
layer_inputs, processed_ops, created_layers)
File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer_utils.py", line 249, in _create_keras_history_helper
layer_inputs, processed_ops, created_layers)
File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer_utils.py", line 249, in _create_keras_history_helper
layer_inputs, processed_ops, created_layers)
[Previous line repeated 5 more times]
File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\base_layer_utils.py", line 247, in _create_keras_history_helper
constants[i] = backend.function([], op_input)([])
File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\keras\backend.py", line 3727, in __call__
outputs = self._graph_fn(*converted_inputs)
File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\eager\function.py", line 1551, in __call__
return self._call_impl(args, kwargs)
File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\eager\function.py", line 1591, in _call_impl
return self._call_flat(args, self.captured_inputs, cancellation_manager)
File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\eager\function.py", line 1692, in _call_flat
ctx, args, cancellation_manager=cancellation_manager))
File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\eager\function.py", line 545, in call
ctx=ctx)
File "D:\Anaconda3\lib\site-packages\tensorflow_core\python\eager\execute.py", line 67, in quick_execute
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.FailedPreconditionError: Error while reading resource variable _AnonymousVar189 from Container: localhost. This could mean that the variable was uninitialized. Not found: Resource localhost/_AnonymousVar189/class tensorflow::Var does not exist.
[[node output_23/dense/BiasAdd/ReadVariableOp (defined at /python/tx/TEST.py:16) ]] [Op:__inference_keras_scratch_graph_5205]
Function call stack:
keras_scratch_graph
```
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.3.0 (in conda list)
- Platform:
- Python version:3.7
- PyTorch version (GPU?):
- Tensorflow version (GPU?):TF2.1.0(GPU)
- Using GPU in script?:
- Using distributed or parallel set-up in script?:No
| 06-02-2020 03:39:33 | 06-02-2020 03:39:33 | I also met this problem. Did you solve it? @shange1996 <|||||>> I also met this problem. Did you solve it? @shange1996
No. The problem bothers me.<|||||>Me too. When I switch to keras instead of tf.keras, it has another problem...<|||||>It bothers me, too.<|||||>> It bothers me, too.
Now I have solved it. Do NOT import it; instead, copy the layer code into your main code.
Like this:
```python
class TFBertSelfAttention(tf.keras.layers.Layer):
    def __init__(self, config, **kwargs):
        super().__init__(**kwargs)
        if config.hidden_size % config.num_attention_heads != 0:
            raise ValueError(
                "The hidden size (%d) is not a multiple of the number of attention "
                "heads (%d)" % (config.hidden_size, config.num_attention_heads)
            )
        self.output_attentions = config.output_attentions
        self.num_attention_heads = config.num_attention_heads
        assert config.hidden_size % config.num_attention_heads == 0
        self.attention_head_size = int(config.hidden_size / config.num_attention_heads)
        self.all_head_size = self.num_attention_heads * self.attention_head_size
        self.query = tf.keras.layers.Dense(
            self.all_head_size, kernel_initializer=tf.keras.initializers.TruncatedNormal(config.initializer_range), name="query"
        )
        self.key = tf.keras.layers.Dense(
            self.all_head_size, kernel_initializer=tf.keras.initializers.TruncatedNormal(config.initializer_range), name="key"
        )
        self.value = tf.keras.layers.Dense(
            self.all_head_size, kernel_initializer=tf.keras.initializers.TruncatedNormal(config.initializer_range), name="value"
        )
        self.dropout = tf.keras.layers.Dropout(config.attention_probs_dropout_prob)

    def transpose_for_scores(self, x, batch_size):
        x = tf.reshape(x, (batch_size, -1, self.num_attention_heads, self.attention_head_size))
        return tf.transpose(x, perm=[0, 2, 1, 3])

    def call(self, inputs, training=False):
        hidden_states, attention_mask, head_mask = inputs
        batch_size = tf.shape(hidden_states)[0]
        mixed_query_layer = self.query(hidden_states)
        mixed_key_layer = self.key(hidden_states)
        mixed_value_layer = self.value(hidden_states)
        query_layer = self.transpose_for_scores(mixed_query_layer, batch_size)
        key_layer = self.transpose_for_scores(mixed_key_layer, batch_size)
        value_layer = self.transpose_for_scores(mixed_value_layer, batch_size)
        # Take the dot product between "query" and "key" to get the raw attention scores.
        attention_scores = tf.matmul(
            query_layer, key_layer, transpose_b=True
        )  # (batch size, num_heads, seq_len_q, seq_len_k)
        dk = tf.cast(tf.shape(key_layer)[-1], tf.float32)  # scale attention_scores
        attention_scores = attention_scores / tf.math.sqrt(dk)
        if attention_mask is not None:
            # Apply the attention mask is (precomputed for all layers in TFBertModel call() function)
            attention_scores = attention_scores + attention_mask
        # Normalize the attention scores to probabilities.
        attention_probs = tf.nn.softmax(attention_scores, axis=-1)
        # This is actually dropping out entire tokens to attend to, which might
        # seem a bit unusual, but is taken from the original Transformer paper.
        attention_probs = self.dropout(attention_probs, training=training)
        # Mask heads if we want to
        if head_mask is not None:
            attention_probs = attention_probs * head_mask
        context_layer = tf.matmul(attention_probs, value_layer)
        context_layer = tf.transpose(context_layer, perm=[0, 2, 1, 3])
        context_layer = tf.reshape(
            context_layer, (batch_size, -1, self.all_head_size)
        )  # (batch_size, seq_len_q, all_head_size)
        outputs = (context_layer, attention_probs) if self.output_attentions else (context_layer,)
        return outputs
```<|||||>@shange1996 just copy `TFBertSelfAttention` class?<|||||>> @shange1996 just copy `TFBertSelfAttention` class?
Yes! Just copy, no import.<|||||>Hey guys,
I looked into the issue and I think the best solution is to use a keras layer wrapper as follows:
```python
#!/usr/bin/env python3
import tensorflow as tf
import numpy as np
from transformers.modeling_tf_bert import BertConfig, TFBertEncoder
print(tf.__name__, tf.__version__)
config = BertConfig()
config.hidden_size = 128
config.num_attention_heads = 4
class NewTFBertEncoder(tf.keras.layers.Layer):
    def __init__(self, config):
        super(NewTFBertEncoder, self).__init__()
        # self.inputs = tf.keras.layers.Input(input_shape) # not really needed here IMO.
        self.encoder = TFBertEncoder(config=config)
        self.dense = tf.keras.layers.Dense(config.hidden_size)

    def call(self, inputs):
        head_mask = [None for _ in range(config.num_hidden_layers)]
        output = self.encoder([inputs, None, head_mask])[0]
        dense_output = self.dense(output)
        return dense_output
new_bert_encoder = NewTFBertEncoder(config)
output = new_bert_encoder(np.ones((2, 91, 128))) # batch size , sequence length, hidden size
```<|||||>Two things:
- If a customized layer is to be used with standard keras layers (as is the case here), as far as I know it is recommended to use a `keras.layers` wrapper class. Also see here: https://www.tensorflow.org/api_docs/python/tf/keras/layers/Lambda#variables_2
- I don't think that the tf.keras.inputs class is needed here (or do you have a speciifc use case in mind?). Keras usually creates such an instance under the hood anyways, see: https://www.tensorflow.org/api_docs/python/tf/keras/layers/InputLayer
Also pinging @jplu to check if my code proposal is the right choice here.<|||||>@shange1996 @etveritas - Let me know if the proposed solution works for you. If not feel free to re-open the issue :-)
Also linking this issue to: https://github.com/huggingface/transformers/issues/5046. <|||||>Both work for me, thanks!<|||||>Hey! This error appears when some variables are initialized elsewhere than in the Layer itself. The solution that @patrickvonplaten proposes is a good one!! Good job people :)
transformers | 4,732 | closed | Adding notebooks for Fine Tuning [Community Notebook] | Hi @patrickvonplaten, Adding 3 documented notebooks to fine tune transformers to downstream NLP tasks with PyTorch:
- Multi-class classification: Using DistilBert
- Multi-label classification: Using Bert
- Summarization: Using T5 - **Model Tracking with WandB**
These notebooks are pulled from the git repo: https://github.com/abhimishra91/transformers-tutorials | 06-03-2020 01:44:16 | 06-03-2020 01:44:16 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4732?src=pr&el=h1) Report
> Merging [#4732](https://codecov.io/gh/huggingface/transformers/pull/4732?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9ca485734aea269961d63a040ff194365d151fd1&el=desc) will **increase** coverage by `1.42%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4732?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4732 +/- ##
==========================================
+ Coverage 75.64% 77.07% +1.42%
==========================================
Files 128 128
Lines 20996 20996
==========================================
+ Hits 15883 16182 +299
+ Misses 5113 4814 -299
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4732?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4732/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `72.02% <0.00%> (-14.15%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4732/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.34% <0.00%> (-1.38%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4732/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `89.91% <0.00%> (-0.25%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4732/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.79% <0.00%> (-0.17%)` | :arrow_down: |
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4732/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `86.94% <0.00%> (+0.18%)` | :arrow_up: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4732/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+0.23%)` | :arrow_up: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4732/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `74.27% <0.00%> (+0.41%)` | :arrow_up: |
| [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4732/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.49% <0.00%> (+6.36%)` | :arrow_up: |
| [src/transformers/configuration\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4732/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `100.00% <0.00%> (+25.00%)` | :arrow_up: |
| [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4732/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `93.43% <0.00%> (+75.91%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4732?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4732?src=pr&el=footer). Last update [9ca4857...9d50901](https://codecov.io/gh/huggingface/transformers/pull/4732?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Hey @abhimishra91,
The notebooks are very clean and well-written! Thanks a lot! :-) Just did some renaming in the explanations. |
transformers | 4,731 | closed | [DOT NOT MERGE] Tokenizers Shape Polymorphism - Introduce pad_to_next_multiple_of parameters | Needs new release of tokenizers (cc @n1t0) | 06-02-2020 22:51:35 | 06-02-2020 22:51:35 | That's a really cool feature. Looking forward to it!<|||||>Is the bucketization size not going to be too linear? Shouldn't we rather do `pad_to_next_power_of_two` or similar?<|||||>@julien-c That will be linear yet, by using a `power_of` growth we might rapidly increase the number of padding tokens to add and then fall into the opposite situation where most of the computation will be wasted on padding tokens. |
transformers | 4,730 | closed | bert-small-cord19 model cards | Adds model cards for the bert-small-cord19 series of models. | 06-02-2020 21:24:14 | 06-02-2020 21:24:14 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4730?src=pr&el=h1) Report
> Merging [#4730](https://codecov.io/gh/huggingface/transformers/pull/4730?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9ca485734aea269961d63a040ff194365d151fd1&el=desc) will **increase** coverage by `1.42%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4730?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4730 +/- ##
==========================================
+ Coverage 75.64% 77.07% +1.42%
==========================================
Files 128 128
Lines 20996 20996
==========================================
+ Hits 15883 16182 +299
+ Misses 5113 4814 -299
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4730?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4730/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `72.02% <0.00%> (-14.15%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4730/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.34% <0.00%> (-1.38%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4730/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `89.91% <0.00%> (-0.25%)` | :arrow_down: |
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4730/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `86.94% <0.00%> (+0.18%)` | :arrow_up: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4730/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+0.23%)` | :arrow_up: |
| [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4730/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.49% <0.00%> (+6.36%)` | :arrow_up: |
| [src/transformers/configuration\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4730/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `100.00% <0.00%> (+25.00%)` | :arrow_up: |
| [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4730/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `93.43% <0.00%> (+75.91%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4730?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4730?src=pr&el=footer). Last update [9ca4857...c9d87f1](https://codecov.io/gh/huggingface/transformers/pull/4730?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 4,729 | closed | [Feature request] Support batched conditional generation from GPT-2 | # 🚀 Feature request
Support batched conditional generation from GPT-2
## Motivation
Currently the [method](https://github.com/huggingface/transformers/blob/9ca485734aea269961d63a040ff194365d151fd1/src/transformers/modeling_utils.py#L802) to generate text from GPT-2 conditioned on an input sequence only supports either 1) a single input at a time, or 2) a batch of inputs where the conditioning input sequence is the same length. It would be great (for efficiency) if this method could be updated to support a batch with conditional inputs of varying length, done by ignoring padding in the input_ids.
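In the meantime, the workaround I have seen is to left-pad the batch, pass an explicit `attention_mask`, and shift `position_ids` so the padding is ignored. The sketch below is illustrative only (a manual greedy loop, not an existing `generate()` feature):
```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompts = ["Hello, my dog", "The quick brown fox jumps over the"]
encoded = [tokenizer.encode(p) for p in prompts]
max_len = max(len(e) for e in encoded)
pad_id = tokenizer.eos_token_id

# left-pad so the last real token of every prompt is aligned at the right edge
input_ids = torch.tensor([[pad_id] * (max_len - len(e)) + e for e in encoded])
attention_mask = torch.tensor([[0] * (max_len - len(e)) + [1] * len(e) for e in encoded])
# position ids must skip the padding, otherwise the real tokens get wrong positions
position_ids = (attention_mask.cumsum(-1) - 1).clamp(min=0)

with torch.no_grad():
    for _ in range(20):  # greedy decoding of 20 new tokens
        logits = model(input_ids, attention_mask=attention_mask, position_ids=position_ids)[0]
        next_tokens = logits[:, -1, :].argmax(dim=-1, keepdim=True)
        input_ids = torch.cat([input_ids, next_tokens], dim=-1)
        attention_mask = torch.cat([attention_mask, torch.ones_like(next_tokens)], dim=-1)
        position_ids = torch.cat([position_ids, position_ids[:, -1:] + 1], dim=-1)

for seq in input_ids:
    print(tokenizer.decode(seq.tolist(), skip_special_tokens=True))
```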
## Your contribution
Unlikely to have time to code this, but will submit a PR if I do.
| 06-02-2020 21:21:23 | 06-02-2020 21:21:23 | Also see: https://github.com/huggingface/transformers/issues/3021<|||||>This is known to not work at the moment with `generate()`. I have to think a bit about the cleanest way to implement it :-) Code suggestions are very welcome!
<|||||>Very interested in this! Came here from #3021 (many hours after wondering why my batch generation was not working...)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,728 | closed | Possible fix to make AMP work with DDP in the trainer | closes https://github.com/huggingface/transformers/issues/4657
Using multiple GPUs in PyTorch (with DistributedDataParallel) speeds up performance drastically. To get even more speed out of it, the example scripts often - if not always - allow the use of apex for automatic mixed precision. This is great. However, some issues can arise, particularly the infamous "illegal memory access" error, which seems to have an easy, one-line solution.
Currently, we assume that by using things like `.to(args.device)` we solve any and all issues of where our data or model should go. However, the author of AMP @mcarilli seems to [suggest](https://github.com/NVIDIA/apex/issues/319#issuecomment-503372924) that it is recommended to always set the current process' default device, too, to ensure no further issues. This suggestion also [helped](https://github.com/huggingface/transformers/issues/4657#issuecomment-637703146) the aforementioned issue. Therefore it seems a good idea to also implement this. In fact, some examples such as Hans already do this.
https://github.com/huggingface/transformers/blob/b231a413f5d58592bb4d98304c3d3b668c5d4a42/examples/adversarial/test_hans.py#L518
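For clarity, the change essentially boils down to a call of this shape (a self-contained sketch; in the actual PR the device and local rank come from the training arguments):
```python
import torch

# sketch only: `local_rank` would come from the training arguments
local_rank = 0
device = torch.device("cuda", local_rank) if torch.cuda.is_available() else torch.device("cpu")

if device.type == "cuda":
    # pin this process's default CUDA device in addition to the usual `.to(device)` calls;
    # this is what avoids the apex AMP + DDP "illegal memory access" errors
    torch.cuda.set_device(device)
```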
To avoid DRY issues, I would suspect that the trainer_args file is the best place to do this _only once_ but other suggestions are welcome (this is different from what I suggested in the linked issue, though I think `trainer_args` is the better place). I am not sure which examples do not use trainer_args, but those would need to be checked and updated as well. If anyone can give a quick rundown of which examples do NOT use the new trainer, I can have a look quickly. Otherwise I'll have to go over the examples another time.
**Side note**: I am not sure how and if this works with TPUs so to be sure that this only involves CUDA devices, I first check whether the device is a CUDA device. | 06-02-2020 18:29:49 | 06-02-2020 18:29:49 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4728?src=pr&el=h1) Report
> Merging [#4728](https://codecov.io/gh/huggingface/transformers/pull/4728?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b231a413f5d58592bb4d98304c3d3b668c5d4a42&el=desc) will **decrease** coverage by `1.65%`.
> The diff coverage is `33.33%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4728?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4728 +/- ##
==========================================
- Coverage 77.27% 75.62% -1.66%
==========================================
Files 128 128
Lines 20980 20982 +2
==========================================
- Hits 16213 15868 -345
- Misses 4767 5114 +347
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4728?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/4728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `75.82% <33.33%> (-0.59%)` | :arrow_down: |
| [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `17.51% <0.00%> (-75.92%)` | :arrow_down: |
| [src/transformers/configuration\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `75.00% <0.00%> (-25.00%)` | :arrow_down: |
| [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `77.13% <0.00%> (-6.37%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.63% <0.00%> (-0.33%)` | :arrow_down: |
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `86.75% <0.00%> (-0.19%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.15% <0.00%> (-0.13%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `74.27% <0.00%> (+0.41%)` | :arrow_up: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.72% <0.00%> (+1.37%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4728?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4728?src=pr&el=footer). Last update [b231a41...8af2fd8](https://codecov.io/gh/huggingface/transformers/pull/4728?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>LGTM, thanks for the detailed write-up and research @BramVanroy
I think this was here before but I removed it when refactoring, assuming that it was redundant – looks like it wasn't really:)
As for which examples do NOT use the new trainer, you should refer to the table at https://github.com/huggingface/transformers/tree/master/examples – we expect all of them to use Trainer/TFTrainer eventually.
Thank you! |
transformers | 4,727 | closed | Albert pretraining loss not decreasing |
I am training Albert from scratch using run_language_modeling.py.
Doing training on 8 V100.p3dn8x.
Launching with this parameters.
```
python transformers/examples/language-modeling/test.py --train_data_file x.txt --output_dir albert_model --model_type albert --mlm --config_name test --tokenizer_name test --do_train --line_by_line --learning_rate 0.00088 --num_train_epochs 3 --save_total_limit 50 --save_steps 5000 --per_gpu_train_batch_size 150 --seed 42 --overwrite_output_dir --max_steps 200000 --fp16
```

The loss is not decreasing. The above plot contains the training curves with and without warmup steps; the loss is stuck at `7.27` in both cases.
Also, a weird thing: while I am setting the number of epochs to 3, the training shows 9.
```
was: ModuleNotFoundError("No module named 'amp_C'",)
06/02/2020 14:45:14 - INFO - transformers.trainer - ***** Running training *****
06/02/2020 14:45:14 - INFO - transformers.trainer - Num examples = 28236463
06/02/2020 14:45:14 - INFO - transformers.trainer - Num Epochs = 9
06/02/2020 14:45:14 - INFO - transformers.trainer - Instantaneous batch size per device = 150
06/02/2020 14:45:14 - INFO - transformers.trainer - Total train batch size (w. parallel, distributed & accumulation) = 1200
06/02/2020 14:45:14 - INFO - transformers.trainer - Gradient Accumulation steps = 1
06/02/2020 14:45:14 - INFO - transformers.trainer - Total optimization steps = 200000
```
Can anyone suggest what could be reasons for such behavior?
| 06-02-2020 18:29:03 | 06-02-2020 18:29:03 | Not certain, but looks like maybe nvidia apex was not installed correctly?
"was: ModuleNotFoundError("No module named 'amp_C'",)"<|||||>thats warning.
Have followed `pip install -v --no-cache-dir ./` for apex installation.
Changing LR to 5e-5 reducing the loss.<|||||>@008karan, setting the loss to 5e-5 led your model to convergence?<|||||>@LysandreJik loss is decreasing as of now
> Also, weird thing is while I am setting the number of epochs 3 but in training its showing 9
can you comment on this?<|||||>I'll have a look.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,726 | closed | TFRobertaModelIntegrationTest requires tf | 06-02-2020 16:50:42 | 06-02-2020 16:50:42 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4726?src=pr&el=h1) Report
> Merging [#4726](https://codecov.io/gh/huggingface/transformers/pull/4726?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d976ef262e0b2c52363d201b2e14e5ecc42abbb3&el=desc) will **increase** coverage by `0.82%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4726?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4726 +/- ##
==========================================
+ Coverage 75.63% 76.46% +0.82%
==========================================
Files 128 128
Lines 20979 20979
==========================================
+ Hits 15867 16041 +174
+ Misses 5112 4938 -174
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4726?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4726/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `31.51% <0.00%> (-54.67%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4726/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (-0.42%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4726/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `89.91% <0.00%> (-0.25%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4726/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.63% <0.00%> (-0.17%)` | :arrow_down: |
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4726/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `86.94% <0.00%> (+0.18%)` | :arrow_up: |
| [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4726/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.49% <0.00%> (+6.36%)` | :arrow_up: |
| [src/transformers/configuration\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4726/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `100.00% <0.00%> (+25.00%)` | :arrow_up: |
| [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4726/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `93.43% <0.00%> (+75.91%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4726?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4726?src=pr&el=footer). Last update [d976ef2...49ce6ad](https://codecov.io/gh/huggingface/transformers/pull/4726?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 4,725 | closed | Save & load sparse models from the models database | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
Handle saving and loading pipeline for sparse models such as PruneBERT.
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
New sparse models will be extremely useful if we can use them by downloading the compressed version with the `.from_pretrained` functions.
## Your contribution
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
An example is already part of the current repo so I can try to create a PR later: https://github.com/huggingface/transformers/blob/master/examples/movement-pruning/Saving_PruneBERT.ipynb
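Until a proper API exists, the core of the save/load side could look roughly like this (a sketch only — the notebook above goes further with CSR layout and int8 quantization; the checkpoint name and the 50% sparsity threshold below are placeholders):
```python
import torch
from transformers import BertForQuestionAnswering

model = BertForQuestionAnswering.from_pretrained("bert-base-uncased")  # placeholder checkpoint

# save: keep heavily pruned 2D weights in a sparse layout, everything else dense
sparse_state = {}
for name, tensor in model.state_dict().items():
    if tensor.dim() == 2 and (tensor == 0).float().mean().item() > 0.5:
        sparse_state[name] = tensor.to_sparse()
    else:
        sparse_state[name] = tensor
torch.save(sparse_state, "sparse_model.bin")

# load: densify before handing the weights back to the model
loaded = torch.load("sparse_model.bin")
dense_state = {k: (v.to_dense() if v.is_sparse else v) for k, v in loaded.items()}
model.load_state_dict(dense_state)
```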
| 06-02-2020 16:27:37 | 06-02-2020 16:27:37 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,724 | closed | Fix CI after killing archive maps | 06-02-2020 14:20:55 | 06-02-2020 14:20:55 | ||
transformers | 4,723 | closed | never_split on slow tokenizers should not split | I'm actually not sure if it's the right behavior, but when using `do_basic_tokenization` on `BertTokenizer` the parameter `never_split` is not used to determine if a token should be sent to wordpiece tokenizer.
This PR checks, for each token returned by `basic_tokenizer`, whether the token is in the `never_split` set before sending it to wordpiece. If the token is found in `never_split`, it is added as-is to the returned list of tokens.
Updated `never_split: List` -> `never_split: Set`, as we're always testing for membership in the collection and never indexing into it. [Sets are ~10x faster for membership operations than lists](https://stackoverflow.com/a/17945009)
**Before:**
```python
tokenizer = BertTokenizer.from_pretrained(
"bert-base-cased",
use_fast=False,
never_split=['lol'],
do_basic_tokenize=True
)
tokenizer.tokenize("lol")
Out[4]: ['lo', '##l']
```
**After**
```python
tokenizer = BertTokenizer.from_pretrained(
"bert-base-cased",
use_fast=False,
never_split=['lol'],
do_basic_tokenize=True
)
tokenizer.tokenize("lol")
Out[5]: ['lol']
```
Related to #3518 | 06-02-2020 12:39:52 | 06-02-2020 12:39:52 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4723?src=pr&el=h1) Report
> Merging [#4723](https://codecov.io/gh/huggingface/transformers/pull/4723?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/76779363160a598f130433209a77f8a747351b61&el=desc) will **increase** coverage by `0.36%`.
> The diff coverage is `80.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4723?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4723 +/- ##
==========================================
+ Coverage 77.38% 77.74% +0.36%
==========================================
Files 128 128
Lines 21071 21071
==========================================
+ Hits 16305 16381 +76
+ Misses 4766 4690 -76
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4723?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4723/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.25% <80.00%> (-3.75%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4723/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.41% <0.00%> (-1.38%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4723/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4723/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.83% <0.00%> (+0.32%)` | :arrow_up: |
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4723/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `56.68% <0.00%> (+28.02%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4723?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4723?src=pr&el=footer). Last update [7677936...b0fd2b3](https://codecov.io/gh/huggingface/transformers/pull/4723?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 4,722 | closed | Unify label args | Following up from #4711, this is a proposal to deprecate any label argument that isn't `labels` (like `masked_lm_labels`, `lm_labels`, etc.) in favor of `labels`.
I've only done one model for now to get feedback on the design, once we have something you like, I can do them all (or have separate PRs if you think that's best). | 06-02-2020 11:30:52 | 06-02-2020 11:30:52 | Like it!<|||||>Added a tentative documentation for the kwargs, not sure if we want it or not.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4722?src=pr&el=h1) Report
> Merging [#4722](https://codecov.io/gh/huggingface/transformers/pull/4722?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/47a551d17b6ed2eaf03301f049006d559fca5cf3&el=desc) will **increase** coverage by `0.05%`.
> The diff coverage is `95.34%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4722?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4722 +/- ##
==========================================
+ Coverage 77.14% 77.19% +0.05%
==========================================
Files 128 128
Lines 21073 21130 +57
==========================================
+ Hits 16256 16311 +55
- Misses 4817 4819 +2
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4722?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.64% <ø> (ø)` | |
| [src/transformers/modeling\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/4722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `27.27% <ø> (ø)` | |
| [src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/4722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `87.94% <ø> (ø)` | |
| [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `71.92% <57.14%> (-0.20%)` | :arrow_down: |
| [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.69% <90.90%> (+0.16%)` | :arrow_up: |
| [src/transformers/modeling\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/4722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `77.67% <100.00%> (+0.46%)` | :arrow_up: |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/4722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.81% <100.00%> (+0.02%)` | :arrow_up: |
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `87.17% <100.00%> (+0.21%)` | :arrow_up: |
| [src/transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/4722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `98.18% <100.00%> (+0.02%)` | :arrow_up: |
| [src/transformers/modeling\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/4722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `76.89% <100.00%> (+0.46%)` | :arrow_up: |
| ... and [4 more](https://codecov.io/gh/huggingface/transformers/pull/4722/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4722?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4722?src=pr&el=footer). Last update [47a551d...68fddbd](https://codecov.io/gh/huggingface/transformers/pull/4722?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Made the same for all models since @julien-c liked it. A few comments as I was reading and deprecating.
- I found a few more wrong docstrings (mentioning `lm_label` when the arg was called `labels`) so fixed them.
- As @patrickvonplaten mentioned on #4711, `BertForMaskedLM` should be split in two (adding a `BertWithLMHead`) to remove the `lm_labels` argument. I made this a TODO to avoid this PR becoming too big.
- The GPT2 and openai models also have a version with two labels (`GPT2DoubleHeadsModel` and `OpenAIDoubleHeadsModel`), I renamed `lm_labels` to `labels` there but there may still be a need for a second label. Can revert the change on those models if we want each labels arg to have a useful name.
- In `LongformerModel`, the `label` argument is not used, should it be dropped?
Also, I note that quite a few docstrings have examples that don't match the model they document (for instance, an Electra model was using Bert as an example; there are a few such instances).
transformers | 4,721 | closed | Faster bert basic tokenizer | In this PR I tried 2 things:
* First, I changed some comparisons of the form `a == sth or a == otherthing` to `a in {sth, otherthing}`. It is faster and more readable (a small example follows this list).
* I noticed that some methods could actually be functions because they did not use anything from the class.
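A rough illustration of the first point, using a whitespace check as the example (not necessarily the exact code touched by this PR):
```python
char = "\t"

# before
if char == " " or char == "\t" or char == "\n" or char == "\r":
    print("whitespace")

# after: a set literal gives an O(1) membership test and reads better
if char in {" ", "\t", "\n", "\r"}:
    print("whitespace")
```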
I am a bit new to this, so I will accept any feedback. | 06-02-2020 11:26:19 | 06-02-2020 11:26:19 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4721?src=pr&el=h1) Report
> Merging [#4721](https://codecov.io/gh/huggingface/transformers/pull/4721?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/47a551d17b6ed2eaf03301f049006d559fca5cf3&el=desc) will **increase** coverage by `0.00%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4721?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4721 +/- ##
=======================================
Coverage 77.14% 77.14%
=======================================
Files 128 128
Lines 21073 21072 -1
=======================================
Hits 16256 16256
+ Misses 4817 4816 -1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4721?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4721/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `94.97% <100.00%> (-0.03%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4721/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.83% <0.00%> (+0.16%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4721?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4721?src=pr&el=footer). Last update [47a551d...6f7ff85](https://codecov.io/gh/huggingface/transformers/pull/4721?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Rather than making them regular functions I'd make them into static methods since they're still very much related to the class. You just don't need `self`.<|||||>> Rather than making them regular functions I'd make them into static methods since they're still very much related to the class. You just don't need `self`.
Done<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,720 | closed | [Reformer] Improved memory if input is shorter than chunk length | This PR improves memory and speed of Reformer for language generation.
Reformer is based on chunked self attention. This means that for an input length which is not a multiple of the chunk length, the input has to be padded to be a multiple of the chunk length. This is not the case though when the input length is less than the chunk length (happens in language generation). In this case, normal self attention should be applied to save memory.
Code is updated for both LSH and Local self attention and a test is added.
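For reference, the padding rule boils down to something like this standalone sketch (not the actual implementation):
```python
def required_padding(seq_len: int, chunk_len: int) -> int:
    # shorter than one chunk -> fall back to standard self-attention, no padding needed
    if seq_len <= chunk_len:
        return 0
    # otherwise pad up to the next multiple of the chunk length
    return (chunk_len - seq_len % chunk_len) % chunk_len

assert required_padding(10, 64) == 0    # e.g. the first steps of generation
assert required_padding(100, 64) == 28  # 100 -> 128
assert required_padding(128, 64) == 0
```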
| 06-02-2020 11:04:51 | 06-02-2020 11:04:51 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4720?src=pr&el=h1) Report
> Merging [#4720](https://codecov.io/gh/huggingface/transformers/pull/4720?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/47a551d17b6ed2eaf03301f049006d559fca5cf3&el=desc) will **decrease** coverage by `0.56%`.
> The diff coverage is `96.92%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4720?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4720 +/- ##
==========================================
- Coverage 77.14% 76.57% -0.57%
==========================================
Files 128 128
Lines 21073 21089 +16
==========================================
- Hits 16256 16149 -107
- Misses 4817 4940 +123
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4720?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4720/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `89.94% <ø> (ø)` | |
| [src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/4720/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `88.21% <96.92%> (+0.26%)` | :arrow_up: |
| [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4720/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `31.73% <0.00%> (-40.39%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4720/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.50% <0.00%> (-0.17%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4720/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.78% <0.00%> (+1.37%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4720?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4720?src=pr&el=footer). Last update [47a551d...6fe9553](https://codecov.io/gh/huggingface/transformers/pull/4720?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 4,719 | closed | About `do_basic_tokenize` behavior in BertTokenizer | # ❓ Questions & Help
In the [docs](https://huggingface.co/transformers/model_doc/bert.html?highlight=basic%20tokenization#berttokenizer) it just says that it performs a basic tokenization before the wordpiece tokenizer. But what actually is a "basic tokenization"? I would really appreciate a little more information on that.
Diving into the code I found out that the basic tokenization removes control characters from the text. I did not expect that behavior from what I read on the docs. That gave us problems because some characters like `` weren't tokenizing and we didn't know why.
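To make the surprise concrete, here is a small illustration (the expected outputs are in the comments; worth re-checking against your installed version):
```python
from transformers.tokenization_bert import BasicTokenizer

bt = BasicTokenizer(do_lower_case=True)
print(bt.tokenize("Hello, how ARE you?"))  # ['hello', ',', 'how', 'are', 'you', '?']

# control characters are silently stripped before splitting,
# which is the behavior that surprised us
print(bt.tokenize("Hel\x07lo"))            # ['hello']
```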
More generally, I think that transformers has a problem with the docs. Most of the time that something does not work for me, I don't bother to look at them and directly dive into the source code to understand what's happening. I am used to scikit-learn and maybe I am a bit biased, but I really think that these kinds of things can be a barrier for new people wanting to use transformers.
If there is something I can do to help, I am happy to send a PR. | 06-02-2020 10:52:15 | 06-02-2020 10:52:15 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>there is a `BasicTokenizer` in previous `pytorch_transformers` package:

This must be what "basic tokenize" does |
transformers | 4,718 | closed | Replace pad_token with -100 for LM loss calculation | The docs for both GPT and [GPT2](https://huggingface.co/transformers/model_doc/gpt2.html#gpt2lmheadmodel) specify that labels that are not -100 will be used for the calculation of the loss. So, the padding for the labels should be `-100`, not `tokenizer.pad_token_id`. | 06-02-2020 08:49:01 | 06-02-2020 08:49:01 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4718?src=pr&el=h1) Report
> Merging [#4718](https://codecov.io/gh/huggingface/transformers/pull/4718?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e80d6c689bd62f805a5c8d77ec0cc3b09f240d14&el=desc) will **decrease** coverage by `0.64%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4718?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4718 +/- ##
==========================================
- Coverage 77.10% 76.45% -0.65%
==========================================
Files 128 128
Lines 21723 21725 +2
==========================================
- Hits 16749 16610 -139
- Misses 4974 5115 +141
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4718?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/4718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `89.55% <100.00%> (+0.32%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0.00%> (-81.21%)` | :arrow_down: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.56% <0.00%> (-2.58%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.51% <0.00%> (-1.39%)` | :arrow_down: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `38.08% <0.00%> (-1.17%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.49% <0.00%> (-0.78%)` | :arrow_down: |
| [src/transformers/benchmark/benchmark\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `72.80% <0.00%> (-0.30%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.79% <0.00%> (ø)` | |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.61% <0.00%> (+0.11%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4718?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4718?src=pr&el=footer). Last update [e80d6c6...6cceabd](https://codecov.io/gh/huggingface/transformers/pull/4718?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Closed the PR by mistake... Re-opening it.<|||||>Just a quick question for @mfuntowicz, is `copy.deepcopy` a performant way to clone a tensor? (given that this is called at each training step)<|||||>I saw `deepcopy` being used elsewhere in the code, so added it as is. Just looking at the new tensor [documentation](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.new_tensor), and they recommend using `.clone().detach()`. Happy to change it to that.<|||||>Bumping this since I haven't seen any activity in a few days.<|||||>Yes `.clone().detach()` sounds good.<|||||>LGTM but let's let @LysandreJik have a last check and merge this<|||||>Thanks @julien-c!<|||||>Hey @LysandreJik, can you please review when you have a chance? Thanks!<|||||>Thanks @setu4993! |
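For readers landing on this thread later, a minimal sketch of the change being discussed (the function name and shapes are illustrative, not the exact collator code):

```python
import torch

def build_lm_labels(input_ids: torch.Tensor, pad_token_id: int) -> torch.Tensor:
    # Copy the batch with .clone().detach() (per the discussion above) rather than deepcopy
    labels = input_ids.clone().detach()
    # GPT/GPT-2 ignore label positions set to -100 when computing the LM loss,
    # so padded positions must be -100 rather than the pad token id
    labels[labels == pad_token_id] = -100
    return labels
```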
transformers | 4,717 | closed | Override get_vocab for fast tokenizer. | 06-02-2020 08:32:47 | 06-02-2020 08:32:47 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4717?src=pr&el=h1) Report
> Merging [#4717](https://codecov.io/gh/huggingface/transformers/pull/4717?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/76779363160a598f130433209a77f8a747351b61&el=desc) will **decrease** coverage by `1.23%`.
> The diff coverage is `50.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4717?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4717 +/- ##
==========================================
- Coverage 77.38% 76.14% -1.24%
==========================================
Files 128 128
Lines 21071 21073 +2
==========================================
- Hits 16305 16046 -259
- Misses 4766 5027 +261
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4717?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.59% <50.00%> (+0.02%)` | :arrow_up: |
| [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `18.70% <0.00%> (-74.83%)` | :arrow_down: |
| [src/transformers/configuration\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/4717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `76.92% <0.00%> (-23.08%)` | :arrow_down: |
| [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/4717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `77.18% <0.00%> (-6.35%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.44% <0.00%> (-0.42%)` | :arrow_down: |
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/4717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `86.77% <0.00%> (-0.19%)` | :arrow_down: |
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `56.68% <0.00%> (+28.02%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4717?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4717?src=pr&el=footer). Last update [7677936...8217ee0](https://codecov.io/gh/huggingface/transformers/pull/4717?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 4,716 | closed | Can I save a word embedding from BERT and used again later for computational purpose(as BERT takes much more time). Just like in Glove. If yes then How? and is this good idea to do so? | # ❓ Questions & Help
| 06-02-2020 08:09:23 | 06-02-2020 08:09:23 | As per my understanding, you can!
If you read the BERT paper by Devlin et al., you can see that two suggested ways to extract word embeddings are to concatenate the last four hidden layers (9 to 12), generating a 4*768=3072-sized embedding for each token, or alternatively to sum or average the last 4 layers to generate vectors of size 768.
You can also store sentence embeddings instead: the CLS token serves as an aggregate representation of the sentence (not a very good one, though, unless the model was fine-tuned well on your data or the LM was pretrained on similar data, and even then the sentence embeddings through that CLS token can be substandard). Averaging the second-to-last hidden layer (i.e. averaging the token embeddings of the sentence at the 11th hidden layer) seems to generate decent sentence embeddings (of size 768, the same as one token, since it is averaged rather than concatenated).
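A minimal sketch of what this could look like in code (the sentence and file name are placeholders, and the layer combinations are just the ones mentioned above):

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

input_ids = tokenizer.encode("This is a placeholder sentence.", return_tensors="pt")
with torch.no_grad():
    outputs = model(input_ids)
hidden_states = outputs[2]  # tuple: embedding layer output + one tensor per encoder layer

# Option 1: concatenate the last four layers -> (batch, seq_len, 4 * 768) per-token vectors
token_vectors = torch.cat(hidden_states[-4:], dim=-1)

# Option 2: average the second-to-last layer over tokens -> (batch, 768) sentence vector
sentence_vector = hidden_states[-2].mean(dim=1)

# Save once, reuse later without re-running BERT
torch.save(sentence_vector, "sentence_vector.pt")
```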
Hope this helps!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,715 | closed | how to make a multi-task deep neural network baseline using huggingface transformers? | I was trying to build a [multi-task deep neural network][1] using [xlm roberta large model][2] for a multilingual classification problem. my training dataset contains 4 columns :
1. ID
2. comment_text (according to id number,each users english comment is stored in this column. example comment : "you are a loser")
3. toxic (this column contains 1/0,0 means not toxic,1 means toxic)
4. personal_attack(this column also contains 0/1,,0 means the comment is not a personal attack type comment and 1 means opposite)
here is my models code :
def build_model(transformer, max_len=512):
input_word_ids = Input(shape=(max_len,), dtype=tf.int32, name="input_word_ids")
sequence_output = transformer(input_word_ids)[0]
cls_token = sequence_output[:, 0, :]
out = Dense(1, activation='sigmoid',name = 'y_train')(cls_token)
out1 = Dense(1, activation='sigmoid',name = 'y_aux')(cls_token)
model = Model(inputs=input_word_ids, outputs=[out, out1])
model.compile(Adam(lr=1e-5), loss='binary_crossentropy', metrics=['accuracy'])
return model
here is the code for train and test dataset :
train_dataset = (
tf.data.Dataset
.from_tensor_slices((x_train,
{ 'y_train':train.toxic.values,
'y_aux':train.identity_attack.values}
))
.repeat()
.shuffle(2048)
.batch(BATCH_SIZE)
.prefetch(AUTO)
)
test_dataset = (
tf.data.Dataset
.from_tensor_slices(x_test)
.batch(BATCH_SIZE)
)
then for training model i used this code :
EPOCHS = 3
n_steps = x_train.shape[0] // BATCH_SIZE
train_history = model.fit(
train_dataset,
steps_per_epoch=n_steps,
epochs=EPOCHS
)
i don't wish to perform validation so just train_dataset was given for model.fit()
after 3 epoch i get performance like this :
Epoch 3/3
1658/1658 [==============================] - 887s 535ms/step - loss: 0.0591 - y_train_loss: 0.0175 - y_aux_loss: 0.0416 - y_train_accuracy: 0.9940 - y_aux_accuracy: 0.9821
Now, my test set has 1 column:
1. comments (this column contains comments in non-English languages; remember that in the train set we only had English comments, while here in the test set all comments are non-English)
So I expect my model to predict, for each test set comment, whether it is toxic or not.
As you can see from the 3rd epoch results, I am computing y_train_accuracy: 0.9940 and y_aux_accuracy: 0.9821.
Now I want my model to predict y_test (toxic/not toxic) only.
For that I tried:
sub['toxic'] = model.predict(test_dataset, verbose=1)
sub is a dataframe that contains all the id of test set and using **test_dataset** i was trying to predict each and every test set comments but i get this error :
499/499 [==============================] - 126s 253ms/step
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-23-1dc84858379e> in <module>
----> 1 sub['toxic'] = model.predict(test_dataset, verbose=1)
2 sub.to_csv('submission.csv', index=False)
/opt/conda/lib/python3.7/site-packages/pandas/core/frame.py in __setitem__(self, key, value)
2936 else:
2937 # set column
-> 2938 self._set_item(key, value)
2939
2940 def _setitem_slice(self, key, value):
/opt/conda/lib/python3.7/site-packages/pandas/core/frame.py in _set_item(self, key, value)
2998
2999 self._ensure_valid_index(value)
-> 3000 value = self._sanitize_column(key, value)
3001 NDFrame._set_item(self, key, value)
3002
/opt/conda/lib/python3.7/site-packages/pandas/core/frame.py in _sanitize_column(self, key, value, broadcast)
3634
3635 # turn me into an ndarray
-> 3636 value = sanitize_index(value, self.index, copy=False)
3637 if not isinstance(value, (np.ndarray, Index)):
3638 if isinstance(value, list) and len(value) > 0:
/opt/conda/lib/python3.7/site-packages/pandas/core/internals/construction.py in sanitize_index(data, index, copy)
609
610 if len(data) != len(index):
--> 611 raise ValueError("Length of values does not match length of index")
612
613 if isinstance(data, ABCIndexClass) and not copy:
ValueError: Length of values does not match length of index
Now I have 4 questions:
1. Is my implementation correct?
2. Why am I getting that error? If I treat this problem as a simple multilingual classification task and compute a single loss for one target, I get no error at all, so where is the trouble coming from? (See the note after the reference links below.)
3. How can I solve the issue?
4. As it is my first time doing multi-task learning with huggingface transformers, what are your suggestions for updating my model so that it can generalize better?
[1]: https://arxiv.org/abs/1706.05098
[2]: https://huggingface.co/jplu/tf-xlm-roberta-large
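Regarding question 2 above: a Keras model with two output heads returns a list of arrays from `model.predict`, one array per head, and assigning that whole list to a single DataFrame column raises exactly this length-mismatch error. A minimal sketch of keeping only the toxic head (variable names follow the snippets above):

```python
preds = model.predict(test_dataset, verbose=1)
# preds is a list: [toxic_predictions, identity_attack_predictions]
sub['toxic'] = preds[0].ravel()
sub.to_csv('submission.csv', index=False)
```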
| 06-02-2020 06:27:17 | 06-02-2020 06:27:17 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,714 | closed | ImportError: cannot import name 'MODEL_WITH_LM_HEAD_MAPPING' | ImportError: cannot import name 'MODEL_WITH_LM_HEAD_MAPPING' from 'transformers' (C:\Users\<username>\anaconda3\envs\tensorflow\lib\site-packages\transformers\__init__.py)
Model I am using (Bert, XLNet ...): GPT2
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
[x] the official example scripts: (give details below)
a necessary package does not exist for import
The tasks I am working on is:
[x] my own task or dataset: (give details below)
I am trying to finetune the GPT2 library to work with recipes
## To reproduce
Steps to reproduce the behavior:
1. create conda virtualenv
2. install latest version of Tensorflow GPU
3. install transformers library
4. navigate to the 'language-modeling' folder
5. run 'python run_language_modeling.py' in conda virtualenv
LOG:
```
python run_language_modeling.py
2020-06-01 23:53:43.603759: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
Traceback (most recent call last):
File "run_language_modeling.py", line 29, in <module>
from transformers import (
ImportError: cannot import name 'MODEL_WITH_LM_HEAD_MAPPING' from 'transformers' (C:\Users\<name>\anaconda3\envs\tensorflow\lib\site-packages\transformers\__init__.py)
```
## Expected behavior
There should be an error with not receiving variables or no output at all. If you were to insert data it would train an instance of GPT2 and save it to the output directory
## Environment info
- `transformers` version: 2.10.0
- Platform: Windows-10-10.0.18362-SP0
- Python version: 3.7.7
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU): 2.1.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
It would be nice to figure out what the issue is. I am aware that there are 3 questions similar to this, but they all use torch while I am using tensorflow. All of the solutions I have tried (reinstalling tensorflow, reinstalling transformers, updating packages, reformatting code, etc.) have not worked. Thanks.
| 06-02-2020 03:58:21 | 06-02-2020 03:58:21 | This might help: #3444<|||||>`MODEL_WITH_LM_HEAD_MAPPING` is a mapping containing all the *Pytorch* models that have an LM head. Since you don't have PyTorch installed, you can't import them.
With TensorFlow you're probably looking for `TF_MODEL_WITH_LM_HEAD_MAPPING`.
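For illustration, a quick way to inspect the TensorFlow-side mapping (assuming TensorFlow and a recent `transformers` release are installed):

```python
from transformers import TF_MODEL_WITH_LM_HEAD_MAPPING

# maps config classes to their TensorFlow LM-head model classes
print(list(TF_MODEL_WITH_LM_HEAD_MAPPING.items())[:3])
```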
Please note that the `run_language_modeling.py` script is currently PyTorch only. A TensorFlow version will be available in the future. |
transformers | 4,713 | closed | Is there any need to fine-tune the already pre-trained GPT-2 models? | Hello,
If I am using the pre-trained GPT-2 for my research, should I still fine-tune the already pre-trained models with my dataset? I am a bit confused by the term "pre-trained".
Thank you, | 06-01-2020 23:38:03 | 06-01-2020 23:38:03 | "Finetuning" a GPT-2 model is generally used for generation, to make it output text more similar to a given dataset.
If you are using it for other tasks (e.g. binary predictions), you may not have to finetune it.<|||||>Hello,
Thank you for your reply.
So say if I am trying to solve multiple choice questions with GPT2DoubleHeadsModel, should I just use the pre-trained model without fine-tuning?
Thanks, <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,712 | closed | Converting model to pytorch | # 🐛 Bug
Folks, I am trying to convert the Biobert model to Pytorch. Here are the things that I did so far:
**1. For the vocab:** I am trying to convert the vocab using solution from #69 :
```tokenizer = BartTokenizer.from_pretrained('/content/biobert_v1.1_pubmed/vocab.txt')```
I get :
`OSError: Model name '/content/biobert_v1.1_pubmed' was not found in tokenizers model name list (bart-large, bart-large-mnli, bart-large-cnn, bart-large-xsum). We assumed '/content/biobert_v1.1_pubmed' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.`
I don’t have the vocab.json, so I how do I convert the vocab for the tokenizer ?
**2. For the model:** As the out of the box `pytorch_pretrained_bert.convert_tf_checkpoint_to_pytorch` did not work I customized it per #2 by adding:
```
excluded = ['BERTAdam','_power','global_step']
init_vars = list(filter(lambda x:all([True if e not in x[0] else False for e in excluded]),init_vars))
```
With this the model 'seems' to be converting fine. But When I load this using:
`model = BartForConditionalGeneration.from_pretrained('path/to/model/biobert_v1.1_pubmed_pytorch.model') `
I still get
`UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte`
Can you please help me understand what is going on here?
| 06-01-2020 22:44:20 | 06-01-2020 22:44:20 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,711 | closed | Make docstring match args | When replying to #4698, I realized some language model docstrings are using arguments that are not present in the function signature. This PR addresses that (for all the ones I found at least).
The alternative would be to change the argument names in the function signatures (if it makes the various model APIs more consistent). | 06-01-2020 19:18:02 | 06-01-2020 19:18:02 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4711?src=pr&el=h1) Report
> Merging [#4711](https://codecov.io/gh/huggingface/transformers/pull/4711?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6449c494d0f40f0b70442f3da9e61f042ff807a8&el=desc) will **decrease** coverage by `0.18%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4711?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4711 +/- ##
==========================================
- Coverage 77.32% 77.14% -0.19%
==========================================
Files 128 128
Lines 21071 21071
==========================================
- Hits 16294 16256 -38
- Misses 4777 4815 +38
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4711?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/4711/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.79% <ø> (ø)` | |
| [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/4711/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `72.11% <ø> (-14.11%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4711/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.41% <ø> (-1.38%)` | :arrow_down: |
| [src/transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/4711/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `77.00% <ø> (ø)` | |
| [src/transformers/modeling\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/4711/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `89.46% <ø> (ø)` | |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4711/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `89.94% <0.00%> (-0.24%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4711/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (+0.41%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4711/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.66% <0.00%> (+1.80%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4711?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4711?src=pr&el=footer). Last update [6449c49...e7d6cb9](https://codecov.io/gh/huggingface/transformers/pull/4711?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>I can work on this if there is no one on it. Quick question though: what about the models that have both `lm_labels` *and* `masked_lm_labels`? encode_decoder is one of them for instance, don't know if there are more.<|||||>Yes, that's the case for [`BertForMaskedLM` for example](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L850). I don't really know the best way to handle this.
As with this update we're trying to have the *exact* same API for all models so that the training/inference code is model agnostic, I'd say that we should look for the most natural on a case-by-case basis.
For example with the `BertForMaskedLM` example, I believe the `labels` should be the `masked_lm_labels`, as BERT should be used for MLM rather than CLM. <|||||>> the
As far as I know, `BertForMaskedLM` does not really use `lm_labels` at the moment. I think it was added to support a causal `Bert` in an encoder-decoder setting so that the decoder can be trained with a causal mask with the language model objective. Since the encoder-decoder framework is not really released yet, I think we can also add a new `BertWithLMHead` class so that each class only has one `labels` argument. It would be a breaking change in terms of the class name for people that already implemented Bert2Bert models, but I think it's worth it for consistency. What do you think?
@sgugger - In the encoder-decoder model I added both `lm_labels` and `masked_lm_labels` because `Bert` has both `lm_labels` and `masked_lm_labels`. Normally, encoder-decoder models are trained with a CLM objective so not sure if we even need `masked_lm_lables` for the encoder-decoder model wrapper. <|||||>@patrickvonplaten good for me |
transformers | 4,710 | closed | Specify PyTorch versions | Specify that the examples require a different PyTorch version than the base library. | 06-01-2020 18:52:30 | 06-01-2020 18:52:30 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4710?src=pr&el=h1) Report
> Merging [#4710](https://codecov.io/gh/huggingface/transformers/pull/4710?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6449c494d0f40f0b70442f3da9e61f042ff807a8&el=desc) will **increase** coverage by `0.05%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4710?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4710 +/- ##
==========================================
+ Coverage 77.32% 77.38% +0.05%
==========================================
Files 128 128
Lines 21071 21071
==========================================
+ Hits 16294 16305 +11
+ Misses 4777 4766 -11
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4710?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4710/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.85% <0.00%> (+0.41%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4710/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.50% <0.00%> (+1.64%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4710?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4710?src=pr&el=footer). Last update [6449c49...6081929](https://codecov.io/gh/huggingface/transformers/pull/4710?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 4,709 | closed | Wrong argument passed during TFRobertaClassificationHead initialization | # 🐛 Bug
## Information
There is an issue preventing a RoBERTa classification model from being serialized. It is related to a problem in passing `config` as the first argument to `tf.keras.layers.Layer`. However, the [expected positional argument](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Layer) is `trainable`:
https://github.com/huggingface/transformers/blob/d6a677b14bcfd56b22fafeb212a27c6068886e07/src/transformers/modeling_tf_roberta.py#L327-L331
This is the root cause behind issue #3664 (about serialization).
A related fix for GPT2: #2738.
Model I am using (Bert, XLNet ...):
RoBERTa
Language I am using the model on (English, Chinese ...):
English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run the code below:
```python
from transformers import (TFRobertaForSequenceClassification)
base_model = TFRobertaForSequenceClassification.from_pretrained("roberta-base")
print(base_model.classifier.trainable)
```
## Expected behavior
The output is:
`True`
The current output is
```
RobertaConfig {
"architectures": [
"RobertaForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"eos_token_id": 2,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-05,
"max_position_embeddings": 514,
"model_type": "roberta",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 1,
"type_vocab_size": 1,
"vocab_size": 50265
}
```
## Environment info
- `transformers` version: 2.10.0
- Platform: Colab
- Python version: 3.6.9
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 2.2.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 06-01-2020 18:43:56 | 06-01-2020 18:43:56 | Hello,
The way the TF models behave has been recently updated. Can you please retry with the `master` version?<|||||>Hi Julien,
Thanks. It's the same issue in the `master` currently:
https://github.com/huggingface/transformers/blob/9f5d5a531d769d07403f59661884e254f8420afe/src/transformers/modeling_tf_roberta.py#L320-L324
`config` is still passed as the first parameter to the `__init__` of parent class `tf.keras.layers.Layer` while the latter expects `trainable` as the first parameter. I fixed it on my side by simply removing that `config` parameter from the `super().__init__(` call. But I wasn't sure if this affects other parts of the repo. Otherwise, I would have submitted a PR.
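For illustration, a sketch of the kind of change described in this thread (not the exact patch that was merged):

```python
import tensorflow as tf

class TFRobertaClassificationHead(tf.keras.layers.Layer):
    """Sketch of the head's constructor; the super() call is the point here."""

    def __init__(self, config, **kwargs):
        # Forward only **kwargs to tf.keras.layers.Layer; passing `config`
        # positionally would be interpreted as Keras's `trainable` argument.
        super().__init__(**kwargs)
        self.dense = tf.keras.layers.Dense(config.hidden_size, activation="tanh")
        self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob)
        self.out_proj = tf.keras.layers.Dense(config.num_labels)
```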
<|||||>Ok, thanks for the feedback. Indeed, the `config` parameter is important. I will take some time to review this. Sorry for the inconvenience.<|||||>You are totally right, `config` here is useless and the same appears in other models . Do you mind to do a PR? And I will help you to fix all this :)<|||||>Thanks for making the checks. I submitted a PR: https://github.com/huggingface/transformers/pull/4884 |
transformers | 4,708 | closed | Is the separation token absolutely necessary if I use GPT2DoubleHeadsModel with token_type_ids? | Hello,
I am trying to use the GPT2DoubleHeadsModel to process the multiple choice questions.
For the pre-processing of the multiple choice questions, I didn't add any special separating token between the multiple choice question and the multiple choice option. Instead, I generated the token_type_ids, which denote 0 for the question portion of the text, and 1 for the multiple choice option. Then I tried to make the GPT2DoubleHeadsModel to predict the correct answer by doing:
```python
gpt2DoubleHeadsModel(input_ids=input_ids, token_type_ids = token_type_ids)
```
Is this practice acceptable? or do I absolutely need to insert a special separation token between the question text and the multiple choice option text?
Thank you,
| 06-01-2020 18:28:01 | 06-01-2020 18:28:01 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,707 | closed | How tokenizers work | Good afternoon.
If possible, I would like to ask a few questions that I have not been able to solve for several days.
1) What is the _merges.txt_ file used by `ByteLevelBPETokenizer` for? Does it store the config for the tokenizer? I dug into it, but did not manage to understand its purpose.
2) Is it possible to get offsets for the text when using the model-specific tokenizers? For instance, I am using ElectraTokenizer and I would like it to return the offsets for my texts - do you perhaps have any examples of how to do that?
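To illustrate the kind of output I am after, here is a rough sketch with the standalone `tokenizers` package, which does expose offsets (the vocab path is just a placeholder; ELECTRA uses a WordPiece vocabulary, so `BertWordPieceTokenizer` is a reasonable stand-in):

```python
from tokenizers import BertWordPieceTokenizer

tokenizer = BertWordPieceTokenizer("vocab.txt", lowercase=True)  # placeholder path
encoding = tokenizer.encode("Where are the character offsets?")
print(encoding.tokens)
print(encoding.offsets)  # list of (start, end) character spans into the original text
```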
Thanks in advance! | 06-01-2020 18:27:04 | 06-01-2020 18:27:04 | pinging @n1t0, lead on [huggingface/tokenizers](https://github.com/huggingface/tokenizers) :)<|||||>Thank you! Closing it here, moved link: [https://github.com/huggingface/tokenizers/issues/290](url) |
transformers | 4,706 | closed | When using the Hugging Face Transformer, if I set my pad_token to be a different token then the default, do I need to train my model on that new pad_token as well? | Hello,
For using the GPT2DoubleHeadsModel, I used ``<eos>`` as the last token of my text sequence, which my model will use to make predictions for multiple-choice questions.
I also set ``<pad>`` as the padding token in my tokenizer, which is different than the default.
When using the Hugging Face pre-trained GPT2DoubleHeadsModel, do I need to train the already pre-trained Transformer because the two tokens I mentioned above are new?
Thank you,
| 06-01-2020 17:56:07 | 06-01-2020 17:56:07 | Hi @h56cho, if you add new tokens then yes, you'll need to train the model. If you just want `eos` and `pad` token then its a good idea to use the model defaults. `eos` and `pad` tokens are already available in GPT2 model <|||||>Hi,
Thank you for your reply.
So what I am getting is that if I add any new "special token" onto the existing pre-trained tokenizer, I will need to re-train the pre-trained Transformer to make it learn that new special token.
But what if I just add extra non-special tokens? for example, a word "paradox" is not included in the existing GPT-2 tokenizer, so say I add the word "paradox" to the existing set of GPT-2 vocabulary. If I didn't make any changes to the special tokens in the GPT-2 tokenizer, do I still need to train the pre-trained GPT-2 because I added a new word to a set of vocabulary?
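For concreteness, this is the kind of addition I mean (a minimal sketch):

```python
from transformers import GPT2Tokenizer, GPT2DoubleHeadsModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2DoubleHeadsModel.from_pretrained("gpt2")

tokenizer.add_tokens(["paradox"])              # a regular token, not a special token
model.resize_token_embeddings(len(tokenizer))  # grow the embedding matrix to match
```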
Thanks, |
transformers | 4,705 | closed | Is transformers 2.11.0 compatible with tokenizers 0.8.0(-dev*)? | 06-01-2020 17:22:30 | 06-01-2020 17:22:30 | ||
transformers | 4,704 | closed | Tensorflow XLMRoberta Multi-Class Problem | ## Details
I am attempting to fine tune an XLMRoberta sequence classification model. I have an array of text snippets from physicians labelled 1-8 with various diagnostic indications. I've created a tensorflow dataset object with the
```
convert_raw_to_xlmroberta_tfdataset()
```
function seen here (https://stackoverflow.com/questions/62095316/tensorflow-xlmroberta-multi-class)
I then create the model:
```
from transformers import TFXLMRobertaForSequenceClassification
import tensorflow as tf
learning_rate = 2e-5
number_of_epochs = 2
model = TFXLMRobertaForSequenceClassification.from_pretrained("jplu/tf-xlm-roberta-base")
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate, epsilon=1e-08)
loss = tf.keras.losses.SparseCategoricalCrossentropy()
metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')
model.compile(optimizer=optimizer, loss=loss, metrics=[metric])
```
but consistently get this error:
```
ValueError: Shapes (None, 1, 8) and (None, 2) are incompatible
```
(full trace at the above SO link).
I've tried both Sparse Categorical Cross Entropy and just Categorical Cross Entropy. I've used one-hot encoded labels and "normal" labels. Is it even possible to do multi-class classification with TFXLMRoberta? It started to work when I fed in a binary dummy set of labels.
| 06-01-2020 15:40:30 | 06-01-2020 15:40:30 | Hello! If you're using 8 labels, you'll need to tell the model that it needs to do 8-way classification. You can do so by specifying the `num_labels` argument during instantiation:
```py
model = TFXLMRobertaForSequenceClassification.from_pretrained(
"jplu/tf-xlm-roberta-base",
num_labels=8
)
```
Let me know if this fixes your issue.<|||||>Yes! That fixed the issue. I apologize for my oversight of that argument. Thank you very much for your time.<|||||>Sure, my pleasure! |
transformers | 4,703 | closed | Fused_norm_layer_cuda | Hi,
I have created a model on kaggle tpu using run_pretraining.py and it gives me tf_checkpoint file. First of all the model is 12gb in size due to which it is throwing memory error while loading weights into it on kaggle and colab.
Is there a way to reduce the size?
Secondly, I tried performing the operation over a linux server without Gpu so it throws an error no module "fused_norm_layer_cuda" which is obvious but I want to know if there is a way to convert the model from tf to pytorch without GPU, or is there any parameter in BertForPreTraining which can instantiate the model without GPU. Please help. 🤗 | 06-01-2020 14:41:22 | 06-01-2020 14:41:22 | Hi! What's your model? Do you have the configuration file?
What is the `run_pretraining.py`? This doesn't seem to be one of our scripts.<|||||>Hi, extremely sorry for not providing adequate information.
I used google-research/bert for creating the model. The bert repo contains the run_pretraining.py, before that I created the vocab.txt file from my own data and after that I used create_pretraining_data.py and yes I do have the config.json file as well.
I am trying to convert tf_checkpoint file to pytorch_model.bin where I am encountering the issue of memory error on Kaggle and Colab and fused_norm_layer_cuda error on linux server without GPU. Just want to know if it is possible to convert from tf_checkpoint to pytorch without GPU or any way I can reduce the model size to load weights on kaggle or colab without running out of memory<|||||>Do you mind sharing the configuration file?
So you pre-trained a model using google-research/bert, and now you're trying to convert it to one of our models, is that correct?<|||||>{
"attention_probs_dropout_prob": 0.1,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 384,
"initializer_range": 0.02,
"intermediate_size": 3072,
"max_position_embeddings": 256,
"num_attention_heads": 12,
"num_hidden_layers": 6,
"type_vocab_size": 2,
"vocab_size": 2800000
}
This is the config.json file.
Not exactly, I have used bert repo to create a model which is in tensorflow and have files model.ckpt-100000.data-00000-of-00001 which seems to be in tensorflow but I need model_pytorch.bin so converting from tf_checkpoint file to pytorch using this amazing library. Trying to use the code written in convert_bert_original_tf_checkpoint_to_pytorch.py<|||||>Right, you're using the correct script!
Please note, that having a 2_800_000 vocabulary size is absolutely huge!
Unfortunately it's hard to get around memory errors during conversion. The script should work whether you have a GPU available or not. Can you show me the command you used to convert the model on your linux server?<|||||>I have little data and many unique terms which are written in English but are addresses from all over India due to which I have to keep the vocabulary size this high and due to which I compromised with the dimensions of the model and even the embedding size as well.
Here is the code snippet:-
import torch
from pytorch_transformers.modeling_bert import BertConfig, BertForPreTraining, load_tf_weights_in_bert
tf_checkpoint_path="./distlang/"
bert_config_file = "./config.json"
pytorch_dump_path="./distlangpytorch"
config = BertConfig.from_json_file(bert_config_file)
print("Building PyTorch model from configuration: {}".format(str(config)))
model = BertForPreTraining(config)
# Load weights from tf checkpoint
load_tf_weights_in_bert(model, config, tf_checkpoint_path)
# Save pytorch-model
print("Save PyTorch model to {}".format(pytorch_dump_path))
torch.save(model.state_dict(), pytorch_dump_path)
Its, throwing error while instantiation of the model model = BertForPreTraining(config)
I have apex installed but without cuda_ext as my server is not having GPU, so it won't install 😅<|||||>Hmm I can't reproduce on my setup. Two things:
- Can you update your `transformers` library? It seems that you're using `pytorch-transformers` which is starting to be quite old now. We've patched quite a few bugs since then! > `pip install -U transformers` and update your imports to `from transformers import`
- Do you mind pasting the stack trace?<|||||>If I use thrasformers library will it work fine?
And yes here the stack trace:-
Building PyTorch model from configuration: {
"attention_probs_dropout_prob": 0.1,
"finetuning_task": null,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 384,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 256,
"num_attention_heads": 12,
"num_hidden_layers": 6,
"num_labels": 2,
"output_attentions": false,
"output_hidden_states": false,
"pruned_heads": {},
"torchscript": false,
"type_vocab_size": 2,
"vocab_size": 2800000
}
Traceback (most recent call last):
File "tftopytorch.py", line 18, in <module>
model = BertForPreTraining(config)
File "/home/rohit.anand/envs/data_augment/lib/python3.6/site-packages/pytorch_transformers/modeling_bert.py", line 761, in __init__
self.bert = BertModel(config)
File "/home/rohit.anand/envs/data_augment/lib/python3.6/site-packages/pytorch_transformers/modeling_bert.py", line 651, in __init__
self.embeddings = BertEmbeddings(config)
File "/home/rohit.anand/envs/data_augment/lib/python3.6/site-packages/pytorch_transformers/modeling_bert.py", line 240, in __init__
self.LayerNorm = BertLayerNorm(config.hidden_size, eps=config.layer_norm_eps)
File "/home/rohit.anand/envs/data_augment/lib/python3.6/site-packages/apex/normalization/fused_layer_norm.py", line 133, in __init__
fused_layer_norm_cuda = importlib.import_module("fused_layer_norm_cuda")
File "/usr/lib/python3.6/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 953, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'fused_layer_norm_cuda'<|||||>Yes, upgrading the library should solve your problems. Since [this commit](https://github.com/huggingface/transformers/commit/98dd19b96b351f481e1268ab6c7b035bb21d106e) we're not using apex for the LayerNorm anymore, so having PyTorch installed on a recent version alongside `transformers` on a recent version should solve your issue!<|||||>Turns out the problem is solved. Thanks a lot man for saving the day. Amazing library as well 🤗 really appreciate your quick response and understanding my problem and solving it.<|||||>My pleasure :hugs: <|||||>Hey the above problem is solved but I am running into this problem while writing the file to the directory, to my understanding I have to provide the path where I have to save the pyorch model and I am providing a path. Am I missing out something here?
Here's the snippet of the error the code is same as mentioned above just used transformers instead of pytorch_transformer
Save PyTorch model to ./distlangpytorch/
Traceback (most recent call last):
File "tftopytorch.py", line 25, in <module>
torch.save(model.state_dict(), pytorch_dump_path)
File "/home/rohit.anand/envs/data_augment/lib/python3.6/site-packages/torch/serialization.py", line 369, in save
with _open_file_like(f, 'wb') as opened_file:
File "/home/rohit.anand/envs/data_augment/lib/python3.6/site-packages/torch/serialization.py", line 234, in _open_file_like
return _open_file(name_or_buffer, mode)
File "/home/rohit.anand/envs/data_augment/lib/python3.6/site-packages/torch/serialization.py", line 215, in __init__
super(_open_file, self).__init__(open(name, mode))
IsADirectoryError: [Errno 21] Is a directory: './distlangpytorch/'<|||||>We usually recommend saving using our method `from_pretrained`, as it saves the configuration as well as the model state dict.
Can you try using the following?
```py
model = load_tf_weights_in_bert(model, config, tf_checkpoint_path)
model.save_pretrained(pytorch_dump_path)
```<|||||>It worked fine, the model is saved now and the size of the model is 1/3rd the original size which was of the tf_checkpoint. Is the model quantized as well, is it something implemented in the library? Also thanks a ton have been struggling since evening and once posted the issue here got the resolution immediately. 🤗<|||||>That's great! The model is not quantized, having a 2/3 reduction is surprising. It's possible that your checkpoint originally had optimizer states which do take quite a big amount of memory. We don't save that in our checkpoints.
Cool, glad I could help!<|||||>Hey, its not related to the issue can you please help me with post training quantization. I only am able to see tutorials of dynamic quantization which indeed increases the size of my model. I am not able to find any decent tutorials on post training(static) quantization. Tried torch.quantization.quantize but the method asks for two positional arguments fn_run and fn_args and I am not sure how to define them or create a function to pass to these arguments<|||||>So, it turns out I might have figured it out. The model size is 4.3 GB my vocab size is 28_000_00 and hidden layer size is 384. So the vocab part of model turn out to be 4X2800000X384 = 4.3GB. What I don't understand is why the model size only have vocabulary part and no part of weights strange. And is there any free resource where I can train this model for classification you might be aware of 🙈, On colab and kaggle the kernel restarts because they only have 12,16 GB RAM, the model needs a bit more 🙈 |
transformers | 4,702 | closed | Why DataCollatorForLanguageModeling do not make attention_mask feature? | # ❓ Questions & Help
## Details
As we know, when a batch of input_ids is padded, the padding ids should be masked out so that the model does not attend to them. But I found that the collate function of DataCollatorForLanguageModeling does not produce an attention_mask feature.
Is this expected? Or do you think the attention_mask is not necessary here?
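For illustration, the mask I have in mind could be derived from the padded batch like this (assuming the tokenizer defines a pad token; `batch` is the dict returned by the collator):

```python
# 1 for real tokens, 0 for padding positions
batch["attention_mask"] = (batch["input_ids"] != tokenizer.pad_token_id).long()
```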
| 06-01-2020 07:50:12 | 06-01-2020 07:50:12 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>@makailove123 Have you figured out why? |
transformers | 4,701 | closed | Why we need the init_weight function in BERT pretrained model | # ❓ Questions & Help
I have already tried asking the question is SO, which you can find the link [here](https://stackoverflow.com/questions/62040309/why-we-need-the-init-weight-function-in-bert-pretrained-model-in-huggingface-tra/62053791#62053791).
## Details
In the Hugging Face transformers code, many fine-tuning models have an `init_weights` call.
For example ([here](https://github.com/huggingface/transformers/blob/a9aa7456ac/src/transformers/modeling_bert.py#L1073-L1082)), `init_weights` is called at the end of the constructor. Even when we use `from_pretrained`, the constructor still runs and therefore still calls `init_weights`.
```python
class BertForSequenceClassification(BertPreTrainedModel):
def __init__(self, config):
super().__init__(config)
self.num_labels = config.num_labels
self.bert = BertModel(config)
self.dropout = nn.Dropout(config.hidden_dropout_prob)
self.classifier = nn.Linear(config.hidden_size, config.num_labels)
self.init_weights()
```
As I know, it will call the following [code](https://github.com/huggingface/transformers/blob/a9aa7456ac/src/transformers/modeling_bert.py#L520-L530)
```python
def _init_weights(self, module):
""" Initialize the weights """
if isinstance(module, (nn.Linear, nn.Embedding)):
# Slightly different from the TF version which uses truncated_normal for initialization
# cf https://github.com/pytorch/pytorch/pull/5617
module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
elif isinstance(module, BertLayerNorm):
module.bias.data.zero_()
module.weight.data.fill_(1.0)
if isinstance(module, nn.Linear) and module.bias is not None:
module.bias.data.zero_()
```
My question is **If we are loading the pre-trained language model, why do we need to initialize the weight for every module?**
I guess I must be misunderstanding something here.
| 06-01-2020 02:38:22 | 06-01-2020 02:38:22 | Have a look at the code for [`.from_pretrained()`](https://github.com/huggingface/transformers/blob/a9aa7456ac824c9027385b149f405e4f5649273f/src/transformers/modeling_utils.py#L490). What actually happens is something like this:
- find the correct base model class to initialise
- initialise that class with pseudo-random initialisation (by using the `_init_weights` function that you mention)
- find the file with the pretrained weights
- overwrite the weights of the model that we just created with the pretrained weightswhere applicable
This ensure that layers were not pretrained (e.g. in some cases the final classification layer) _do_ get initialised in `_init_weights` but don't get overridden.<|||||>Great. Thanks. I also read through the code and that really clears my confusion. <|||||>Good. If the answer was sufficient on Stack Overflow as well, please close that too. <|||||>
> Have a look at the code for [`.from_pretrained()`](https://github.com/huggingface/transformers/blob/a9aa7456ac824c9027385b149f405e4f5649273f/src/transformers/modeling_utils.py#L490). What actually happens is something like this:
>
> * find the correct base model class to initialise
> * initialise that class with pseudo-random initialisation (by using the `_init_weights` function that you mention)
> * find the file with the pretrained weights
> * overwrite the weights of the model that we just created with the pretrained weightswhere applicable
>
> This ensure that layers were not pretrained (e.g. in some cases the final classification layer) _do_ get initialised in `_init_weights` but don't get overridden.
when we construct BertForSequenceClassification from pre-trained model, didn't we overwrite the loaded weights with random initialisation?<|||||>@sunersheng No, the random initialization happens [first](https://github.com/huggingface/transformers/blob/a9aa7456ac824c9027385b149f405e4f5649273f/src/transformers/modeling_utils.py#L659) and then the existing weights are loaded [into it](https://github.com/huggingface/transformers/blob/a9aa7456ac824c9027385b149f405e4f5649273f/src/transformers/modeling_utils.py#L732). |